Greg Stark wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
Have you considered having the background writer check the pages it is
about to write to see if they can be added to the FSM, thereby reducing
the need for vacuum? Seems we would need to add a statistics parameter
so pg_autovacuum would know how many
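The suggestion above can be illustrated with a toy Python model (not PostgreSQL source): before the background writer flushes a dirty page, it inspects the page's free space and feeds the free-space map directly, instead of waiting for VACUUM to do it. All names here (`Page`, `bgwriter_flush`, `MIN_FREE`) are invented for illustration.

```python
MIN_FREE = 64  # bytes of free space worth advertising; arbitrary threshold

class Page:
    def __init__(self, blkno, free_bytes, dirty=True):
        self.blkno = blkno
        self.free_bytes = free_bytes
        self.dirty = dirty

def bgwriter_flush(pages, fsm):
    """Flush dirty pages; opportunistically record usable free space."""
    written = []
    for page in pages:
        if not page.dirty:
            continue
        # The extra step proposed on-list: since we are touching the page
        # anyway, register its free space in the FSM rather than relying
        # on a later VACUUM pass to discover it.
        if page.free_bytes >= MIN_FREE:
            fsm[page.blkno] = page.free_bytes
        written.append(page.blkno)   # stand-in for the actual block write
        page.dirty = False
    return written
```

A statistics counter of how many pages were registered this way is what the quoted message says pg_autovacuum would need.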
Bruce Momjian wrote:
Jan Wieck wrote:
If the system is write-bound, the checkpointer will find so many dirty
blocks that it has no time to nap and will burst them out as fast as
possible anyway. Well, at least that's the theory.
PostgreSQL with the non-overwriting storage concept can never have
hot-written
Bruce Momjian wrote:
Jan Wieck wrote:
Doing frequent fdatasync/fsync during a constant ongoing checkpoint
will significantly lower the physical write storm happening
at the sync(), which is causing huge problems right now.
I don't see that frankly because sync() is syncing
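The "write storm" argument can be reduced to simple arithmetic, sketched here as a toy model (no real I/O happens, and `burst_sizes` is an invented name): if the checkpointer dirties N buffers and relies on a single sync() at the end, the OS must push all N at once; fsync()ing after every K buffer writes caps each physical burst at K.

```python
def burst_sizes(n_dirty, sync_every=None):
    """Return the size of each physical write burst.

    sync_every=None models a single sync() at checkpoint end;
    sync_every=K models an fsync() after every K buffer writes.
    """
    if sync_every is None:
        return [n_dirty] if n_dirty else []
    bursts = []
    remaining = n_dirty
    while remaining > 0:
        b = min(sync_every, remaining)
        bursts.append(b)
        remaining -= b
    return bursts
```

Whether the smaller, more frequent bursts actually behave better than one large one is exactly what the two sides of this exchange disagree about.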
Jan Wieck [EMAIL PROTECTED] writes:
I am not really aiming at removing sync() alltogether.
...
Bruce Momjian [EMAIL PROTECTED] writes:
I am not really aiming at removing sync() altogether. We know already
that open,fsync,close does not guarantee you flush dirty OS-buffers for
which another process might so far only have done open,write. And you
So for what it's worth, though the
Jan Wieck wrote:
It also contains the starting work of the discussed background buffer
writer. Thus far, the BufferSync() done at a checkpoint only writes out
all dirty blocks in their LRU order and over a configurable time
(lazy_checkpoint_time in seconds). But that means at least, while
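The pacing described above can be sketched as a small Python model (illustrative only, not the patch itself): given the number of dirty blocks and the lazy_checkpoint_time budget, compute how long to nap between block writes; when too many blocks are dirty, the nap collapses to zero and the writer bursts them out as fast as possible, which is the write-bound case discussed in this thread. `nap_between_writes`, `write_cost`, and `MIN_USEFUL_NAP` are invented names and assumptions.

```python
MIN_USEFUL_NAP = 0.01  # seconds; below this, sleeping is not worthwhile

def nap_between_writes(n_dirty, lazy_checkpoint_time, write_cost=0.005):
    """Seconds to sleep after each block write, or 0.0 when write-bound.

    write_cost is an assumed average seconds-per-block-write; the budget
    left after the writes themselves is spread evenly across the naps.
    """
    if n_dirty == 0:
        return 0.0
    budget = lazy_checkpoint_time - n_dirty * write_cost
    nap = budget / n_dirty
    return nap if nap >= MIN_USEFUL_NAP else 0.0
```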
Tom Lane wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
Have you considered having the background writer check the pages it is
about to write to see if they can be added to the FSM, thereby reducing
the need for vacuum?
The 7.4 rewrite of FSM depends on the assumption that all the free space
in a given
My plan is to create another background process very similar to
the checkpointer and to let that run forever, basically looping over that
BufferSync() with a bool telling it that it's the bg_writer.
Why not use the checkpointer itself in between checkpoints?
Use a min and a max dirty setting
Why not use the checkpointer itself in between checkpoints?
Use a min and a max dirty setting like Informix: start writing
when more than max are dirty and stop when at min. This avoids writing
single pages (which is slow, since they cannot be grouped together
by the OS).
Current approach
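The Informix-style hysteresis Andreas describes can be sketched as a toy Python model (all names invented, no real buffers involved): the writer stays idle until the dirty count exceeds a high-water mark, then writes in batches until it falls to the low-water mark, so pages go out in groups the OS can coalesce rather than one at a time.

```python
def bgwriter_pass(dirty, min_dirty, max_dirty, batch=16):
    """One activation of the writer.

    dirty is a mutable list of dirty buffer ids; returns the batches of
    ids "written" (popped), batch by batch, or [] if below max_dirty.
    """
    if len(dirty) <= max_dirty:
        return []              # below the high-water mark: do nothing
    batches = []
    while len(dirty) > min_dirty:
        n = min(batch, len(dirty) - min_dirty)
        batches.append([dirty.pop() for _ in range(n)])
    return batches
```

The two thresholds are the "min and a max dirty setting" from the quote; the batch size stands in for whatever grouping the OS elevator gets to exploit.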
Jan Wieck wrote:
Jan Wieck wrote:
I will follow up shortly with an approach that integrates Tom's delay
mechanism plus my first READ_BY_VACUUM hack into one combined experiment.
Okay,
the attached patch contains the 3 already discussed and one additional
change.
Ooopsy
the B1/B2 queue
Attached is a first trial implementation of the Adaptive Replacement
Cache (ARC). The patch is against CVS HEAD of 7.4.
The algorithm is based on what's described in these papers:
http://www.almaden.ibm.com/StorageSystems/autonomic_storage/ARC/arcfast.pdf
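For orientation, here is a very compact Python sketch of the ARC bookkeeping from the linked paper, not Jan's patch: T1 holds pages seen once recently, T2 pages seen at least twice, and B1/B2 are "ghost" lists remembering only the keys of recently evicted pages. A ghost hit in B1 grows the target size p of T1 (favoring recency), a hit in B2 shrinks it (favoring frequency). This is a simplified illustration and omits several corner cases of the full algorithm.

```python
from collections import OrderedDict

class ARC:
    """Toy sketch of the Adaptive Replacement Cache policy."""
    def __init__(self, c):
        self.c, self.p = c, 0
        self.t1, self.t2 = OrderedDict(), OrderedDict()
        self.b1, self.b2 = OrderedDict(), OrderedDict()

    def _replace(self, in_b2):
        # Evict the LRU of T1 or T2, remembering the key in a ghost list.
        if self.t1 and (len(self.t1) > self.p or (in_b2 and len(self.t1) == self.p)):
            k, _ = self.t1.popitem(last=False)
            self.b1[k] = None
        else:
            k, _ = self.t2.popitem(last=False)
            self.b2[k] = None

    def access(self, key):
        if key in self.t1 or key in self.t2:     # real cache hit
            self.t1.pop(key, None)
            self.t2.pop(key, None)
            self.t2[key] = None                  # promote to MRU of T2
            return True
        if key in self.b1:                       # ghost hit: favor recency
            self.p = min(self.c, self.p + max(1, len(self.b2) // max(1, len(self.b1))))
            del self.b1[key]
            self._replace(False)
            self.t2[key] = None
            return False
        if key in self.b2:                       # ghost hit: favor frequency
            self.p = max(0, self.p - max(1, len(self.b1) // max(1, len(self.b2))))
            del self.b2[key]
            self._replace(True)
            self.t2[key] = None
            return False
        # complete miss: make room, then insert at MRU of T1
        if len(self.t1) + len(self.b1) == self.c:
            if len(self.t1) < self.c:
                self.b1.popitem(last=False)
                self._replace(False)
            else:
                self.t1.popitem(last=False)
        elif len(self.t1) + len(self.t2) + len(self.b1) + len(self.b2) >= self.c:
            if len(self.t1) + len(self.t2) + len(self.b1) + len(self.b2) >= 2 * self.c:
                self.b2.popitem(last=False)
            if len(self.t1) + len(self.t2) >= self.c:
                self._replace(False)
        self.t1[key] = None
        return False
```

The B1/B2 queues mentioned later in the thread are exactly these ghost lists.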
Jan Wieck wrote:
I will follow up shortly with an approach that integrates Tom's delay
mechanism plus my first READ_BY_VACUUM hack into one combined experiment.
Okay,
the attached patch contains the 3 already discussed and one additional
change. I also made a few changes.
1) ARC policy. Has