Shridhar Daithankar wrote:
On Tuesday 11 November 2003 00:50, Neil Conway wrote:
Jan Wieck [EMAIL PROTECTED] writes:
We can't resize shared memory because we allocate the whole thing in
one big hump - which causes the shmmax problem BTW. If we allocate
that in chunks of multiple blocks, we only
On Tuesday 11 November 2003 18:55, Jan Wieck wrote:
Shridhar Daithankar wrote:
On Tuesday 11 November 2003 00:50, Neil Conway wrote:
Jan Wieck [EMAIL PROTECTED] writes:
We can't resize shared memory because we allocate the whole thing in
one big hump - which causes the shmmax problem
Shridhar Daithankar wrote:
On Tuesday 11 November 2003 18:55, Jan Wieck wrote:
And how does a newly mmap'ed segment propagate into a running backend?
It wouldn't. Just like we allocate a fixed amount of shared memory at startup
now, we would do the same for mmap'ed segments. Allocate maximum
Shridhar Daithankar [EMAIL PROTECTED] writes:
If the parent postmaster mmaps anonymous memory segments and shares them with
children, PostgreSQL wouldn't be dependent upon any kernel resource aka
shared memory anymore.
Anonymous memory mappings aren't shared, at least not unless you're
Greg Stark wrote:
Shridhar Daithankar [EMAIL PROTECTED] writes:
If the parent postmaster mmaps anonymous memory segments and shares them with
children, PostgreSQL wouldn't be dependent upon any kernel resource aka
shared memory anymore.
Anonymous memory mappings aren't shared, at least not
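As a rough sketch of the mmap idea (not code from any of these mails; the size and names are invented): a region created with MAP_SHARED | MAP_ANONYMOUS before fork() is shared between a postmaster-style parent and every child it forks, without touching the SysV shared memory limits, though unrelated processes cannot attach to it.

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int
    main(void)
    {
        size_t  segsize = 8 * 1024 * 1024;      /* illustrative size only */
        char   *seg;

        /* Anonymous mapping: no SysV segment, so no shmmax involved.
         * MAP_SHARED makes the pages visible to later fork() children
         * (the flag is spelled MAP_ANON on some BSDs). */
        seg = mmap(NULL, segsize, PROT_READ | PROT_WRITE,
                   MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (seg == MAP_FAILED)
        {
            perror("mmap");
            return 1;
        }

        if (fork() == 0)
        {
            /* child and parent share the same physical pages */
            strcpy(seg, "written by the child");
            _exit(0);
        }

        wait(NULL);
        printf("parent sees: %s\n", seg);
        return 0;
    }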
Bruce Momjian wrote:
I would be interested to know if you have the background write process
writing old dirty buffers to kernel buffers continually if the sync()
load is diminished. What this does is to push more dirty buffers into
the kernel cache in hopes the OS will write those buffers on its
Bruce Momjian wrote:
Now, O_SYNC is going to force every write to the disk. If we have a
transaction that has to write lots of buffers (has to write them to
reuse the shared buffer)
So make the background writer/checkpointer keep the LRU head clean. I have
explained that 3 times now.
Jan
--
Jan Wieck wrote:
Bruce Momjian wrote:
Now, O_SYNC is going to force every write to the disk. If we have a
transaction that has to write lots of buffers (has to write them to
reuse the shared buffer)
So make the background writer/checkpointer keep the LRU head clean. I have
explained
Bruce Momjian [EMAIL PROTECTED] writes:
Now, if we are sure that writes will happen only in the checkpoint
process, O_SYNC would be OK, I guess, but will we ever be sure of that?
This is a performance issue, not a correctness issue. It's okay for
backends to wait for writes as long as it
Jan Wieck wrote:
Bruce Momjian wrote:
I would be interested to know if you have the background write process
writing old dirty buffers to kernel buffers continually if the sync()
load is diminished. What this does is to push more dirty buffers into
the kernel cache in hopes the OS will
What bothers me a little is that you keep telling us that you have all
that great code from SRA. Do you have any idea when they intend to share
this with us and contribute the stuff? I mean at least some pieces
maybe? You personally got all the code from NuSphere AKA PeerDirect even
weeks
Bruce Momjian wrote:
Tom Lane wrote:
Andrew Sullivan [EMAIL PROTECTED] writes:
On Sun, Nov 02, 2003 at 01:00:35PM -0500, Tom Lane wrote:
real traction we'd have to go back to the "take over most of RAM for
shared buffers" approach, which we already know to have a bunch of
severe disadvantages.
Bruce Momjian wrote:
Jan Wieck wrote:
Bruce Momjian wrote:
Now, O_SYNC is going to force every write to the disk. If we have a
transaction that has to write lots of buffers (has to write them to
reuse the shared buffer)
So make the background writer/checkpointer keep the LRU head clean. I
Bruce Momjian wrote:
Jan Wieck wrote:
Bruce Momjian wrote:
I would be interested to know if you have the background write process
writing old dirty buffers to kernel buffers continually if the sync()
load is diminished. What this does is to push more dirty buffers into
the kernel cache in
Tom Lane wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
Now, if we are sure that writes will happen only in the checkpoint
process, O_SYNC would be OK, I guess, but will we ever be sure of that?
This is a performance issue, not a correctness issue. It's okay for
backends to wait for writes as
that works well enough to make it uncommon for backends to have to
write dirty buffers for themselves. If we can, then doing all the
writes O_SYNC would not be a problem.
One problem with O_SYNC would be that the OS does not group writes any
more. So the code would need to either do its
Zeugswetter Andreas SB SD [EMAIL PROTECTED] writes:
One problem with O_SYNC would be that the OS does not group writes any
more. So the code would need to either do its own sorting and grouping
(256k) or use aio, or you won't be able to get the maximum out of the disks.
Or just run
Bruce Momjian [EMAIL PROTECTED] writes:
Now, the disadvantages of large kernel cache, small PostgreSQL buffer
cache is that data has to be transferred to/from the kernel buffers, and
second, we can't control the kernel's cache replacement strategy, and
will probably not be able to in the near
One problem with O_SYNC would be that the OS does not group writes any
more. So the code would need to either do its own sorting and grouping
(256k) or use aio, or you won't be able to get the maximum out of the disks.
Or just run multiple writer processes, which I believe is
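To make the sorting-and-grouping point concrete, a sketch (invented structures and names, not PostgreSQL code): with the data files opened O_SYNC, the writer sorts its dirty blocks by file and block number and issues each run of adjacent blocks as one vectored write of up to 256k, roughly what the kernel would otherwise have batched on its own.

    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/uio.h>
    #include <unistd.h>

    #define BLCKSZ   8192
    #define MAX_RUN  (256 * 1024 / BLCKSZ)      /* 256k per physical write */

    /* Invented dirty-buffer descriptor; fd assumed opened with O_SYNC. */
    typedef struct
    {
        int     fd;
        long    blockno;
        char   *data;                           /* BLCKSZ bytes of page image */
    } DirtyBuf;

    static int
    cmp_dirty(const void *a, const void *b)
    {
        const DirtyBuf *x = a, *y = b;

        if (x->fd != y->fd)
            return (x->fd > y->fd) - (x->fd < y->fd);
        return (x->blockno > y->blockno) - (x->blockno < y->blockno);
    }

    static void
    write_sorted(DirtyBuf *bufs, int n)
    {
        qsort(bufs, n, sizeof(DirtyBuf), cmp_dirty);

        for (int i = 0; i < n; )
        {
            struct iovec iov[MAX_RUN];
            int          run = 1;

            /* extend the run while blocks are adjacent in the same file */
            while (i + run < n && run < MAX_RUN &&
                   bufs[i + run].fd == bufs[i].fd &&
                   bufs[i + run].blockno == bufs[i + run - 1].blockno + 1)
                run++;

            for (int k = 0; k < run; k++)
            {
                iov[k].iov_base = bufs[i + k].data;
                iov[k].iov_len = BLCKSZ;
            }

            /* one synchronous write covering the whole run */
            lseek(bufs[i].fd, (off_t) bufs[i].blockno * BLCKSZ, SEEK_SET);
            writev(bufs[i].fd, iov, run);

            i += run;
        }
    }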
--On Monday, November 10, 2003 11:40:45 -0500 Neil Conway
[EMAIL PROTECTED] wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
Now, the disadvantages of large kernel cache, small PostgreSQL buffer
cache is that data has to be transferred to/from the kernel buffers, and
second, we can't control the
Zeugswetter Andreas SB SD wrote:
One problem with O_SYNC would be that the OS does not group writes any
more. So the code would need to either do its own sorting and grouping
(256k) or use aio, or you won't be able to get the maximum out of the disks.
Or just run multiple writer
Larry Rosenman [EMAIL PROTECTED] writes:
You might also look at Veritas' advisory stuff.
Thanks for the suggestion -- it looks like we can make use of
this. For the curious, the cache advisory API is documented here:
http://www.lerctr.org:8458/en/man/html.7/vxfsio.7.html
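That URL describes the VxFS-specific ioctl interface, which I won't guess at here. On other platforms, posix_fadvise() gives a weaker but comparable kind of cache advisory, roughly like this sketch (illustrative only, and not implying PostgreSQL uses it):

    #include <fcntl.h>

    /*
     * Hint the kernel about how we will use a data file.  POSIX_FADV_RANDOM
     * turns off readahead; POSIX_FADV_DONTNEED asks the kernel to drop the
     * cached pages, e.g. after a large sequential scan we will not repeat.
     */
    static void
    advise_datafile(int fd, int drop_cache)
    {
        (void) posix_fadvise(fd, 0, 0, POSIX_FADV_RANDOM);

        if (drop_cache)
            (void) posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
    }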
Jan Wieck wrote:
What bothers me a little is that you keep telling us that you have all
that great code from SRA. Do you have any idea when they intend to share
this with us and contribute the stuff? I mean at least some pieces
maybe? You personally got all the code from NuSphere AKA
On Sun, Nov 09, 2003 at 08:54:25PM -0800, Joe Conway wrote:
two servers, mounted to the same data volume, and some kind of
coordination between the writer processes. Anyone know if this is
similar to how Oracle handles RAC?
It is similar, yes, but there's some mighty powerful magic in that
Tom Lane wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
Now, if we are sure that writes will happen only in the checkpoint
process, O_SYNC would be OK, I guess, but will we ever be sure of that?
This is a performance issue, not a correctness issue. It's okay for
backends to wait for
Jan Wieck wrote:
Bruce Momjian wrote:
Tom Lane wrote:
Andrew Sullivan [EMAIL PROTECTED] writes:
On Sun, Nov 02, 2003 at 01:00:35PM -0500, Tom Lane wrote:
real traction we'd have to go back to the "take over most of RAM for
shared buffers" approach, which we already know to have a
Jan Wieck wrote:
If the background cleaner has to not just write() but write/fsync or
write/O_SYNC, it isn't going to be able to clean them fast enough. It
creates a bottleneck where we didn't have one before.
We are trying to eliminate an I/O storm during checkpoint, but the
Jan Wieck wrote:
Bruce Momjian wrote:
Jan Wieck wrote:
Bruce Momjian wrote:
Now, O_SYNC is going to force every write to the disk. If we have a
transaction that has to write lots of buffers (has to write them to
reuse the shared buffer)
So make the background
Jan Wieck wrote:
Bruce Momjian wrote:
Jan Wieck wrote:
Bruce Momjian wrote:
I would be interested to know if you have the background write process
writing old dirty buffers to kernel buffers continually if the sync()
load is diminished. What this does is to push more dirty buffers
Tom Lane wrote:
Zeugswetter Andreas SB SD [EMAIL PROTECTED] writes:
One problem with O_SYNC would be that the OS does not group writes any
more. So the code would need to either do its own sorting and grouping
(256k) or use aio, or you won't be able to get the maximum out of the disks.
--On Monday, November 10, 2003 13:40:24 -0500 Neil Conway
[EMAIL PROTECTED] wrote:
Larry Rosenman [EMAIL PROTECTED] writes:
You might also look at Veritas' advisory stuff.
Thanks for the suggestion -- it looks like we can make use of
this. For the curious, the cache advisory API is documented
Bruce Momjian [EMAIL PROTECTED] writes:
Another idea --- if fsync() is slow because it can't find the dirty
buffers, use write() to write the buffers, copy the buffer to local
memory, mark it as clean, then open the file with O_SYNC and write
it again.
Yuck.
Do we have any idea how many
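Spelled out, the double-write idea reads roughly like this (a sketch with invented names, not an actual patch): the ordinary write() frees the shared buffer for reuse immediately, and the later synchronous rewrite of a private copy is what actually guarantees the page is on disk.

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    #define BLCKSZ 8192

    static int
    flush_buffer_twice(const char *path, long blockno, const char *page)
    {
        char    local[BLCKSZ];
        off_t   offset = (off_t) blockno * BLCKSZ;
        int     fd;

        /* 1. ordinary write(): fast, lands in the kernel cache only */
        if ((fd = open(path, O_WRONLY)) < 0)
            return -1;
        pwrite(fd, page, BLCKSZ, offset);
        close(fd);

        /* 2. copy the page aside; the shared buffer could now be marked
         *    clean and handed back for reuse (bookkeeping not shown) */
        memcpy(local, page, BLCKSZ);

        /* 3. rewrite the private copy through O_SYNC; this write does not
         *    return until the data has reached the disk */
        if ((fd = open(path, O_WRONLY | O_SYNC)) < 0)
            return -1;
        pwrite(fd, local, BLCKSZ, offset);
        close(fd);
        return 0;
    }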
Jan Wieck wrote:
Zeugswetter Andreas SB SD wrote:
One problem with O_SYNC would be that the OS does not group writes any
more. So the code would need to either do its own sorting and grouping
(256k) or use aio, or you won't be able to get the maximum out of the disks.
Or just
Jan Wieck [EMAIL PROTECTED] writes:
We can't resize shared memory because we allocate the whole thing in
one big hump - which causes the shmmax problem BTW. If we allocate
that in chunks of multiple blocks, we only have to give it a total
maximum size to get the hash tables and other stuff
Neil Conway wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
Another idea --- if fsync() is slow because it can't find the dirty
buffers, use write() to write the buffers, copy the buffer to local
memory, mark it as clean, then open the file with O_SYNC and write
it again.
Yuck.
Do we
Bruce Momjian wrote:
Jan Wieck wrote:
Bruce Momjian wrote:
Jan Wieck wrote:
Bruce Momjian wrote:
Now, O_SYNC is going to force every write to the disk. If we have a
transaction that has to write lots of buffers (has to write them to
reuse the shared buffer)
So make the background
Jan Wieck wrote:
If the background cleaner has to not just write() but write/fsync or
write/O_SYNC, it isn't going to be able to clean them fast enough. It
creates a bottleneck where we didn't have one before.
We are trying to eliminate an I/O storm during checkpoint, but the
Neil Conway wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
Another idea --- if fsync() is slow because it can't find the dirty
buffers, use write() to write the buffers, copy the buffer to local
memory, mark it as clean, then open the file with O_SYNC and write
it again.
Yuck.
This
Bruce Momjian wrote:
Jan Wieck wrote:
Zeugswetter Andreas SB SD wrote:
One problem with O_SYNC would be that the OS does not group writes any
more. So the code would need to either do its own sorting and grouping
(256k) or use aio, or you won't be able to get the maximum out of the
Bruce Momjian wrote:
Jan Wieck wrote:
If the background cleaner has to not just write() but write/fsync or
write/O_SYNC, it isn't going to be able to clean them fast enough. It
creates a bottleneck where we didn't have one before.
We are trying to eliminate an I/O storm during
Jan Wieck wrote:
Bruce Momjian wrote:
Jan Wieck wrote:
If the background cleaner has to not just write() but write/fsync or
write/O_SYNC, it isn't going to be able to clean them fast enough. It
creates a bottleneck where we didn't have one before.
We are trying to
Jan Wieck wrote:
Bruce Momjian wrote:
Jan Wieck wrote:
Zeugswetter Andreas SB SD wrote:
One problem with O_SYNC would be that the OS does not group writes any
more. So the code would need to either do its own sorting and grouping
(256k) or use aio, or you won't be able to
Andrew Sullivan wrote:
On Sun, Nov 09, 2003 at 08:54:25PM -0800, Joe Conway wrote:
two servers, mounted to the same data volume, and some kind of
coordination between the writer processes. Anyone know if this is
similar to how Oracle handles RAC?
It is similar, yes, but there's some mighty
Jan Wieck wrote:
What bothers me a little is that you keep telling us that you have all
that great code from SRA. Do you have any idea when they intend to share
this with us and contribute the stuff? I mean at least some pieces
maybe? You personally got all the code from NuSphere AKA
On Tuesday 11 November 2003 00:50, Neil Conway wrote:
Jan Wieck [EMAIL PROTECTED] writes:
We can't resize shared memory because we allocate the whole thing in
one big hump - which causes the shmmax problem BTW. If we allocate
that in chunks of multiple blocks, we only have to give it a
Manfred Spraul [EMAIL PROTECTED] writes:
Greg Stark wrote:
I'm assuming fsync syncs writes issued by other processes on the same file,
which isn't necessarily true though.
It was already pointed out that we can't rely on that assumption.
So the NetBSD and Sun developers I checked
Greg Stark [EMAIL PROTECTED] writes:
Tom Lane [EMAIL PROTECTED] writes:
You want to find, open, and fsync() every file in the database cluster
for every checkpoint? Sounds like a non-starter to me.
Except a) this is outside any critical path, and b) only done every few
minutes and c) the
Greg Stark wrote:
I'm assuming fsync syncs writes issued by other processes on the same file,
which isn't necessarily true though.
It was already pointed out that we can't rely on that assumption.
So the NetBSD and Sun developers I checked with both asserted fsync does in
fact
The delay patch worked so well, I couldn't resist asking if a similar patch
could be added for the COPY command (pg_dump). It's just an extension of the
same idea. On a large DB, backups can take a very long time while consuming a
lot of IO, slowing down other select and write operations. We operate on a
Tom Lane wrote:
Jan Wieck [EMAIL PROTECTED] writes:
How I see the background writer operating is that it keeps the
buffers in the order of the LRU chain(s) clean, because those are the
buffers that will most likely get replaced soon. In my experimental ARC code
it would traverse the T1 and
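A bare sketch of that cleaning loop (invented structures and names, not the experimental ARC code): walk a bounded number of buffers from the LRU end of a list and write out the dirty ones, so whatever a backend replaces next is already clean.

    #include <unistd.h>

    #define BLCKSZ 8192

    /* Invented buffer header, linked from the LRU end towards the MRU end. */
    typedef struct BufDesc
    {
        struct BufDesc *next;
        int             dirty;
        int             fd;
        long            blockno;
        char           *page;
    } BufDesc;

    /*
     * Clean up to maxpages buffers starting at the LRU end of one chain;
     * for ARC this would be run over both the T1 and T2 lists.
     */
    static int
    clean_lru_head(BufDesc *lru, int maxpages)
    {
        int     written = 0;

        for (BufDesc *buf = lru; buf != NULL && maxpages-- > 0; buf = buf->next)
        {
            if (!buf->dirty)
                continue;
            pwrite(buf->fd, buf->page, BLCKSZ, (off_t) buf->blockno * BLCKSZ);
            buf->dirty = 0;
            written++;
        }
        return written;
    }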
Tom Lane wrote:
Greg Stark [EMAIL PROTECTED] writes:
Tom Lane [EMAIL PROTECTED] writes:
You want to find, open, and fsync() every file in the database cluster
for every checkpoint? Sounds like a non-starter to me.
Except a) this is outside any critical path, and b) only done every few
Tom Lane wrote:
Jan Wieck [EMAIL PROTECTED] writes:
What still needs to be addressed is the IO storm caused by checkpoints. I
see it much relaxed when stretching out the BufferSync() over most of
the time until the next one should occur. But the kernel sync at its
end still pushes the
scott.marlowe wrote:
On Tue, 4 Nov 2003, Tom Lane wrote:
Jan Wieck [EMAIL PROTECTED] writes:
What still needs to be addressed is the IO storm caused by checkpoints. I
see it much relaxed when stretching out the BufferSync() over most of
the time until the next one should occur. But
I would be interested to know if you have the background write process
writing old dirty buffers to kernel buffers continually if the sync()
load is diminished. What this does is to push more dirty buffers into
the kernel cache in hopes the OS will write those buffers on its own
before the
Tom Lane wrote:
Jan Wieck [EMAIL PROTECTED] writes:
That is part of the idea. The whole idea is to issue physical writes
at a fairly steady rate without increasing the number of them
substantially or interfering with the drives' opinion about their order too
much. I think O_SYNC for
Tom Lane wrote:
Andrew Sullivan [EMAIL PROTECTED] writes:
On Sun, Nov 02, 2003 at 01:00:35PM -0500, Tom Lane wrote:
real traction we'd have to go back to the "take over most of RAM for
shared buffers" approach, which we already know to have a bunch of
severe disadvantages.
I know there
Bruce Momjian wrote:
Having another process do the writing does allow some parallelism, but
people don't seem to care about buffers having to be read in from the
kernel buffer cache, so what big benefit do we get by having someone
else write into the kernel buffer cache, except allowing a central
Bruce Momjian wrote:
Agreed, we can't resize shared memory, but I don't think most OSes swap
out shared memory, and even if they do, they usually have a kernel
configuration parameter to lock it into kernel memory. All the old
unixes locked the shared memory into kernel address space and in fact
The only idea I have come up with is to move all buffer write operations
into a background writer process, which could easily keep track of
every file it's written into since the last checkpoint.
I fear this approach. It seems to limit a lot of design flexibility later. But
I can't
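The bookkeeping that makes the single-writer idea attractive is small; a sketch with invented names (not the win32 port's code): remember every descriptor the writer has dirtied since the last checkpoint and fsync() exactly that set at checkpoint time, instead of calling sync().

    #include <unistd.h>

    /* Illustrative: a tiny "pending fsync" table kept by the lone writer. */
    #define MAX_PENDING 1024

    static int  pending_fds[MAX_PENDING];
    static int  npending = 0;

    /* Call after every write() the background writer issues. */
    static void
    remember_dirty_file(int fd)
    {
        for (int i = 0; i < npending; i++)
            if (pending_fds[i] == fd)
                return;                     /* already recorded */
        if (npending < MAX_PENDING)
            pending_fds[npending++] = fd;
    }

    /* Call at checkpoint: flush only the files actually written. */
    static void
    fsync_pending_files(void)
    {
        for (int i = 0; i < npending; i++)
            fsync(pending_fds[i]);
        npending = 0;
    }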
Or... It seems to me that we have been observing something on the order
of 10x-20x slowdown for vacuuming a table. I think this is WAY
overcompensating for the original problems, and would cause its own
problem as mentioned above. Since the granularity of delay seems to be
the
Zeugswetter Andreas SB SD wrote:
Or... It seems to me that we have been observing something on the order
of 10x-20x slowdown for vacuuming a table. I think this is WAY
overcompensating for the original problems, and would cause its own
problem as mentioned above. Since the granularity of
Jan Wieck [EMAIL PROTECTED] writes:
What still needs to be addressed is the IO storm caused by checkpoints. I
see it much relaxed when stretching out the BufferSync() over most of
the time until the next one should occur. But the kernel sync at its
end still pushes the system hard against
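The stretching-out itself is not much code; a sketch with invented names, not the experimental patch: write the dirty buffers in small bursts with a pause after each, sized so the whole pass takes most of the interval until the next checkpoint is due.

    #include <unistd.h>

    /* Placeholder for the real per-buffer step inside BufferSync(). */
    extern void write_one_dirty_buffer(int bufno);

    static void
    spread_buffer_sync(int nbuffers, int interval_sec)
    {
        int     batch = 32;                     /* pages written per burst */
        int     nbatches = (nbuffers + batch - 1) / batch;
        long    pause_usec = nbatches > 0
                           ? (long) interval_sec * 1000000L / nbatches
                           : 0;

        for (int done = 0; done < nbuffers; )
        {
            for (int i = 0; i < batch && done < nbuffers; i++, done++)
                write_one_dirty_buffer(done);

            usleep((useconds_t) pause_usec);
        }
    }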
Tom Lane wrote:
Jan Wieck [EMAIL PROTECTED] writes:
What still needs to be addressed is the IO storm caused by checkpoints. I
see it much relaxed when stretching out the BufferSync() over most of
the time until the next one should occur. But the kernel sync at its
end still pushes the
Jan Wieck [EMAIL PROTECTED] writes:
Tom Lane wrote:
I have never been happy with the fact that we use sync(2) at all.
Sure, it does too much. But together with the other layer of
indirection, the virtual file descriptor pool, what is the exact
guaranteed behaviour of
write();
Ang Chin Han wrote:
Christopher Browne wrote:
Centuries ago, Nostradamus foresaw when Stephen
[EMAIL PROTECTED] would write:
As it turns out. With vacuum_page_delay = 0, VACUUM took 1m20s (80s)
to complete, with vacuum_page_delay = 1 and vacuum_page_delay = 10,
both VACUUMs completed in 18m3s
Andrew Dunstan [EMAIL PROTECTED] writes:
Actually, once you build it this way, you could make all writes
synchronous (open the files O_SYNC) so that there is never any need for
explicit fsync at checkpoint time.
Or maybe fdatasync() would be slightly more efficient - do we care about
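The difference being weighed there, roughly (a sketch with illustrative names): O_SYNC makes every individual write wait for the disk, while write() plus fdatasync() lets writes be queued first and forces only the data (not the file's timestamps) out afterwards.

    #include <fcntl.h>
    #include <unistd.h>

    /* Variant 1: every write is synchronous by itself. */
    static void
    write_osync(const char *path, const char *buf, size_t len, off_t off)
    {
        int fd = open(path, O_WRONLY | O_SYNC);
        pwrite(fd, buf, len, off);          /* returns only when on disk */
        close(fd);
    }

    /* Variant 2: queue the write, force it (data only) afterwards. */
    static void
    write_fdatasync(const char *path, const char *buf, size_t len, off_t off)
    {
        int fd = open(path, O_WRONLY);
        pwrite(fd, buf, len, off);          /* may sit in the kernel cache */
        fdatasync(fd);                      /* flushes data, can skip mtime */
        close(fd);
    }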
Tom Lane wrote:
Jan Wieck [EMAIL PROTECTED] writes:
What still needs to be addressed is the IO storm caused by checkpoints. I
see it much relaxed when stretching out the BufferSync() over most of
the time until the next one should occur. But the kernel sync at its
end still pushes the system
Jan Wieck [EMAIL PROTECTED] writes:
vacuum_page_per_delay = 2
vacuum_time_per_delay = 10
That's exactly what I did ... look at the combined experiment posted under
the subject "Experimental ARC implementation". The two parameters are named
vacuum_page_groupsize and vacuum_page_delay.
FWIW
Tom Lane wrote:
Jan Wieck [EMAIL PROTECTED] writes:
Tom Lane wrote:
I have never been happy with the fact that we use sync(2) at all.
Sure, it does too much. But together with the other layer of
indirection, the virtual file descriptor pool, what is the exact
guaranteed behaviour of
Tom Lane [EMAIL PROTECTED] writes:
I would like to see us go over to fsync, or some other technique that
gives more certainty about when the write has occurred. There might be
some scope that way to allow stretching out the I/O, too.
The main problem with this is knowing which files need
On Tue, 4 Nov 2003, Tom Lane wrote:
Jan Wieck [EMAIL PROTECTED] writes:
What still needs to be addressed is the IO storm caused by checkpoints. I
see it much relaxed when stretching out the BufferSync() over most of
the time until the next one should occur. But the kernel sync at its
Greg Stark [EMAIL PROTECTED] writes:
Tom Lane [EMAIL PROTECTED] writes:
The main problem with this is knowing which files need to be fsync'd.
Why could the postmaster not just fsync *every* file?
You want to find, open, and fsync() every file in the database cluster
for every checkpoint?
I don't mind the long delay as long as we have a choice, as we clearly do in
this case, to set vacuum_page_delay=WHATEVER. Of course, if VACUUM can be
improved with better code placement for the delays or buffer replacement
policies then I'm all for it. Right now, I'm pretty satisfied with the
Jan Wieck [EMAIL PROTECTED] writes:
That is part of the idea. The whole idea is to issue physical writes
at a fairly steady rate without increasing the number of them
substantially or interfering with the drives' opinion about their order too
much. I think O_SYNC for random access can be in
scott.marlowe wrote:
On Tue, 4 Nov 2003, Tom Lane wrote:
The main problem with this is knowing which files need to be fsync'd.
Wasn't this a problem that the win32 port had to solve by keeping a list
of all files that need fsyncing since Windows doesn't do sync() in the
classical sense?
Tom Lane [EMAIL PROTECTED] writes:
Greg Stark [EMAIL PROTECTED] writes:
Tom Lane [EMAIL PROTECTED] writes:
The main problem with this is knowing which files need to be fsync'd.
Why could the postmaster not just fsync *every* file?
You want to find, open, and fsync() every file in the
The world rejoiced as [EMAIL PROTECTED] (Hannu Krosing) wrote:
Christopher Browne wrote on Mon, 03.11.2003 at 02:15:
Well, actually, the case where it _would_ be troublesome would be
where there was a combination of huge tables needing vacuuming and
smaller ones that are _heavily_ updated
On Sun, Nov 02, 2003 at 01:00:35PM -0500, Tom Lane wrote:
real traction we'd have to go back to the "take over most of RAM for
shared buffers" approach, which we already know to have a bunch of
severe disadvantages.
I know there are severe disadvantages in the current implementation,
but are
Christopher Browne wrote:
The world rejoiced as [EMAIL PROTECTED] (Hannu Krosing) wrote:
Christopher Browne wrote on Mon, 03.11.2003 at 02:15:
Well, actually, the case where it _would_ be troublesome would be
where there was a combination of huge tables needing vacuuming and
smaller ones
Andrew Sullivan [EMAIL PROTECTED] writes:
On Sun, Nov 02, 2003 at 01:00:35PM -0500, Tom Lane wrote:
real traction we'd have to go back to the "take over most of RAM for
shared buffers" approach, which we already know to have a bunch of
severe disadvantages.
I know there are severe
Christopher Browne wrote on Mon, 03.11.2003 at 15:22:
Can't one just run a _separate_ VACUUM on those smaller tables?
Yes, but that defeats the purpose of having a daemon that tries to
manage this all for you.
If a dumb daemon can't do its work well, we need smarter daemons ;)
Christopher Browne wrote:
The world rejoiced as [EMAIL PROTECTED] (Hannu Krosing) wrote:
Christopher Browne wrote on Mon, 03.11.2003 at 02:15:
Well, actually, the case where it _would_ be troublesome would be
where there was a combination of huge tables needing vacuuming and
smaller ones that are
Christopher Browne wrote:
Centuries ago, Nostradamus foresaw when Stephen [EMAIL PROTECTED] would write:
As it turns out. With vacuum_page_delay = 0, VACUUM took 1m20s (80s)
to complete, with vacuum_page_delay = 1 and vacuum_page_delay = 10,
both VACUUMs completed in 18m3s (1080 sec). A factor of
Ang Chin Han wrote:
Christopher Browne wrote:
Centuries ago, Nostradamus foresaw when Stephen [EMAIL PROTECTED] would write:
As it turns out. With vacuum_page_delay = 0, VACUUM took 1m20s (80s)
to complete, with vacuum_page_delay = 1 and vacuum_page_delay = 10,
both VACUUMs completed in 18m3s
Jan Wieck [EMAIL PROTECTED] writes:
I am currently looking at implementing ARC as a replacement strategy. I
don't have anything that works yet, so I can't really tell what the
result would be and it might turn out that we want both features.
It's likely that we would. As someone (you?)
As it turns out. With vacuum_page_delay = 0, VACUUM took 1m20s (80s) to
complete, with vacuum_page_delay = 1 and vacuum_page_delay = 10, both
VACUUMs completed in 18m3s (1080 sec). A factor of 13 times! This is for a
single 350 MB table.
Apparently, it looks like the upcoming Linux kernel 2.6
Not surprising, I should have thought. Why would you care that much?
The idea as I understand it is to improve the responsiveness of things
happening alongside vacuum (real work). I normally run vacuum when I
don't expect anything else much to be happening - but I don't care how
long it takes
Tom Lane wrote on Sun, 02.11.2003 at 20:00:
Jan Wieck [EMAIL PROTECTED] writes:
I am currently looking at implementing ARC as a replacement strategy. I
don't have anything that works yet, so I can't really tell what the
result would be and it might turn out that we want both features.
Centuries ago, Nostradamus foresaw when Stephen [EMAIL PROTECTED] would write:
As it turns out. With vacuum_page_delay = 0, VACUUM took 1m20s (80s)
to complete, with vacuum_page_delay = 1 and vacuum_page_delay = 10,
both VACUUMs completed in 18m3s (1080 sec). A factor of 13 times!
This is for
Tom Lane wrote:
Attached is an extremely crude prototype patch for making VACUUM delay
by a configurable amount between pages, in hopes of throttling its disk
bandwidth consumption. By default, there is no delay (so no change in
behavior). In some quick testing, setting vacuum_page_delay to 10
I tried Tom Lane's patch on PostgreSQL 7.4-BETA-5 and it works
fantastically! A few short tests show a significant improvement in
responsiveness on my RedHat 9 Linux 2.4-20-8 (IDE 120GB 7200RPM UDMA5).
I didn't feel any noticeable delay when vacuum_page_delay is set to 5 ms or
10 ms.
Stephen wrote:
I tried Tom Lane's patch on PostgreSQL 7.4-BETA-5 and it works
fantastically! A few short tests show a significant improvement in
responsiveness on my RedHat 9 Linux 2.4-20-8 (IDE 120GB 7200RPM UDMA5).
I am currently looking at implementing ARC as a replacement
Tom Lane wrote:
Matthew T. O'Connor [EMAIL PROTECTED] writes:
Tom Lane wrote:
2. I only bothered to insert delays in the processing loops of plain
VACUUM and btree index cleanup. VACUUM FULL and cleanup of non-btree
indexes aren't done yet.
I thought we didn't want the delay in
Bruce Momjian [EMAIL PROTECTED] writes:
What is the advantage of delaying vacuum per page vs. just doing vacuum
less frequently?
The point is the amount of load VACUUM poses while it's running. If
your setup doesn't have a lot of disk bandwidth to spare, a background
VACUUM can hurt the
Bruce Momjian wrote:
Tom Lane wrote:
Matthew T. O'Connor [EMAIL PROTECTED] writes:
Tom Lane wrote:
2. I only bothered to insert delays in the processing loops of plain
VACUUM and btree index cleanup. VACUUM FULL and cleanup of non-btree
indexes aren't done yet.
I thought we didn't want
[EMAIL PROTECTED] (Bruce Momjian) writes:
Tom Lane wrote:
Best practice would likely be to leave the default vacuum_page_delay at
zero, and have the autovacuum daemon set a nonzero value for vacuums it
issues.
What is the advantage of delaying vacuum per page vs. just doing vacuum
less
Great! I haven't tried it yet, but I love the thought of it already :-)
I've been waiting for something like this for the past 2 years and now it's
going to make my multi-gigabyte PostgreSQL more usable and responsive. Will
the delay be tunable per VACUUM invocation? This is needed for different
Attached is an extremely crude prototype patch for making VACUUM delay
by a configurable amount between pages, in hopes of throttling its disk
bandwidth consumption. By default, there is no delay (so no change in
behavior). In some quick testing, setting vacuum_page_delay to 10
(milliseconds)
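The prototype boils down to a check like the following inside VACUUM's per-page loops (a sketch, not the actual patch; the groupsize knob anticipates the vacuum_page_groupsize variant mentioned elsewhere in the thread):

    #include <unistd.h>

    int     vacuum_page_delay = 0;          /* ms to sleep; 0 = old behaviour */
    int     vacuum_page_groupsize = 1;      /* pages processed between sleeps */

    /* Invented name; called once per heap or index page processed. */
    static void
    maybe_sleep_between_pages(int *pages_since_sleep)
    {
        if (vacuum_page_delay <= 0)
            return;

        if (++(*pages_since_sleep) >= vacuum_page_groupsize)
        {
            usleep(vacuum_page_delay * 1000L);
            *pages_since_sleep = 0;
        }
    }

At 10 ms per page, the roughly 45,000 8K pages of a 350 MB heap alone account for about 450 seconds of sleep, before the index passes, which is in line with the 80 s to ~1080 s slowdown reported above.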
Tom Lane wrote:
Attached is an extremely crude prototype patch for making VACUUM delay
by a configurable amount between pages,
Cool!
Assuming that this is found to be useful, the following issues would
have to be dealt with before the patch would be production quality:
2. I only bothered to
Matthew T. O'Connor [EMAIL PROTECTED] writes:
Tom Lane wrote:
2. I only bothered to insert delays in the processing loops of plain
VACUUM and btree index cleanup. VACUUM FULL and cleanup of non-btree
indexes aren't done yet.
I thought we didn't want the delay in vacuum full since it locks