On Mon, 2005-02-21 at 18:45 -0500, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
...but do you agree with my comments on the lack of scalability in cache
miss situations?
No. Grabbing a lock during a cache miss is the least of your worries;
you're going to do I/O, or at least a kernel call, so it hardly matters
as long as you're not
My understanding from this is:
If we have a buffer cache hit ratio of 93%, then we should expect:
- 93% of buffer requests to require only shared BufMappingLocks
- 7% of buffer requests to require an exclusive BufFreelistLock and then
an exclusive BufMappingLock.
That seems like an improvement
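Simon's arithmetic can be sketched directly; a toy calculation (Python purely for illustration), assuming the simplified model above — each hit costs exactly one shared BufMappingLock, each miss exactly one exclusive BufFreelistLock plus one exclusive BufMappingLock:

```python
def expected_lock_mix(requests, hit_ratio):
    """Estimate lock traffic for a given buffer-cache hit ratio.

    Assumes the simplified model from the mail: a hit needs only a
    shared BufMappingLock; a miss needs an exclusive BufFreelistLock
    and then an exclusive BufMappingLock.
    """
    hits = requests * hit_ratio
    misses = requests - hits
    return {"shared_mapping": hits,
            "exclusive_freelist": misses,
            "exclusive_mapping": misses}

mix = expected_lock_mix(1000, 0.93)
print(mix)  # ~930 shared acquisitions, ~70 of each exclusive kind
```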
On Mon, 2005-02-21 at 18:01 -0500, Tom Lane wrote:
Simon Riggs [EMAIL PROTECTED] writes:
[I'm assuming that there are no system-wide locks held across I/Os, that
bit seems a bit unclear from the description]
That's always been true and still is, so I didn't dwell on it. Only a
per-buffer lock is held while doing either input or output.
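The pattern Tom describes can be sketched as a rough model (threading.Lock as a stand-in; the names are illustrative, not the actual PostgreSQL symbols): the table-wide lock is dropped before the read, and only the per-buffer lock spans the I/O.

```python
import threading

class Buffer:
    def __init__(self):
        self.io_lock = threading.Lock()  # per-buffer I/O lock
        self.valid = False
        self.page = None

mapping_lock = threading.Lock()   # stand-in for the buffer-table lock
buf_table = {}                    # page id -> Buffer

def read_page_from_disk(page_id):
    return b"page-%d" % page_id   # placeholder for the real kernel read

def read_buffer(page_id):
    # The system-wide lock is held only long enough to find or enter
    # the buffer in the table...
    with mapping_lock:
        buf = buf_table.setdefault(page_id, Buffer())
    # ...while the I/O itself happens under the per-buffer lock only,
    # so other backends can keep using the rest of the cache meanwhile.
    with buf.io_lock:
        if not buf.valid:
            buf.page = read_page_from_disk(page_id)
            buf.valid = True
    return buf.page

print(read_buffer(7))
```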
Simon Riggs [EMAIL PROTECTED] writes:
This design seems a clear improvement on the current one. I am still
convinced that the freelist structures should be subdivided into many
smaller pieces, thereby producing finer-grained locks (the earlier
bufferpools proposal).
As I already
Tom Lane wrote:
Jim C. Nasby [EMAIL PROTECTED] writes:
The advantage of using a counter instead of a simple active
bit is that buffers that are (or have been) used heavily will be able to
go through several sweeps of the clock before being freed. Infrequently
used buffers (such as those
Would there be any value in incrementing by 2 for index accesses and 1
for seq-scans/vacuums? Actually, it should probably be a ratio based on
random_page_cost, shouldn't it?
What happens with very small hot tables that are only a few pages and
thus have no index defined?
I think it would
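One way Jim's weighting might look; a hypothetical sketch only (the constants, cap, and names here are invented for illustration), scaling the usage-count bump by the random_page_cost/seq_page_cost ratio:

```python
# Hypothetical illustration of Jim's suggestion: bump the usage count
# more for random (index) access than for sequential access, using the
# planner cost ratio as the weight. Names and values are invented.
RANDOM_PAGE_COST = 4.0   # PostgreSQL's default random_page_cost
SEQ_PAGE_COST = 1.0
MAX_USAGE = 5            # cap so counts cannot grow without bound

def bump_usage(usage_count, access_kind):
    weight = 1 if access_kind in ("seqscan", "vacuum") else round(
        RANDOM_PAGE_COST / SEQ_PAGE_COST)
    return min(usage_count + weight, MAX_USAGE)

print(bump_usage(0, "index"))    # 4
print(bump_usage(4, "index"))    # capped at 5
print(bump_usage(0, "seqscan"))  # 1
```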
On Sun, Feb 13, 2005 at 06:56:47PM -0500, Tom Lane wrote:
Bruce Momjian pgman@candle.pha.pa.us writes:
Tom Lane wrote:
One thing I realized quickly is that there is no natural way in a clock
algorithm to discourage VACUUM from blowing out the cache. I came up
with a slightly ugly idea that's described below. Can anyone do better?
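The effect Tom describes is easy to reproduce in a toy model (an illustration of the general problem, not PostgreSQL's actual replacement code): with a one-bit clock, a single VACUUM-style pass that touches every page once is enough to push the whole hot working set out of the cache.

```python
def clock_resident(cache_size, accesses):
    """One-bit clock sweep: a hit sets the reference bit; the hand
    clears set bits until it finds an unset one, and evicts that slot."""
    pages, bits, hand = [], [], 0
    for page in accesses:
        if page in pages:
            bits[pages.index(page)] = 1
            continue
        if len(pages) < cache_size:
            pages.append(page)
            bits.append(1)
            continue
        while bits[hand]:                # sweep until an unreferenced slot
            bits[hand] = 0
            hand = (hand + 1) % cache_size
        pages[hand], bits[hand] = page, 1
        hand = (hand + 1) % cache_size
    return set(pages)

hot = [1, 2, 3, 4] * 25                  # small, heavily reused working set
vacuum = list(range(100, 200))           # big sequential pass, one touch each
resident = clock_resident(8, hot + vacuum)
print(resident & {1, 2, 3, 4})           # -> set(): the scan evicted them all
```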
On Wed, Feb 16, 2005 at 12:33:38PM -0500, Tom Lane wrote:
Jim C. Nasby [EMAIL PROTECTED] writes:
The advantage of using a counter instead of a simple active
bit is that buffers that are (or have been) used heavily will be able to
go through several sweeps of the clock before being freed.
On Wed, Feb 16, 2005 at 11:42:11AM -0600, Kenneth Marshall wrote:
I have seen this algorithm described as a more generalized clock-type
algorithm. As the size of the counter increases, up to the number of
buffers, the clock algorithm becomes LRU. One bit is the lightest-weight
approximation.
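The one-bit versus counter distinction can also be seen in a toy model (again only an illustration, not the real buffer manager): with a usage counter, heavily used buffers survive a sweep that a one-bit clock would lose them to.

```python
def gclock_resident(cache_size, accesses, max_count):
    """Generalized clock: hits raise a usage count (capped at max_count);
    the hand decrements counts and evicts at the first zero it finds."""
    pages, counts, hand = [], [], 0
    for page in accesses:
        if page in pages:
            i = pages.index(page)
            counts[i] = min(counts[i] + 1, max_count)
            continue
        if len(pages) < cache_size:
            pages.append(page)
            counts.append(1)
            continue
        while counts[hand] > 0:
            counts[hand] -= 1
            hand = (hand + 1) % cache_size
        pages[hand], counts[hand] = page, 1
        hand = (hand + 1) % cache_size
    return set(pages)

hot = [1, 2, 3, 4] * 10                 # buffers hit many times
cold = list(range(10, 20))              # buffers touched once each
one_bit = gclock_resident(8, hot + cold, max_count=1)
counter = gclock_resident(8, hot + cold, max_count=4)
print({1, 2, 3, 4} <= one_bit)          # False: one bit loses the hot set
print({1, 2, 3, 4} <= counter)          # True: counts outlast the sweep
```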
Tom Lane [EMAIL PROTECTED] writes:
and changing the buf_table hash table. The only common operation that
needs exclusive lock is reading in a page that was not in shared buffers
already, which will require at least a kernel call and usually a wait
for I/O,
I'm working on an experimental patch to break up the BufMgrLock along
the lines we discussed a few days ago --- in particular, using a clock
sweep algorithm instead of LRU lists for the buffer replacement strategy.
I started by writing up some design notes, which are attached for
review in case
Bruce Momjian pgman@candle.pha.pa.us writes:
Tom Lane wrote:
One thing I realized quickly is that there is no natural way in a clock
algorithm to discourage VACUUM from blowing out the cache. I came up
with a slightly ugly idea that's described below. Can anyone do better?
Uh, is the clock