2011/3/24 Jim Nasby j...@nasby.net:
> On Mar 22, 2011, at 11:46 AM, Cédric Villemain wrote:
>> 2011/3/22 Greg Stark gsst...@mit.edu:
>>> On Mon, Mar 21, 2011 at 6:08 PM, Jim Nasby j...@nasby.net wrote:
>>>> Has anyone looked at the overhead of measuring how long IO requests to
>>>> the kernel take? If we did …
On Thursday, March 24, 2011 06:32:10 PM Jim Nasby wrote:
> Is there an equivalent in other OSes?

Some have mincore(), which can be used for that in combination with mmap().

Andres
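Andres's mmap + mincore combination can be sketched as follows. This is a hypothetical illustration, not anything from the PostgreSQL tree: it maps a file and asks mincore(2) which of its pages are resident in the OS page cache. It assumes Linux (libc as `libc.so.6`, low bit of each result byte meaning "resident"); the `cached_pages` name is made up.

```python
# Sketch (not PostgreSQL code): ask the kernel which pages of a file are
# resident in its page cache, via mmap + mincore(2) through ctypes.
# Assumes Linux; on other platforms the libc name and semantics differ.
import ctypes
import mmap
import os

def cached_pages(path):
    """Return one bool per page of `path`: True if resident in the OS cache."""
    pagesize = os.sysconf("SC_PAGESIZE")
    size = os.path.getsize(path)
    npages = (size + pagesize - 1) // pagesize
    if npages == 0:
        return []
    with open(path, "rb") as f:
        # MAP_PRIVATE + writable protection: copy-on-write, so the file is
        # never modified, but ctypes can export the buffer to take its address.
        mm = mmap.mmap(f.fileno(), size, flags=mmap.MAP_PRIVATE,
                       prot=mmap.PROT_READ | mmap.PROT_WRITE)
    vec = (ctypes.c_ubyte * npages)()       # mincore fills one byte per page
    buf = ctypes.c_char.from_buffer(mm)     # borrow the mapping's address
    addr = ctypes.addressof(buf)
    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    rc = libc.mincore(ctypes.c_void_p(addr), ctypes.c_size_t(size), vec)
    del buf                                 # release export before closing map
    mm.close()
    if rc != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))
    return [bool(b & 1) for b in vec]       # low bit = page is resident
```

Note this only answers "is the block in the OS cache right now"; it says nothing about shared_buffers, and calling it per-read would itself have the kind of overhead this subthread is worried about.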
On Mar 16, 2011, at 7:44 PM, Robert Haas wrote:
> It would be really nice (for this and for other things) if we had some
> way of measuring the I/O saturation of the system, so that we could
> automatically adjust the aggressiveness of background processes
> accordingly.

Has anyone looked at the …
On Mon, Mar 21, 2011 at 6:08 PM, Jim Nasby j...@nasby.net wrote:
> Has anyone looked at the overhead of measuring how long IO requests to the
> kernel take? If we did that not only could we get an idea of what our IO
> workload looked like, we could also figure out whether a block came out of …
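The overhead question can be explored with a rough sketch: time each read with a monotonic clock, and separately estimate what the clock calls themselves cost. The function names below are hypothetical illustrations, not proposed PostgreSQL APIs.

```python
# Sketch: wrap each read in a pair of monotonic-clock samples and record the
# latency, then estimate the cost of the timing calls themselves.
import os
import time

def timed_pread(fd, nbytes, offset, latencies_ns):
    """Read like os.pread, appending the elapsed nanoseconds to latencies_ns."""
    t0 = time.perf_counter_ns()
    data = os.pread(fd, nbytes, offset)
    latencies_ns.append(time.perf_counter_ns() - t0)
    return data

def timer_overhead_ns(samples=10000):
    """Median cost of one start/stop timing pair, in nanoseconds."""
    costs = []
    for _ in range(samples):
        t0 = time.perf_counter_ns()
        t1 = time.perf_counter_ns()
        costs.append(t1 - t0)
    return sorted(costs)[samples // 2]
```

On systems where the monotonic clock is a cheap vDSO call it typically costs tens of nanoseconds, small next to even a cached block read; the thread's concern is platforms where reading the clock traps into the kernel.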
Robert Haas wrote:
> Right. Really-lazy vacuum could freeze tuples. Unlike regular
> vacuum, it can also sensibly be done incrementally. One thing I was
> thinking about is counting the number of times that we fetched a tuple
> that was older than RecentGlobalXmin and had a committed xmin and an …
On Thu, Mar 17, 2011 at 4:17 AM, Jesper Krogh jes...@krogh.cc wrote:
> Is it obvious that the visibility map bits should track complete
> pages and not individual tuples? If the visibility map tracks at
> page level, the benefit would fall on slim tables where you squeeze
> 200 tuples into each page …
On Thu, Mar 17, 2011 at 4:02 PM, Jesper Krogh jes...@krogh.cc wrote:
> On the 1-bit-per-page scheme the best case would be 341 times better than
> above, reducing the size of the visibility map on a 10GB table to around
> 152KB, which is extremely small (and thus also awesome). But the
> consequences of a …
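The 152KB figure is easy to reproduce as a back-of-envelope calculation, assuming PostgreSQL's default 8KB block size and reading "10GB" as decimal gigabytes:

```python
# Back-of-envelope for the 1-bit-per-page visibility map size quoted above.
# Assumes the default 8KB block size and a decimal "10GB" table.
BLCKSZ = 8192
table_bytes = 10 * 1000**3            # 10GB, decimal
pages = -(-table_bytes // BLCKSZ)     # ceiling division: 1,220,704 pages
vm_bytes = -(-pages // 8)             # one bit per heap page: 152,588 bytes
print(pages, vm_bytes)                # ~152.6 kB, matching "around 152KB"
```

The "341 times better" ratio depends on the per-tuple scheme Jesper compared against, which is truncated out of this preview, so it is not reconstructed here.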
On Mar 14, 2011, at 2:36 PM, Robert Haas wrote:
> I'm not quite sure how we'd decide whether to do a really lazy
> vacuum or the kind we do now. The case where this approach wins big
> is when there are few or no dead tuples. In that case, we do a lot of
> work looking at the indexes and we don't …
On Wed, Mar 16, 2011 at 6:36 PM, Jim Nasby j...@nasby.net wrote:
> One way to look at this is that any system will have a limit on how quickly
> it can vacuum everything. If it's having trouble dedicating enough IO to
> vacuum, then autovac is going to have a long list of tables that it wants to …
For historical reasons, what we now think of as VACUUM is referred to
in some portions of the code as lazy vacuum, to distinguish it from
pre-9.0 VACUUM FULL. As I understand it, VACUUM works like this:

- Scan the relation, accumulating a list of tuples to kill.
- When you get to the end of the …
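The two-pass shape being described can be sketched with a toy in-memory model (hypothetical structures, not the real code): collect the TIDs of dead tuples from the heap, strip the matching index entries, then revisit the heap to reclaim the slots.

```python
# Toy model of lazy VACUUM's two heap passes. Heap: list of pages, each a
# list of slots holding {"data": ..., "dead": bool} or None (reclaimed).
# Index: dict mapping key -> (page_no, slot_no). Hypothetical, illustration only.

def lazy_vacuum(heap, indexes):
    # Pass 1: scan the heap, accumulating the TIDs of dead tuples.
    dead_tids = {(p, s)
                 for p, page in enumerate(heap)
                 for s, tup in enumerate(page)
                 if tup is not None and tup["dead"]}
    # Scan each index, removing entries that point at dead tuples;
    # this must happen before the slots are reclaimed.
    for index in indexes:
        for key in [k for k, tid in index.items() if tid in dead_tids]:
            del index[key]
    # Pass 2: revisit the heap pages and mark the dead slots reusable.
    for p, s in dead_tids:
        heap[p][s] = None
    return len(dead_tids)
```

The model makes the thread's complaint visible: even when `dead_tids` is empty, the heap was still scanned, and a real implementation also walks every index, which is exactly the cost a really-lazy variant would skip.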
On Mon, Mar 14, 2011 at 8:33 PM, Robert Haas robertmh...@gmail.com wrote:
> I'm not sure about that either, although I'm not sure of the reverse
> either. But before I invest any time in it, do you have any other
> good ideas for addressing the "it stinks to scan the entire index
> every time we …
Robert Haas robertmh...@gmail.com writes:
> On Mon, Mar 14, 2011 at 4:18 PM, Tom Lane t...@sss.pgh.pa.us wrote:
>> Um, if there are *no* dead tuples then we don't look at the indexes
>> anyway ...
> But you do still have to scan the heap twice.

Seems like that should be fixable ... is the second pass …