Benedikt Grundmann wrote:
> For what it's worth, here are the numbers on one of our biggest
> databases (same system as I posted about separately wrt
> seq_scan_cost vs random_page_cost).
That would be an 88.4% hit rate on the summarized data.
-Kevin
For what it's worth, here are the numbers on one of our biggest databases
(same system as I posted about separately wrt seq_scan_cost vs
random_page_cost).
0053 1001
00BA 1009
0055 1001
00B9 1020
0054 983
00BB 1010
0056 1001
00BC 1019
0069 0
00BD 1009
006A 224
00BE 1018
006B 1009
00BF 1008
006C 1008
On Thu, Jan 5, 2012 at 6:15 AM, Florian Pflug wrote:
> On 64-bit machines at least, we could simply mmap() the stable parts of the
> CLOG into the backend address space, and access it without any locking at all.
True. I think this could be done, but it would take some fairly
careful thought and
On Thu, Jan 5, 2012 at 5:15 AM, Florian Pflug wrote:
> On Jan4, 2012, at 21:27 , Robert Haas wrote:
>> I think the first thing we need to look at is increasing the number of
>> CLOG buffers.
>
> What became of the idea to treat the stable (i.e. earlier than the oldest
> active xid) and the unstable (i.e. the rest) parts of the CLOG differently.
On Jan4, 2012, at 21:27 , Robert Haas wrote:
> I think the first thing we need to look at is increasing the number of
> CLOG buffers.
What became of the idea to treat the stable (i.e. earlier than the oldest
active xid) and the unstable (i.e. the rest) parts of the CLOG differently.
On 64-bit machines at least, we could simply mmap() the stable parts of the
CLOG into the backend address space, and access it without any locking at all.
On Wed, Jan 4, 2012 at 4:02 PM, Kevin Grittner
wrote:
> Robert Haas wrote:
>
>> 2. The CLOG code isn't designed to manage a large number of
>> buffers, so adding more might cause a performance regression on
>> small systems.
>>
>> On Nate Boley's 32-core system, running pgbench at scale factor
>> 100, the optimal number of buffers seems to be around 32.
On Jan 4, 2012, at 2:02 PM, Kevin Grittner wrote:
> Jim Nasby wrote:
>> Here's output from our largest OLTP system... not sure exactly how
>> to interpret it, so I'm just providing the raw data. This spans
>> almost exactly 1 month.
>
> Those numbers wind up meaning that 18% of the 256-byte blocks (1024
> transactions each) were all commits. Yikes.
Robert Haas wrote:
> 2. The CLOG code isn't designed to manage a large number of
> buffers, so adding more might cause a performance regression on
> small systems.
>
> On Nate Boley's 32-core system, running pgbench at scale factor
> 100, the optimal number of buffers seems to be around 32. I'
On Wed, Jan 4, 2012 at 3:02 PM, Kevin Grittner
wrote:
> Jim Nasby wrote:
>> Here's output from our largest OLTP system... not sure exactly how
>> to interpret it, so I'm just providing the raw data. This spans
>> almost exactly 1 month.
>
> Those numbers wind up meaning that 18% of the 256-byte blocks (1024
> transactions each) were all commits. Yikes.
Jim Nasby wrote:
> Here's output from our largest OLTP system... not sure exactly how
> to interpret it, so I'm just providing the raw data. This spans
> almost exactly 1 month.
Those numbers wind up meaning that 18% of the 256-byte blocks (1024
transactions each) were all commits. Yikes. Tha
On Dec 23, 2011, at 2:23 PM, Kevin Grittner wrote:
> Jeff Janes wrote:
>
>> Could we get some major OLTP users to post their CLOG for
>> analysis? I wouldn't think there would be much
>> security/proprietary issues with CLOG data.
>
> FWIW, I got the raw numbers to do my quick check using this
On Wed, Dec 28, 2011 at 8:45 AM, Greg Stark wrote:
> On Tue, Dec 27, 2011 at 10:43 PM, Merlin Moncure wrote:
>> I bet if you kept a judicious number of
>> clog pages in each local process with some smart invalidation you
>> could cover enough cases that scribbling the bits down would become
>> unnecessary.
On Tue, Dec 27, 2011 at 10:43 PM, Merlin Moncure wrote:
> I bet if you kept a judicious number of
> clog pages in each local process with some smart invalidation you
> could cover enough cases that scribbling the bits down would become
> unnecessary.
I don't understand how any cache can complete
On Tue, 2011-12-27 at 16:43 -0600, Merlin Moncure wrote:
> On Tue, Dec 27, 2011 at 1:24 PM, Jeff Davis wrote:
> > 3. Attack hint bits problem.
>
> A large number of problems would go away if the current hint bit
> system could be replaced with something that did not require writing
> to the tuple itself.
On Tue, Dec 27, 2011 at 1:24 PM, Jeff Davis wrote:
> 3. Attack hint bits problem.
A large number of problems would go away if the current hint bit
system could be replaced with something that did not require writing
to the tuple itself. FWIW, moving the bits around seems like a
non-starter -- yo
On Thu, 2011-12-22 at 03:50 -0600, Kevin Grittner wrote:
> Now, on to the separate-but-related topic of double-write. That
> absolutely requires some form of checksum or CRC to detect torn
> pages, in order for the technique to work at all. Adding a CRC
> without double-write would work fine if y
On Thu, Dec 22, 2011 at 9:58 PM, Simon Riggs wrote:
> On Thu, Dec 22, 2011 at 9:50 AM, Kevin Grittner
> wrote:
>
>> Simon, does it sound like I understand your proposal?
>
> Yes, thanks for restating.
I've implemented that proposal, posting patch on a separate thread.
--
Simon Riggs
Jeff Janes writes:
> I had a perhaps crazier idea. Aren't CLOG pages older than global xmin
> effectively read only? Could backends that need these bypass locking
> and shared memory altogether?
Hmm ... once they've been written out from the SLRU arena, yes. In fact
you don't need to go back as
Jeff Janes wrote:
> Could we get some major OLTP users to post their CLOG for
> analysis? I wouldn't think there would be much
> security/proprietary issues with CLOG data.
FWIW, I got the raw numbers to do my quick check using this Ruby
script (put together for me by Peter Brant). If it is o
Tom Lane wrote:
> Robert Haas writes:
>> An obvious problem is that, if the abort rate is significantly
>> different from zero, and especially if the aborts are randomly
>> mixed in with commits rather than clustered together in small
>> portions of the XID space, the CLOG rollup data would become useless.
On 12/23/11, Robert Haas wrote:
> On Fri, Dec 23, 2011 at 11:14 AM, Kevin Grittner
> wrote:
>> Thoughts?
>
> Those are good thoughts.
>
> Here's another random idea, which might be completely nuts. Maybe we
> could consider some kind of summarization of CLOG data, based on the
> idea that most transactions commit.
On Fri, Dec 23, 2011 at 12:42 PM, Tom Lane wrote:
> Robert Haas writes:
>> An obvious problem is that, if the abort rate is significantly
>> different from zero, and especially if the aborts are randomly mixed
>> in with commits rather than clustered together in small portions of
>> the XID space, the CLOG rollup data would become useless.
Robert Haas writes:
> An obvious problem is that, if the abort rate is significantly
> different from zero, and especially if the aborts are randomly mixed
> in with commits rather than clustered together in small portions of
> the XID space, the CLOG rollup data would become useless.
Yeah, I'm a
On Fri, Dec 23, 2011 at 11:14 AM, Kevin Grittner
wrote:
> Thoughts?
Those are good thoughts.
Here's another random idea, which might be completely nuts. Maybe we
could consider some kind of summarization of CLOG data, based on the
idea that most transactions commit. We introduce the idea of a
"Kevin Grittner" wrote:
>> I would suggest you examine how to have an array of N bgwriters,
>> then just slot the code for hinting into the bgwriter. That way a
>> bgwriter can set hints, calc CRC and write pages in sequence on a
>> particular block. The hinting needs to be synchronised with the
Simon Riggs wrote:
> It could work that way, but I seriously doubt that a technique
> only mentioned in dispatches one month before the last CF is
> likely to become trustable code within one month. We've been
> discussing CRCs for years, so assembling the puzzle seems much
> easier, when all th
On Thu, Dec 22, 2011 at 9:50 AM, Kevin Grittner
wrote:
> Simon, does it sound like I understand your proposal?
Yes, thanks for restating.
> Now, on to the separate-but-related topic of double-write. That
> absolutely requires some form of checksum or CRC to detect torn
> pages, in order for the technique to work at all.
On Thu, Dec 22, 2011 at 3:04 PM, Robert Haas wrote:
> On Thu, Dec 22, 2011 at 1:50 PM, Jignesh Shah wrote:
>> In the double write implementation, every checkpoint write is
>> double-written,
>
> Unless I'm quite thoroughly confused, which is possible, the double
> write will need to happen the first time a buffer is written following
> each checkpoint.
On Thu, Dec 22, 2011 at 1:50 PM, Jignesh Shah wrote:
> In the double write implementation, every checkpoint write is
> double-written,
Unless I'm quite thoroughly confused, which is possible, the double
write will need to happen the first time a buffer is written following
each checkpoint. Which
On Thu, Dec 22, 2011 at 11:16 AM, Kevin Grittner
wrote:
> Jignesh Shah wrote:
>
>> When we use Doublewrite with checksums, we can safely disable
>> full_page_write causing a HUGE reduction to the WAL traffic
>> without loss of reliability due to a write fault since there are
>> two writes always. (Implementation detail discussable).
Jignesh Shah wrote:
> When we use Doublewrite with checksums, we can safely disable
> full_page_write causing a HUGE reduction to the WAL traffic
> without loss of reliability due to a write fault since there are
> two writes always. (Implementation detail discussable).
The "always" there sur
On Thu, Dec 22, 2011 at 4:00 AM, Jesper Krogh wrote:
> On 2011-12-22 09:42, Florian Weimer wrote:
>>
>> * David Fetter:
>>
>>> The issue is that double writes needs a checksum to work by itself,
>>> and page checksums more broadly work better when there are double
>>> writes, obviating the need to have full_page_writes on.
Simon Riggs wrote:
> So overall, I do now think its still possible to add an optional
> checksum in the 9.2 release and am willing to pursue it unless
> there are technical objections.
Just to restate Simon's proposal, to make sure I'm understanding it,
we would support a new page header format
On 2011-12-22 09:42, Florian Weimer wrote:
> * David Fetter:
>
>> The issue is that double writes needs a checksum to work by itself,
>> and page checksums more broadly work better when there are double
>> writes, obviating the need to have full_page_writes on.
>
> How desirable is it to disable full_page_writes?
On Thu, Dec 22, 2011 at 8:42 AM, Florian Weimer wrote:
> * David Fetter:
>
>> The issue is that double writes needs a checksum to work by itself,
>> and page checksums more broadly work better when there are double
>> writes, obviating the need to have full_page_writes on.
>
> How desirable is it to disable full_page_writes?
On Thu, Dec 22, 2011 at 7:44 AM, Heikki Linnakangas
wrote:
> On 22.12.2011 01:43, Tom Lane wrote:
>>
>> A "utility to bump the page version" is equally a whole lot easier said
>> than done, given that the new version has more overhead space and thus
>> less payload space than the old. What does it do when the old page is
>> too full to be converted?
* David Fetter:
> The issue is that double writes needs a checksum to work by itself,
> and page checksums more broadly work better when there are double
> writes, obviating the need to have full_page_writes on.
How desirable is it to disable full_page_writes? Doesn't it cut down
recovery time s
On 22.12.2011 01:43, Tom Lane wrote:
A "utility to bump the page version" is equally a whole lot easier said
than done, given that the new version has more overhead space and thus
less payload space than the old. What does it do when the old page is
too full to be converted? "Move some data som
On Thu, Dec 22, 2011 at 12:06 AM, Simon Riggs wrote:
>> Having two different page formats running around in the system at the
>> same time is far from free; in the worst case it means that every single
>> piece of code that touches pages has to know about and be prepared to
>> cope with both versions.
On Wed, Dec 21, 2011 at 04:18:33PM -0800, Rob Wultsch wrote:
> On Wed, Dec 21, 2011 at 1:59 PM, David Fetter wrote:
> > One of the things VMware is working on is double writes, per
> > previous discussions of how, for example, InnoDB does things.
>
> The world is moving to flash, and the lifetime
On Wed, Dec 21, 2011 at 7:06 PM, Simon Riggs wrote:
> My feeling is it probably depends upon how different the formats are,
> so given we are discussing a 4 byte addition to the header, it might
> be doable.
I agree. When thinking back on Zoltan's patches, it's worth
remembering that he had a nu
On Wed, Dec 21, 2011 at 1:59 PM, David Fetter wrote:
> One of the things VMware is working on is double writes, per previous
> discussions of how, for example, InnoDB does things.
The world is moving to flash, and the lifetime of flash is measured
writes. Potentially doubling the number of writes
On Wed, Dec 21, 2011 at 11:43 PM, Tom Lane wrote:
> It seems like you've forgotten all of the previous discussion of how
> we'd manage a page format version change.
Maybe I've had too much caffeine. It's certainly late here.
> Having two different page formats running around in the system at the
> same time is far from free; in the worst case it means that every single
> piece of code that touches pages has to know about and be prepared to
> cope with both versions.
Simon Riggs writes:
> We don't need to use any flag bits at all. We add
> PG_PAGE_LAYOUT_VERSION to the control file, so that CRC checking
> becomes an initdb option. All new pages can be created with
> PG_PAGE_LAYOUT_VERSION from the control file. All existing pages must
> be either the layout version
David Fetter writes:
> There's a separate issue we'd like to get clear on, which is whether
> it would be OK to make a new PG_PAGE_LAYOUT_VERSION.
If you're not going to provide pg_upgrade support, I think there is no
chance of getting a new page layout accepted. The people who might want
CRC support
On Wed, Dec 21, 2011 at 10:19 PM, Kevin Grittner
wrote:
> Alvaro Herrera wrote:
>
>> If you get away with a new page format, let's make sure and
>> coordinate so that we can add more info into the header. One
>> thing I wanted was to have an ID struct on each file, so that you
>> know what DB/relation/segment the file corresponds to.
Alvaro Herrera wrote:
> If you get away with a new page format, let's make sure and
> coordinate so that we can add more info into the header. One
> thing I wanted was to have an ID struct on each file, so that you
> know what DB/relation/segment the file corresponds to. So the
> first page's
Excerpts from David Fetter's message of Wed Dec 21 18:59:13 -0300 2011:
> If not, we'll have to do some extra work on the patch as described
> below. Thanks to Kevin Grittner for coming up with this :)
>
> - Use a header bit to say whether we've got a checksum on the page.
> We're using 3/16