On 9/21/07, Merlin Moncure <[EMAIL PROTECTED]> wrote:
> Well, my first round of results are so far not showing the big gains I
> saw with hot in some of the earlier patches...so far, it looks
> approximately to be a wash although with the reduced need to vacuum.
> i'll test some more when things settle down.
let me correct myself here. I did
Robert Treat <[EMAIL PROTECTED]> writes:
> Just curious, but does this apply to subtransactions that are the result of
> plpgsql try...exception blocks?
Only if they changed the database; else they won't have XIDs.
regards, tom lane
On Friday 21 September 2007 13:02, Tom Lane wrote:
> I wrote:
> > Dunno about "more general", but your idea reduces the runtime of this
> > example by about 50% (22.2s to 10.5s) for me. I'm worried though that
> > it would be a net negative in more typical situations, especially if
> > you've got a lot of open subtransactions.
"Heikki Linnakangas" <[EMAIL PROTECTED]> writes:
> I noted that most callers of TransactionIdIsInProgress in tqual.c
> already call TransactionIdIsCurrentTransactionId before
> TransactionIdIsInProgress. In those cases we could just skip the test
> for our own xids altogether, if it's worth code ma
"Heikki Linnakangas" <[EMAIL PROTECTED]> writes:
> We've already checked that the xmin is our own transaction id, so we
> check if the xmax is an aborted subtransaction of our own transaction. A
> TransactionIdDidAbort call seems like an awfully expensive way to check
> that. We could call Transact
On 9/21/07, Tom Lane <[EMAIL PROTECTED]> wrote:
>
>
>
> I'm also starting to come around to liking the page-header-xid field
> a bit more. I suggest that it could replace the "page is prunable"
> flag bit altogether --- to mark the page prunable, you must store
> some appropriate xid into the header
"Heikki Linnakangas" <[EMAIL PROTECTED]> writes:
> Yeah. I played with this a bit more, and came up with a couple of other
> micro-optimizations:
> 1. Instead of pallocing and pfreeing a new array in
> TransactionIdIsInProgress, we could just malloc the array once and reuse
> it. That palloc/pfree
Tom Lane wrote:
> Actually ... the only way that TransactionIdIsCurrentTransactionId can
> take a meaningful amount of time is if you've got lots of
> subtransactions, and in that case your own subxids cache has certainly
> overflowed, which is likely to force TransactionIdIsInProgress into the
> "
Merlin Moncure wrote:
> Well, my first round of results are so far not showing the big gains I
> saw with hot in some of the earlier patches...so far, it looks
> approximately to be a wash although with the reduced need to vacuum.
> i'll test some more when things settle down.
Oh... Which version
On 9/21/07, Heikki Linnakangas <[EMAIL PROTECTED]> wrote:
> Merlin Moncure wrote:
> > pre hot:
> > run 1: 3617.641 ms
> > run 2: 5195.215 ms
> > run 3: 6760.449 ms
> > after vacuum:
> > run 1: 4171.362 ms
> > run 2: 5513.317 ms
> > run 3: 6884.125 ms
> > post hot:
> > run 1: Time: 7286.292 ms
> > r
I wrote:
> Dunno about "more general", but your idea reduces the runtime of this
> example by about 50% (22.2s to 10.5s) for me. I'm worried though that
> it would be a net negative in more typical situations, especially if
> you've got a lot of open subtransactions.
Actually ... the only way that TransactionIdIsCurrentTransactionId can
take a meaningful amount of time is if you've got lots of
subtransactions, and in that case your own subxids cache has certainly
overflowed, which is likely to force TransactionIdIsInProgress into the
Tom Lane wrote:
> "Heikki Linnakangas" <[EMAIL PROTECTED]> writes:
>> If you look at the callgraph, you'll see that those
>> LWLockAcquire/Release calls are coming from HeapTupleSatisfiesVacuum ->
>> TransactionIdIsInProgress, which keeps trashing the ProcArrayLock. A
>> "if(TransactionIdIsCurrentT
"Heikki Linnakangas" <[EMAIL PROTECTED]> writes:
> If you look at the callgraph, you'll see that those
> LWLockAcquire/Release calls are coming from HeapTupleSatisfiesVacuum ->
> TransactionIdIsInProgress, which keeps trashing the ProcArrayLock. A
> "if(TransactionIdIsCurrentTransactionId(xid)) ret
Tom Lane wrote:
> I don't much like the idea of adding an xid to the page header --- for
> one thing, *which* xid would you put there, and what would you test it
> against?
I was thinking that you would put the smallest in-progress xmax on the
page there, and you would test it against OldestXmin.
"Pavan Deolasee" <[EMAIL PROTECTED]> writes:
> On 9/21/07, Tom Lane <[EMAIL PROTECTED]> wrote:
>>
>> so this example is getting past the heuristic tests in
>> heap_page_prune_opt almost every time. Why is that? Too tired to poke
>> at it more tonight.
>>
> I guess you already know the answer now, but anyways: Since we are
> updating a single tuple
Tom Lane wrote:
> "Heikki Linnakangas" <[EMAIL PROTECTED]> writes:
>> Bruce Momjian wrote:
>>> This might be a simplistic question but if the page is +90% full and
>>> there is a long-lived transaction, isn't Postgres going to try pruning
>>> on each page read access?
>
>> Yes :(
>
> It shouldn't, though --- the hint bit should get cleared on the first
> try.
On 9/21/07, Tom Lane <[EMAIL PROTECTED]> wrote:
>
>
> It shouldn't, though --- the hint bit should get cleared on the first
> try. I think I probably broke something in the last round of revisions
> to heap_page_prune_opt, but haven't looked yet ...
We set the hint bit (prunable) again when we
"Heikki Linnakangas" <[EMAIL PROTECTED]> writes:
> Bruce Momjian wrote:
>> This might be a simplistic question but if the page is +90% full and
>> there is a long-lived transaction, isn't Postgres going to try pruning
>> on each page read access?
> Yes :(
It shouldn't, though --- the hint bit should get cleared on the first
try. I think I probably broke something in the last round of revisions
to heap_page_prune_opt, but haven't looked yet ...
On 9/21/07, Tom Lane <[EMAIL PROTECTED]> wrote:
>
>
> so this example is getting past the heuristic tests in
> heap_page_prune_opt almost every time. Why is that? Too tired to poke
> at it more tonight.
>
>
I guess you already know the answer now, but anyways: Since we are
updating a single tuple
Merlin Moncure wrote:
> On 9/20/07, Heikki Linnakangas <[EMAIL PROTECTED]> wrote:
>> Yeah. I'm doing some micro-benchmarking, and the attached test case is
>> much slower with HOT. It's spending a lot of time trying to prune, only
>> to find out that it can't.
>>
>> Instead of/in addition to avoidi
Tom Lane wrote:
> "Heikki Linnakangas" <[EMAIL PROTECTED]> writes:
>> Tom Lane wrote:
>>> I'd still like to think about whether we
>>> can be smarter about when to invoke pruning, but that's a small enough
>>> issue that the patch can go in without it.
>
>> Yeah. I'm doing some micro-benchmarking, and the attached test case is
>> much slower with HOT.
Bruce Momjian wrote:
> This might be a simplistic question but if the page is +90% full and
> there is a long-lived transaction, isn't Postgres going to try pruning
> on each page read access?
Yes :(. That's why we earlier talked about storing the xid of the oldest
deleted tuple on the page in the page header.
On 9/21/07, Bruce Momjian <[EMAIL PROTECTED]> wrote:
>
>
> This might be a simplistic question but if the page is +90% full and
> there is a long-lived transaction, isn't Postgres going to try pruning
> on each page read access?
>
>
The way it stands today, yes. That's one reason why we are seeing
t
Heikki Linnakangas wrote:
> Tom Lane wrote:
> > I've committed the HOT patch.
>
> Thanks, much easier to work with it now that it's in.
>
> > I'd still like to think about whether we
> > can be smarter about when to invoke pruning, but that's a small enough
> > issue that the patch can go in wit
On 9/21/07, Tom Lane <[EMAIL PROTECTED]> wrote:
>
>
>
> but control never gets that far because neither xmin nor xmax is
> committed yet. We could fix that, probably, by considering the
> xmin=xmax case in the xmin-in-progress case further up; but the
> HEAP_UPDATED exclusion is still a problem.
"Pavan Deolasee" <[EMAIL PROTECTED]> writes:
> On 9/21/07, Tom Lane <[EMAIL PROTECTED]> wrote:
>> Shouldn't we be able to prune rows that have been inserted and deleted
>> by the same transaction? I'd have hoped to see this example use only
>> one heap page ...
>>
> Not before the transaction commits?
On 9/21/07, Tom Lane <[EMAIL PROTECTED]> wrote:
>
>
> Shouldn't we be able to prune rows that have been inserted and deleted
> by the same transaction? I'd have hoped to see this example use only
> one heap page ...
>
>
Not before the transaction commits? In the test, we update a single tuple
100
I wrote:
> ... so basically it's all about the locking. Maybe the problem is that with
> HOT we lock the buffer too often? heap_page_prune_opt is designed to
> not take the buffer lock unless there's a good probability of needing
> to prune, but maybe that's not working as intended.
Indeed it se
"Heikki Linnakangas" <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> I'd still like to think about whether we
>> can be smarter about when to invoke pruning, but that's a small enough
>> issue that the patch can go in without it.
> Yeah. I'm doing some micro-benchmarking, and the attached test case is
> much slower with HOT.
On 9/20/07, Heikki Linnakangas <[EMAIL PROTECTED]> wrote:
> Tom Lane wrote:
> > I've committed the HOT patch.
>
> Thanks, much easier to work with it now that it's in.
>
> > I'd still like to think about whether we
> > can be smarter about when to invoke pruning, but that's a small enough
> > issu
Tom Lane wrote:
> I've committed the HOT patch.
Thanks, much easier to work with it now that it's in.
> I'd still like to think about whether we
> can be smarter about when to invoke pruning, but that's a small enough
> issue that the patch can go in without it.
Yeah. I'm doing some micro-benchmarking, and the attached test case is
much slower with HOT. It's spending a lot of time trying to prune, only
to find out that it can't.
I've committed the HOT patch. I'd still like to think about whether we
can be smarter about when to invoke pruning, but that's a small enough
issue that the patch can go in without it.
regards, tom lane