Re: [PATCHES] AllocSetReset improvement
Patch applied.  Thanks.  (The first "if == NULL" test was already in CVS.)

---------------------------------------------------------------------------

a_ogawa wrote:
> Tom Lane <[EMAIL PROTECTED]> writes:
> > a_ogawa <[EMAIL PROTECTED]> writes:
> > > It is a reasonable idea.  However, the majority of the MemSet calls
> > > could not be avoided by this idea, because the per-tuple contexts
> > > are used at an early stage of the executor.
> >
> > Drat.  Well, what about changing that?  We could introduce additional
> > contexts or change the startup behavior so that the ones that are
> > frequently reset don't have any data in them unless you are working
> > with pass-by-ref values inside the inner loop.
>
> That might be possible.  However, I think that we should change only
> aset.c for this.  I thought further: we can check whether the context
> has been used since the last reset even when the blocks list is not
> empty.  Please see the attached patch.
>
> The effect of the patch that I measured is as follows:
>
> o Execution time for running the SQL ten times:
>
>   (1) Linux (CPU: Pentium III, compiler option: -O2)
>       - original: 24.960s
>       - patched : 23.114s
>   (2) Linux (CPU: Pentium 4, compiler option: -O2)
>       - original:  8.730s
>       - patched :  7.962s
>   (3) Solaris (CPU: Ultra SPARC III, compiler option: -O2)
>       - original: 37.0s
>       - patched : 33.7s
>
> regards,
>
> --- Atsushi Ogawa

[ Attachment, skipping... ]

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  + If your life is a hard drive,      |  13 Roberts Road
  + Christ can be your backup.         |  Newtown Square, Pennsylvania 19073

---------------------------(end of broadcast)---------------------------
TIP 2: you can get off all lists at once with the unregister command
    (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
Re: [PATCHES] AllocSetReset improvement
Tom Lane wrote:
> Bruce Momjian <pgman@candle.pha.pa.us> writes:
> > Patch applied.  Thanks.  (The first "if == NULL" test was already in CVS.)
>
> The first "if == NULL" test was the only part I wanted to apply ...
> I do not think this patch is a performance win in general.

Attached is the part I backed out of CVS.

--
  Bruce Momjian                        |  http://candle.pha.pa.us

Index: aset.c
===================================================================
RCS file: /cvsroot/pgsql/src/backend/utils/mmgr/aset.c,v
retrieving revision 1.60
retrieving revision 1.61
diff -c -r1.60 -r1.61
*** aset.c	14 May 2005 20:29:13 -0000	1.60
--- aset.c	4 Jun 2005 20:14:12 -0000	1.61
***************
*** 399,404 ****
--- 399,415 ----
  	if (block == NULL)
  		return;
  
+ 	/*
+ 	 * When the blocks list has only the keeper block and the freeptr of
+ 	 * that block is at its initial value, the context has not been used
+ 	 * since the last reset.
+ 	 */
+ 	if (block == set->keeper && block->next == NULL)
+ 	{
+ 		char	   *datastart = ((char *) block) + ALLOC_BLOCKHDRSZ;
+ 
+ 		if (block->freeptr == datastart)
+ 			return;
+ 	}
+ 
  	/* Clear chunk freelists */
  	MemSetAligned(set->freelist, 0, sizeof(set->freelist));
Re: [PATCHES] AllocSetReset improvement
Tom Lane wrote:
> Bruce Momjian <pgman@candle.pha.pa.us> writes:
> > Patch applied.  Thanks.  (The first "if == NULL" test was already in CVS.)
>
> The first "if == NULL" test was the only part I wanted to apply ...
> I do not think this patch is a performance win in general.

OK, patch reverted.  a_ogawa, would you run tests with just the part I
reverted, to see if it is a win?  Thanks.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
Re: [PATCHES] AllocSetReset improvement
Your patch has been added to the PostgreSQL unapplied patches list at:

	http://momjian.postgresql.org/cgi-bin/pgpatches

It will be applied as soon as one of the PostgreSQL committers reviews
and approves it.

---------------------------------------------------------------------------

a_ogawa wrote:
> Tom Lane <[EMAIL PROTECTED]> writes:
> > a_ogawa <[EMAIL PROTECTED]> writes:
> > > It is a reasonable idea.  However, the majority of the MemSet calls
> > > could not be avoided by this idea, because the per-tuple contexts
> > > are used at an early stage of the executor.
> >
> > Drat.  Well, what about changing that?  We could introduce additional
> > contexts or change the startup behavior so that the ones that are
> > frequently reset don't have any data in them unless you are working
> > with pass-by-ref values inside the inner loop.
>
> That might be possible.  However, I think that we should change only
> aset.c for this.  I thought further: we can check whether the context
> has been used since the last reset even when the blocks list is not
> empty.  Please see the attached patch.
>
> The effect of the patch that I measured is as follows:
>
> o Execution time for running the SQL ten times:
>
>   (1) Linux (CPU: Pentium III, compiler option: -O2)
>       - original: 24.960s
>       - patched : 23.114s
>   (2) Linux (CPU: Pentium 4, compiler option: -O2)
>       - original:  8.730s
>       - patched :  7.962s
>   (3) Solaris (CPU: Ultra SPARC III, compiler option: -O2)
>       - original: 37.0s
>       - patched : 33.7s
>
> regards,
>
> --- Atsushi Ogawa

[ Attachment, skipping... ]

--
  Bruce Momjian                        |  http://candle.pha.pa.us
Re: [PATCHES] AllocSetReset improvement
On Thu, 2005-05-12 at 11:26 -0400, Tom Lane wrote:
> I have another idea though: in the case you are looking at, I think
> that the context in question never gets any allocations at all, which
> means its blocks list stays null.  We could move the MemSet inside the
> "if (blocks)" test --- if there are no blocks allocated to the context,
> it surely hasn't got any chunks either, so the MemSet is unnecessary.

Good point.  There's the same MemSet in AllocSetDelete() too.

	Karel

--
 Karel Zak <[EMAIL PROTECTED]>
Re: [PATCHES] AllocSetReset improvement
Tom Lane <[EMAIL PROTECTED]> writes:
> a_ogawa <[EMAIL PROTECTED]> writes:
> > It is a reasonable idea.  However, the majority of the MemSet calls
> > could not be avoided by this idea, because the per-tuple contexts are
> > used at an early stage of the executor.
>
> Drat.  Well, what about changing that?  We could introduce additional
> contexts or change the startup behavior so that the ones that are
> frequently reset don't have any data in them unless you are working
> with pass-by-ref values inside the inner loop.

That might be possible.  However, I think that we should change only
aset.c for this.  I thought further: we can check whether the context
has been used since the last reset even when the blocks list is not
empty.  Please see the attached patch.

The effect of the patch that I measured is as follows:

o Execution time for running the SQL ten times:

  (1) Linux (CPU: Pentium III, compiler option: -O2)
      - original: 24.960s
      - patched : 23.114s
  (2) Linux (CPU: Pentium 4, compiler option: -O2)
      - original:  8.730s
      - patched :  7.962s
  (3) Solaris (CPU: Ultra SPARC III, compiler option: -O2)
      - original: 37.0s
      - patched : 33.7s

regards,

--- Atsushi Ogawa

[ Attachment: aset.c.patch (binary data) ]
Re: [PATCHES] AllocSetReset improvement
Tom Lane <[EMAIL PROTECTED]> writes:
> And I'm worried about adding even a small amount of overhead to
> palloc/pfree --- on the vast majority of the profiles I look at, those
> are more expensive than AllocSetReset.

I don't worry about palloc, because the overhead increases only when
malloc is executed in AllocSetAlloc.  But I was worried about pfree,
too.  However, when palloc/pfree were executed many times, I did not
see a bad influence.

> In most of the tests I've looked at, palloc/pfree are executed far more
> often than AllocSetReset, and so adding even one instruction there to
> sometimes save a little work in AllocSetReset is a bad tradeoff.  You
> can't optimize to make just one test case look good.

I agree.  I give up adding instructions to palloc/pfree.

> I have another idea though: in the case you are looking at, I think
> that the context in question never gets any allocations at all, which
> means its blocks list stays null.  We could move the MemSet inside the
> "if (blocks)" test --- if there are no blocks allocated to the context,
> it surely hasn't got any chunks either, so the MemSet is unnecessary.
> That should give us most of the speedup without any extra cost in
> palloc/pfree.

It is a reasonable idea.  However, the majority of the MemSet calls
could not be avoided by this idea, because the per-tuple contexts are
used at an early stage of the executor:

  function that calls MemoryContextReset    number    context     set->blocks
                                            of calls  address     is null
  ----------------------------------------------------------------------------
  execTuplesMatch(execGrouping.c:65)            45    0x836dd28   false
  agg_fill_hash_table(nodeAgg.c:924)            50    0x836dd28   false
  ExecHashJoin(nodeHashjoin.c:108)              51    0x836dec0   false
  ExecHashJoin(nodeHashjoin.c:217)              50    0x836dec0   false
  ExecHashGetHashValue(nodeHash.c:669)          55    0x836dec0   false
  ExecScanHashBucket(nodeHash.c:785)            50    0x836dec0   false
  ExecScan(execScan.c:86)                       57    0x836df48   true

I am considering another idea: I think that we can change the behavior
of a context by switching its method table.  A simple version of
AllocSetAlloc and AllocSetReset is made.  These APIs can be faster
because they use neither a freelist nor the blocks list (apart from
the keeper block).  When the context is initialized or reset, the new
APIs are installed in the method table.  When a freelist or a new
block is needed, the method table is switched back to the normal APIs.
This can be done by hooking pfree/repalloc, so the overhead added to
pfree is paid only once per context.

I think that this idea is effective for a context that is repeatedly
reset after small allocations, such as a per-tuple context.  And I
think the overhead imposed on a context that executes a lot of
palloc/pfree is very small.

The attached patch is a draft of that implementation.  Tests and
comments in the source code are still insufficient.

regards,

--- Atsushi Ogawa

[ Attachment: aset.c.patch (binary data) ]
Re: [PATCHES] AllocSetReset improvement
a_ogawa <[EMAIL PROTECTED]> writes:
> It is a reasonable idea.  However, the majority of the MemSet calls
> could not be avoided by this idea, because the per-tuple contexts are
> used at an early stage of the executor.

Drat.  Well, what about changing that?  We could introduce additional
contexts or change the startup behavior so that the ones that are
frequently reset don't have any data in them unless you are working
with pass-by-ref values inside the inner loop.

> I am considering another idea: I think that we can change the behavior
> of a context by switching its method table.

That's a possible solution but it feels a bit klugy somehow.  I can't
quite articulate what is bothering me ... will think more.

			regards, tom lane
Re: [PATCHES] AllocSetReset improvement
Tom Lane <[EMAIL PROTECTED]> writes:
> a_ogawa <[EMAIL PROTECTED]> writes:
> > In SQL that executes aggregation, AllocSetReset is called many times
> > and spends a lot of cycles.  This patch saves the cycles spent by
> > AllocSetReset.
>
> Hmm.  It doesn't seem like this could be a big win overall.  It's not
> possible to save a whole lot of cycles inside AllocSetReset, because
> if there isn't anything for it to do, it should fall through pretty
> quickly anyway.

I thought that I was able to avoid the MemSet in AllocSetReset:

    MemSet(set->freelist, 0, sizeof(set->freelist));

My profile result in the previous mail is as follows:

      %   cumulative     self               self    total
    time    seconds    seconds      calls  s/call   s/call  name
    9.20       3.06       3.06  38500155     0.00     0.00  AllocSetReset

Therefore, sizeof(set->freelist) * (number of calls)
 = 44 bytes * 38500155 = 1615 Mbytes.

> And I'm worried about adding even a small amount of overhead to
> palloc/pfree --- on the vast majority of the profiles I look at, those
> are more expensive than AllocSetReset.

I don't worry about palloc, because the overhead increases only when
malloc is executed in AllocSetAlloc.  But I was worried about pfree,
too.  However, when palloc/pfree were executed many times, I did not
see a bad influence.  This is the result of executing 'select * from
accounts' 20 times.

original code:

    Each sample counts as 0.01 seconds.
      %   cumulative     self               self    total
    time    seconds    seconds      calls  s/call   s/call  name
    6.79       4.03       4.03      9599     0.00     0.00  appendBinaryStringInfo
    6.57       7.93       3.90  50005879     0.00     0.00  AllocSetAlloc
    5.63      11.27       3.34      1000     0.00     0.00  printtup
    5.61      14.60       3.33      1000     0.00     0.00  slot_deform_tuple
    5.36      17.78       3.18  50001421     0.00     0.00  AllocSetFree

patched code:

    Each sample counts as 0.01 seconds.
      %   cumulative     self               self    total
    time    seconds    seconds      calls  s/call   s/call  name
    8.07       4.78       4.78      9599     0.00     0.00  appendBinaryStringInfo
    7.23       9.06       4.28  50005879     0.00     0.00  AllocSetAlloc
    5.40      12.26       3.20      1000     0.00     0.00  printtup
    5.20      15.34       3.08      1000     0.00     0.00  slot_deform_tuple
    5.13      18.38       3.04  50001421     0.00     0.00  AllocSetFree

I think that it is difficult to measure the influence that this patch
has on palloc/pfree.

> I duplicated your test case to see where the reset calls were coming
> from, and got this: (Does this match your profile?  I only ran the
> query 5 times not 10.)

I'm sorry; my profile in the previous mail was from 11 runs, not 10.
Your profile and my profile match.

> This shows that the majority of the resets are coming from the hashjoin
> code not the aggregation code.

You are right.  I measured where MemoryContextReset had been called
from (the SQL was executed once):

    filename(line)          function                  number of calls
    --------------------------------------------------------------
    execGrouping.c(65)      execTuplesMatch                      45
    execScan.c(86)          ExecScan                             57
    nodeAgg.c(924)          agg_fill_hash_table                  50
    nodeAgg.c(979)          agg_retrieve_hash_table               5
    nodeHash.c(669)         ExecHashGetHashValue                 55
    nodeHash.c(785)         ExecScanHashBucket                   50
    nodeHashjoin.c(108)     ExecHashJoin                         51
    nodeHashjoin.c(217)     ExecHashJoin                         50
    --------------------------------------------------------------
    Total                                                   3500013

Many are from hashjoin; the others are from grouping, table/index
scans, and aggregation by hash.

And I measured the number of times the MemSet in AllocSetReset could
be avoided:

    avoided MemSet        358
    executed MemSet         7
    -------------------------
    Total             3500015

(The execution count of AllocSetReset is more than twice that of
MemoryContextReset because MemoryContextResetAndDeleteChildren is
called in PostgresMain.)

regards,

--- Atsushi Ogawa
Re: [PATCHES] AllocSetReset improvement
a_ogawa <[EMAIL PROTECTED]> writes:
> Tom Lane <[EMAIL PROTECTED]> writes:
> > Hmm.  It doesn't seem like this could be a big win overall.  It's not
> > possible to save a whole lot of cycles inside AllocSetReset, because
> > if there isn't anything for it to do, it should fall through pretty
> > quickly anyway.
>
> I thought that I was able to avoid the MemSet in AllocSetReset:
>     MemSet(set->freelist, 0, sizeof(set->freelist));

True, but the memset is only zeroing a dozen or so pointers, so there's
not *that* much to it.  It might be worth changing it to MemSetAligned
--- should be safe on any architecture I can think of, and would save
at least a few instructions.

> > And I'm worried about adding even a small amount of overhead to
> > palloc/pfree --- on the vast majority of the profiles I look at,
> > those are more expensive than AllocSetReset.
>
> I don't worry about palloc, because the overhead increases only when
> malloc is executed in AllocSetAlloc.  But I was worried about pfree,
> too.  However, when palloc/pfree were executed many times, I did not
> see a bad influence.

In most of the tests I've looked at, palloc/pfree are executed far more
often than AllocSetReset, and so adding even one instruction there to
sometimes save a little work in AllocSetReset is a bad tradeoff.  You
can't optimize to make just one test case look good.

I have another idea though: in the case you are looking at, I think
that the context in question never gets any allocations at all, which
means its blocks list stays null.  We could move the MemSet inside the
"if (blocks)" test --- if there are no blocks allocated to the context,
it surely hasn't got any chunks either, so the MemSet is unnecessary.
That should give us most of the speedup without any extra cost in
palloc/pfree.

			regards, tom lane
[PATCHES] AllocSetReset improvement
In SQL that executes aggregation, AllocSetReset is called many times
and spends a lot of cycles.  This patch saves the cycles spent by
AllocSetReset.

The idea of the patch is to add a flag to AllocSetContext.  This flag
shows whether AllocSetReset has any work to do.

The effect of the patch that I measured is as follows:

o Data for the test was created by 'pgbench -i -s 5'.

o Test SQL:

    select b.bid, sum(a.abalance), avg(a.abalance)
    from accounts a, branches b
    where a.bid = b.bid group by b.bid;

o I measured the time taken to execute the SQL ten times.

  (1) Linux (CPU: Pentium III, compiler option: -O2)
      - original: 31.310s
      - patched : 28.812s
  (2) Linux (CPU: Pentium 4, compiler option: -O2)
      - original:  8.953s
      - patched :  7.753s
  (3) Solaris (CPU: Ultra SPARC III, compiler option: -O2)
      - original: 41.8s
      - patched : 38.6s

o gprof result (Linux, compiler options: -O2 -pg -DLINUX_PROFILE)

- original

    Each sample counts as 0.01 seconds.
      %   cumulative     self               self    total
    time    seconds    seconds      calls  s/call   s/call  name
    9.20       3.06       3.06  38500155     0.00     0.00  AllocSetReset
    8.99       6.05       2.99  27500055     0.00     0.00  slot_deform_tuple
    7.40       8.51       2.46      4400     0.00     0.00  slot_getattr
    4.81      10.11       1.60  27500110     0.00     0.00  ExecEvalVar
    3.64      11.32       1.21  38500143     0.00     0.00  MemoryContextReset
    3.64      12.53       1.21   6007086     0.00     0.00  LWLockRelease
    3.31      13.63       1.10   5500079     0.00     0.00  heapgettup

- patched

    Each sample counts as 0.01 seconds.
      %   cumulative     self               self    total
    time    seconds    seconds      calls  s/call   s/call  name
    8.76       2.82       2.82  27500055     0.00     0.00  slot_deform_tuple
    7.73       5.31       2.49      4400     0.00     0.00  slot_getattr
    4.72       6.83       1.52  27500110     0.00     0.00  ExecEvalVar
    4.32       8.22       1.39   5500011     0.00     0.00  ExecHashJoin
    4.28       9.60       1.38   6007086     0.00     0.00  LWLockRelease
    4.04      10.90       1.30  38500143     0.00     0.00  MemoryContextReset
    3.63      12.07       1.17   5500079     0.00     0.00  heapgettup
    3.04      13.05       0.98   5499989     0.00     0.00  ExecMakeFunctionResultNoSets
    2.67      13.91       0.86   5500110     0.00     0.00  ExecProject
    2.61      14.75       0.84      1100     0.00     0.00  advance_transition_function
    2.55      15.57       0.82  38500155     0.00     0.00  AllocSetReset

regards,

--- Atsushi Ogawa

[ Attachment: aset.c.patch (binary data) ]
Re: [PATCHES] AllocSetReset improvement
a_ogawa <[EMAIL PROTECTED]> writes:
> In SQL that executes aggregation, AllocSetReset is called many times
> and spends a lot of cycles.  This patch saves the cycles spent by
> AllocSetReset.

Hmm.  It doesn't seem like this could be a big win overall.  It's not
possible to save a whole lot of cycles inside AllocSetReset, because if
there isn't anything for it to do, it should fall through pretty
quickly anyway.  And I'm worried about adding even a small amount of
overhead to palloc/pfree --- on the vast majority of the profiles I
look at, those are more expensive than AllocSetReset.

I duplicated your test case to see where the reset calls were coming
from, and got this:

                0.00    0.00       25/17500065     agg_retrieve_hash_table [453]
                0.29    0.10  2499975/17500065     execTuplesMatch [33]
                0.29    0.10      250/17500065     agg_fill_hash_table <cycle 3> [22]
                0.29    0.10      250/17500065     ExecScanHashBucket [32]
                0.29    0.10  2500025/17500065     ExecHashGetHashValue [53]
                0.29    0.10  2500035/17500065     ExecScan [52]
                0.58    0.20      505/17500065     ExecHashJoin <cycle 3> [15]
[30]     5.8    2.04    0.70 17500065              MemoryContextReset [30]
                0.70    0.00 17500065/17500065     MemoryContextResetChildren [56]

(Does this match your profile?  I only ran the query 5 times not 10.)

This shows that the majority of the resets are coming from the hashjoin
code not the aggregation code.  I wonder if we could reduce the number
of reset calls in that logic.  I think I threw in a few extras in the
recent hashjoin revisions, feeling that they were cheap enough that I
didn't need to think too hard about whether any were unnecessary.  I
think what you've really proven here is that that was a wrong guess ...

			regards, tom lane