On Fri, Nov 16, 2012 at 7:33 PM, Merlin Moncure <mmonc...@gmail.com> wrote:
> On Fri, Nov 16, 2012 at 4:32 AM, Amit Kapila <amit.kap...@huawei.com> wrote:
> > On Thursday, November 15, 2012 10:19 PM Merlin Moncure wrote:
> >> On Thu, Nov 15, 2012 at 10:25 AM, Amit Kapila <amit.kap...@huawei.com> wrote:
> >>
> >> Sure, although in scans we are using a ring buffer as well, so in a
> >> practical sense the results are pretty close.
> >>
> >> > b. Considering that sometimes people want VACUUM to run when the
> >> > system is not busy, the chances of generating more overall I/O in
> >> > the system are higher.
> >>
> >> It's hard to imagine the overall I/O load increasing.  However, longer
> >> vacuum times should be considered.  A bigger issue is that we are
> >> missing an early opportunity to set the all-visible bit.  Vacuum is
> >> doing:
> >>
> >>             if (all_visible)
> >>             {
> >>                 TransactionId xmin;
> >>
> >>                 if (!(tuple.t_data->t_infomask & HEAP_XMIN_COMMITTED))
> >>                 {
> >>                     all_visible = false;
> >>                     break;
> >>                 }
> >>
> >> Assuming the cache is working and vacuum rolls around after a scan,
> >> you lose the opportunity to set the all_visible flag where previously
> >> it would have been set, thereby dismantling the positive effect of an
> >> index-only scan.  AFAICT, this is the only case where vacuum is
> >> directly interfacing with hint bits.  This could be construed as a
> >> violation of the heapam API?  Maybe, if that's an issue, we could
> >> proxy that check to a heaptuple/tqual.c-maintained function (in the
> >> same manner as HeapTupleSetHintBits) so that the cache bit could be
> >> checked uniformly.
> >
> > I think we need to devise some tests to check that this change has not
> > impacted Vacuum or anything else.  I will devise tests during review of
> > this patch.  However, if you have more ideas, please share them, as
> > that will make the tests of this patch stronger.
> > For functional/performance testing of this patch, one of my colleagues,
> > Hari Babu, will also work on it along with me.
>
> Thanks.  So far, Atri ran some quick-and-dirty tests to see if there
> were any regressions.  He benched a large scan followed by a vacuum.  So
> far, the results are inconclusive, so better testing methodologies will
> definitely be greatly appreciated.  One of the challenges with working
> in this part of the code is that it's quite difficult to make changes
> without impacting at least some workloads.
>
> merlin

Thanks a ton, Amit, and your colleague Hari, for volunteering to review
the patch.

I ran some benchmarks and came up with the following results:

With our code:

atri@atri-Aspire-4740:~/postgresql-9.2.1/contrib/pgbench$ ./pgbench atri1 -n -f bench2.sql -c 8 -T 300
transaction type: Custom query
scaling factor: 1
query mode: simple
number of clients: 8
number of threads: 1
duration: 300 s
number of transactions actually processed: 412
tps = 1.366142 (including connections establishing)
tps = 1.366227 (excluding connections establishing)

Without our code:

atri@atri-Aspire-4740:~/postgresql-9.2.1/contrib/pgbench$ ./pgbench atri1 -n -f bench2.sql -c 8 -T 300
transaction type: Custom query
scaling factor: 1
query mode: simple
number of clients: 8
number of threads: 1
duration: 300 s
number of transactions actually processed: 378
tps = 1.244333 (including connections establishing)
tps = 1.244447 (excluding connections establishing)

That works out to roughly a 10% improvement in tps with the patch applied.

The SQL file is attached. Please let us know if you need any more details.

Atri

--
Regards,

Atri
*l'apprenant*
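P.S. To make Merlin's proxy-function suggestion above a bit more concrete,
here is a minimal sketch of what a tqual.c-maintained check might look
like.  This is illustration only, not code from the patch: the names
HeapTupleHeaderXminCommitted and TransactionIdInHintBitCache are invented,
and the actual cache interface may differ.

    /*
     * Sketch only: a tqual.c helper that vacuum could call instead of
     * testing t_infomask directly, so the backend-local cache of known
     * committed xids is consulted in one place (mirroring how
     * HeapTupleSetHintBits centralizes hint-bit setting).
     */
    bool
    HeapTupleHeaderXminCommitted(HeapTupleHeader tuple)
    {
        /* Fast path: the hint bit is already set on the tuple itself. */
        if (tuple->t_infomask & HEAP_XMIN_COMMITTED)
            return true;

        /*
         * Hypothetical lookup into the patch's commit cache: returns true
         * if xmin is cached as committed even though setting the on-page
         * hint bit was skipped to avoid dirtying the page.
         */
        return TransactionIdInHintBitCache(HeapTupleHeaderGetXmin(tuple));
    }

With that, the vacuum check quoted above would become something like:

    if (!HeapTupleHeaderXminCommitted(tuple.t_data))
    {
        all_visible = false;
        break;
    }

which would preserve the early all_visible setting when the cache knows
the xmin is committed.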