On Mon, Sep 8, 2014 at 10:12 PM, Merlin Moncure <mmonc...@gmail.com> wrote:
>
> On Fri, Sep 5, 2014 at 6:47 AM, Amit Kapila <amit.kapil...@gmail.com> wrote:
> > Client Count     8      16      32      64      128
> > HEAD (tps)       58614  107370  140717  104357  65010
> > Patch (tps)      60092  113564  165014  213848  216065
> >
> > This data is the median of 3 runs; a detailed report is attached to this mail.
> > I have not repeated the test for all configurations, as there is no
> > major change in design/algorithm that could affect performance.
> > Mark has already taken TPC-B data, which shows that there is
> > no problem with it; however, I will also run it once with the latest
> > version.
>
> Well, these numbers are pretty much amazing.  Question: there's
> obviously quite a bit of contention left; do you think there's still a
> significant amount of time spent in the clock sweep, or is the
> primary bottleneck now the buffer mapping locks?

I think contention around the clock sweep is now very minimal, and
contention on the buffer mapping locks has been reduced significantly
(you can refer to the LWLock stats data I posted upthread after this
patch), but we might be able to get more out of it by improving hash
table concurrency.  Apart from both of the above, the next thing I
have seen in profiles is palloc (at least that is what I remember from
the profiling I did a few months back during development of this
patch).  That seemed to me a totally different optimisation at the
time, so I left it for another patch.
Another point is that the machine on which I am running the
performance tests has 16 cores (64 hardware threads), so I am not
sure how much more scaling we can expect.


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
