Re: [PATCH v3 00/20] Speculative page faults
On Wed, Oct 04, 2017 at 08:50:49AM +0200, Laurent Dufour wrote:
> On 25/09/2017 18:27, Alexei Starovoitov wrote:
> > On Mon, Sep 18, 2017 at 12:15 AM, Laurent Dufour wrote:
> >> Despite the unprovable lockdep warning raised by Sergey, I didn't get any
> >> feedback on this series.
> >>
> >> Is there a chance to get it moved upstream ?
> >
> > what is the status ?
> > We're eagerly looking forward for this set to land,
> > since we have several use cases for tracing that
> > will build on top of this set as discussed at Plumbers.
>
> Hi Alexei,
>
> Based on the Plumbers notes [1], it sounds like the use case is tied to
> BPF tracing, where a call to find_vma() will be made in a process's
> context to fetch user space symbols.
>
> Am I right ?
> Is the find_vma() call made in the context of the process owning the mm
> struct ?

Hi Laurent,

we're thinking about several use cases on top of your work.

First one is translation of a user address to a file_handle, where we
need to do find_vma() from the preempt-disabled context of a bpf
program. My understanding is that srcu should solve that nicely.

Second is making probe_read() try harder when an address is causing a
minor fault. We're thinking that find_vma() followed by some new
lightweight filemap_access() that doesn't sleep will do the trick.

In both cases the program will be accessing current->mm
Re: [PATCH v3 00/20] Speculative page faults
On 25/09/2017 18:27, Alexei Starovoitov wrote:
> On Mon, Sep 18, 2017 at 12:15 AM, Laurent Dufour wrote:
>> Despite the unprovable lockdep warning raised by Sergey, I didn't get any
>> feedback on this series.
>>
>> Is there a chance to get it moved upstream ?
>
> what is the status ?
> We're eagerly looking forward for this set to land,
> since we have several use cases for tracing that
> will build on top of this set as discussed at Plumbers.

Hi Alexei,

Based on the Plumbers notes [1], it sounds like the use case is tied to
BPF tracing, where a call to find_vma() will be made in a process's
context to fetch user space symbols.

Am I right ?
Is the find_vma() call made in the context of the process owning the mm
struct ?

Thanks,
Laurent.

[1] https://etherpad.openstack.org/p/LPC2017_Tracing
Re: [PATCH v3 00/20] Speculative page faults
On 03/10/2017 03:27, Michael Ellerman wrote:
> Laurent Dufour writes:
>
>> Hi Andrew,
>>
>> On 28/09/2017 22:38, Andrew Morton wrote:
>>> On Thu, 28 Sep 2017 14:29:02 +0200 Laurent Dufour wrote:
>>>
>>>>> Laurent's [0/n] provides some nice-looking performance benefits for
>>>>> workloads which are chosen to show performance benefits(!) but, alas,
>>>>> no quantitative testing results for workloads which we may suspect will
>>>>> be harmed by the changes(?).  Even things as simple as impact upon
>>>>> single-threaded pagefault-intensive workloads and its effect upon
>>>>> CONFIG_SMP=n .text size?
>>>>
>>>> I forgot to mention in my previous email the impact on the .text section.
>>>>
>>>> Here are the metrics I got :
>>>>
>>>> .text size   UP        SMP       Delta
>>>> 4.13-mmotm   8444201   8964137   6.16%
>>>> '' +spf      8452041   8971929   6.15%
>>>> Delta        0.09%     0.09%
>>>>
>>>> No major impact as you can see.
>>>
>>> 8k text increase seems rather a lot actually.  That's a lot more
>>> userspace cachelines that get evicted during a fault...
>>>
>>> Is the feature actually beneficial on uniprocessor?
>>
>> This is useless on uniprocessor, and I will disable it on x86 when !SMP
>> by not defining __HAVE_ARCH_CALL_SPF.
>> So the speculative page fault handler will not be built, but the vm
>> sequence counter and the SRCU stuff will still be there. I may also
>> disable it through a macro when __HAVE_ARCH_CALL_SPF is not defined,
>> but this may obfuscate the code a bit...
>>
>> On ppc64, as this feature requires Book3S, it can't be built without
>> SMP support.
>
> Book3S doesn't force SMP, eg. PMAC is Book3S but can be built with SMP=n.
>
> It's true that POWERNV and PSERIES both force SMP, and those are the
> platforms used on modern Book3S CPUs.

Thanks Michael, I'll add a check on CONFIG_SMP on ppc too.

Laurent.
Re: [PATCH v3 00/20] Speculative page faults
Laurent Dufour writes:

> Hi Andrew,
>
> On 28/09/2017 22:38, Andrew Morton wrote:
>> On Thu, 28 Sep 2017 14:29:02 +0200 Laurent Dufour wrote:
>>
>>>> Laurent's [0/n] provides some nice-looking performance benefits for
>>>> workloads which are chosen to show performance benefits(!) but, alas,
>>>> no quantitative testing results for workloads which we may suspect will
>>>> be harmed by the changes(?).  Even things as simple as impact upon
>>>> single-threaded pagefault-intensive workloads and its effect upon
>>>> CONFIG_SMP=n .text size?
>>>
>>> I forgot to mention in my previous email the impact on the .text section.
>>>
>>> Here are the metrics I got :
>>>
>>> .text size   UP        SMP       Delta
>>> 4.13-mmotm   8444201   8964137   6.16%
>>> '' +spf      8452041   8971929   6.15%
>>> Delta        0.09%     0.09%
>>>
>>> No major impact as you can see.
>>
>> 8k text increase seems rather a lot actually.  That's a lot more
>> userspace cachelines that get evicted during a fault...
>>
>> Is the feature actually beneficial on uniprocessor?
>
> This is useless on uniprocessor, and I will disable it on x86 when !SMP
> by not defining __HAVE_ARCH_CALL_SPF.
> So the speculative page fault handler will not be built, but the vm
> sequence counter and the SRCU stuff will still be there. I may also
> disable it through a macro when __HAVE_ARCH_CALL_SPF is not defined,
> but this may obfuscate the code a bit...
>
> On ppc64, as this feature requires Book3S, it can't be built without
> SMP support.

Book3S doesn't force SMP, eg. PMAC is Book3S but can be built with SMP=n.

It's true that POWERNV and PSERIES both force SMP, and those are the
platforms used on modern Book3S CPUs.

cheers
Re: [PATCH v3 00/20] Speculative page faults
Hi Andrew,

On 28/09/2017 22:38, Andrew Morton wrote:
> On Thu, 28 Sep 2017 14:29:02 +0200 Laurent Dufour wrote:
>
>>> Laurent's [0/n] provides some nice-looking performance benefits for
>>> workloads which are chosen to show performance benefits(!) but, alas,
>>> no quantitative testing results for workloads which we may suspect will
>>> be harmed by the changes(?).  Even things as simple as impact upon
>>> single-threaded pagefault-intensive workloads and its effect upon
>>> CONFIG_SMP=n .text size?
>>
>> I forgot to mention in my previous email the impact on the .text section.
>>
>> Here are the metrics I got :
>>
>> .text size   UP        SMP       Delta
>> 4.13-mmotm   8444201   8964137   6.16%
>> '' +spf      8452041   8971929   6.15%
>> Delta        0.09%     0.09%
>>
>> No major impact as you can see.
>
> 8k text increase seems rather a lot actually.  That's a lot more
> userspace cachelines that get evicted during a fault...
>
> Is the feature actually beneficial on uniprocessor?

This is useless on uniprocessor, and I will disable it on x86 when !SMP
by not defining __HAVE_ARCH_CALL_SPF.
So the speculative page fault handler will not be built, but the vm
sequence counter and the SRCU stuff will still be there. I may also
disable it through a macro when __HAVE_ARCH_CALL_SPF is not defined, but
this may obfuscate the code a bit...

On ppc64, as this feature requires Book3S, it can't be built without SMP
support.

I rebuilt the code on my x86 guest with the following patch applied:

--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -260,7 +260,7 @@ enum page_cache_mode {
 /*
  * Advertise that we call the Speculative Page Fault handler.
  */
-#ifdef CONFIG_X86_64
+#if defined(CONFIG_X86_64) && defined(CONFIG_SMP)
 #define __HAVE_ARCH_CALL_SPF
 #endif

And this time I got the following size on UP :

             UP
4.13-mmotm   8444201
'' +spf      8447945  (previously 8452041)
             +3744

If I disable all the vm_sequence operations and the SRCU stuff this
would lead to 0.

Thanks,
Laurent.
Re: [PATCH v3 00/20] Speculative page faults
On Thu, 28 Sep 2017 14:29:02 +0200 Laurent Dufour wrote:

>> Laurent's [0/n] provides some nice-looking performance benefits for
>> workloads which are chosen to show performance benefits(!) but, alas,
>> no quantitative testing results for workloads which we may suspect will
>> be harmed by the changes(?).  Even things as simple as impact upon
>> single-threaded pagefault-intensive workloads and its effect upon
>> CONFIG_SMP=n .text size?
>
> I forgot to mention in my previous email the impact on the .text section.
>
> Here are the metrics I got :
>
> .text size   UP        SMP       Delta
> 4.13-mmotm   8444201   8964137   6.16%
> '' +spf      8452041   8971929   6.15%
> Delta        0.09%     0.09%
>
> No major impact as you can see.

8k text increase seems rather a lot actually.  That's a lot more
userspace cachelines that get evicted during a fault...

Is the feature actually beneficial on uniprocessor?
Re: [PATCH v3 00/20] Speculative page faults
Hi Andrew,

On 26/09/2017 01:34, Andrew Morton wrote:
> On Mon, 25 Sep 2017 09:27:43 -0700 Alexei Starovoitov wrote:
>
>> On Mon, Sep 18, 2017 at 12:15 AM, Laurent Dufour wrote:
>>> Despite the unprovable lockdep warning raised by Sergey, I didn't get any
>>> feedback on this series.
>>>
>>> Is there a chance to get it moved upstream ?
>>
>> what is the status ?
>> We're eagerly looking forward for this set to land,
>> since we have several use cases for tracing that
>> will build on top of this set as discussed at Plumbers.
>
> There has been sadly little review and testing so far :(
>
> I'll be taking a close look at it all over the next couple of weeks.
>
> One terribly important thing (especially for a patchset this large and
> intrusive) is the rationale for merging it: the justification, usually
> in the form of end-user benefit.
>
> Laurent's [0/n] provides some nice-looking performance benefits for
> workloads which are chosen to show performance benefits(!) but, alas,
> no quantitative testing results for workloads which we may suspect will
> be harmed by the changes(?).  Even things as simple as impact upon
> single-threaded pagefault-intensive workloads and its effect upon
> CONFIG_SMP=n .text size?

I forgot to mention in my previous email the impact on the .text section.

Here are the metrics I got :

.text size   UP        SMP       Delta
4.13-mmotm   8444201   8964137   6.16%
'' +spf      8452041   8971929   6.15%
Delta        0.09%     0.09%

No major impact as you can see.

Thanks,
Laurent

> If you have additional usecases then please, spell them out for us in
> full detail so we can better understand the benefits which this
> patchset provides.
Re: [PATCH v3 00/20] Speculative page faults
Hi,

On 26/09/2017 01:34, Andrew Morton wrote:
> On Mon, 25 Sep 2017 09:27:43 -0700 Alexei Starovoitov wrote:
>
>> On Mon, Sep 18, 2017 at 12:15 AM, Laurent Dufour wrote:
>>> Despite the unprovable lockdep warning raised by Sergey, I didn't get any
>>> feedback on this series.
>>>
>>> Is there a chance to get it moved upstream ?
>>
>> what is the status ?
>> We're eagerly looking forward for this set to land,
>> since we have several use cases for tracing that
>> will build on top of this set as discussed at Plumbers.
>
> There has been sadly little review and testing so far :(

I do agree and I can only encourage people to do so :/

> I'll be taking a close look at it all over the next couple of weeks.

Thanks Andrew for giving it a close look.

> One terribly important thing (especially for a patchset this large and
> intrusive) is the rationale for merging it: the justification, usually
> in the form of end-user benefit.

The benefit is only for multi-threaded processes. But even on *small*
systems with 16 CPUs, there is a real benefit.

> Laurent's [0/n] provides some nice-looking performance benefits for
> workloads which are chosen to show performance benefits(!) but, alas,
> no quantitative testing results for workloads which we may suspect will
> be harmed by the changes(?).

I did test with kernbench, involving gcc/ld which are not
multi-threaded, AFAIK, and I didn't see any impact. But if you know of
additional tests I should give a try, please advise.

Regarding ebizzy, it was designed to simulate a web server's activity,
so I guess there will be improvements when running real web servers.

> Even things as simple as impact upon single-threaded
> pagefault-intensive workloads and its effect upon CONFIG_SMP=n .text
> size?
>
> If you have additional usecases then please, spell them out for us in
> full detail so we can better understand the benefits which this
> patchset provides.

The other use case I'm aware of is an in-memory database, where the
performance improvement is really significant, as I mentioned in the
header of my series.

Cheers,
Laurent.
Re: [PATCH v3 00/20] Speculative page faults
Hi Alexei,

On 25/09/2017 18:27, Alexei Starovoitov wrote:
> On Mon, Sep 18, 2017 at 12:15 AM, Laurent Dufour wrote:
>> Despite the unprovable lockdep warning raised by Sergey, I didn't get any
>> feedback on this series.
>>
>> Is there a chance to get it moved upstream ?
>
> what is the status ?

As mentioned by Andrew this lacks review and test; what about your
Ack/Review/Tested-by ?

> We're eagerly looking forward for this set to land,
> since we have several use cases for tracing that
> will build on top of this set as discussed at Plumbers.

Unfortunately I was not able to attend Plumbers this year, but I'll be
pleased, as well as the MM mailing list readers, to get your feedback
and use cases for this series.

Thanks,
Laurent.
Re: [PATCH v3 00/20] Speculative page faults
On Mon, 25 Sep 2017 09:27:43 -0700 Alexei Starovoitov wrote:

> On Mon, Sep 18, 2017 at 12:15 AM, Laurent Dufour wrote:
>> Despite the unprovable lockdep warning raised by Sergey, I didn't get any
>> feedback on this series.
>>
>> Is there a chance to get it moved upstream ?
>
> what is the status ?
> We're eagerly looking forward for this set to land,
> since we have several use cases for tracing that
> will build on top of this set as discussed at Plumbers.

There has been sadly little review and testing so far :(

I'll be taking a close look at it all over the next couple of weeks.

One terribly important thing (especially for a patchset this large and
intrusive) is the rationale for merging it: the justification, usually
in the form of end-user benefit.

Laurent's [0/n] provides some nice-looking performance benefits for
workloads which are chosen to show performance benefits(!) but, alas,
no quantitative testing results for workloads which we may suspect will
be harmed by the changes(?).  Even things as simple as impact upon
single-threaded pagefault-intensive workloads and its effect upon
CONFIG_SMP=n .text size?

If you have additional usecases then please, spell them out for us in
full detail so we can better understand the benefits which this
patchset provides.
Re: [PATCH v3 00/20] Speculative page faults
On Mon, Sep 18, 2017 at 12:15 AM, Laurent Dufour wrote:
> Despite the unprovable lockdep warning raised by Sergey, I didn't get any
> feedback on this series.
>
> Is there a chance to get it moved upstream ?

what is the status ?
We're eagerly looking forward for this set to land,
since we have several use cases for tracing that
will build on top of this set as discussed at Plumbers.
Re: [PATCH v3 00/20] Speculative page faults
Despite the unprovable lockdep warning raised by Sergey, I didn't get any
feedback on this series.

Is there a chance to get it moved upstream ?

Thanks,
Laurent.

On 08/09/2017 20:06, Laurent Dufour wrote:
> This is a port on kernel 4.13 of the work done by Peter Zijlstra to
> handle page faults without holding the mm semaphore [1].
>
> The idea is to try to handle user space page faults without holding the
> mmap_sem. This should allow better concurrency for massively threaded
> processes, since the page fault handler will not wait for other threads'
> memory layout changes to be done, assuming that those changes are done
> in another part of the process's memory space. This type of page fault
> is named speculative page fault. If the speculative page fault fails
> because a concurrent change is detected or because the underlying PMD
> or PTE tables are not yet allocated, its processing fails and a classic
> page fault is then tried.
>
> The speculative page fault (SPF) has to look for the VMA matching the
> fault address without holding the mmap_sem, so the VMA list is now
> managed using SRCU, allowing lockless walking. The only impact would be
> the deferred file dereferencing in the case of a file mapping, since the
> file pointer is released once the SRCU cleaning is done. This patch
> relies on the change done recently by Paul McKenney in SRCU which now
> runs a callback per CPU instead of per SRCU structure [1].
>
> The VMA's attributes checked during the speculative page fault
> processing have to be protected against parallel changes. This is done
> by using a per-VMA sequence lock. This sequence lock allows the
> speculative page fault handler to quickly check for parallel changes in
> progress and to abort the speculative page fault in that case.
>
> Once the VMA is found, the speculative page fault handler checks the
> VMA's attributes to verify whether the page fault can be handled
> correctly or not. Thus the VMA is protected through a sequence lock
> which allows fast detection of concurrent VMA changes. If such a change
> is detected, the speculative page fault is aborted and a *classic* page
> fault is tried. VMA sequence locks are taken when VMA attributes which
> are checked during the page fault are modified.
>
> When the PTE is fetched, the VMA is checked to see if it has been
> changed, so once the page table is locked, the VMA is valid; any other
> change leading to touching this PTE will need to lock the page table,
> so no parallel change is possible at this time.
>
> Compared to Peter's initial work, this series introduces a spin_trylock
> when dealing with speculative page faults. This is required to avoid a
> deadlock when handling a page fault while a TLB invalidate is requested
> by another CPU holding the PTE. Another change is due to a lock
> dependency issue with mapping->i_mmap_rwsem.
>
> In addition, some VMA field values which are used once the PTE is
> unlocked at the end of the page fault path are saved into the vm_fault
> structure, to use the values matching the VMA at the time the PTE was
> locked.
>
> This series only supports VMAs with no vm_ops defined, so huge pages
> and mapped files are not managed with the speculative path. In
> addition, transparent huge pages are not supported. Once this series is
> accepted upstream I'll extend the support to mapped files and
> transparent huge pages.
>
> This series builds on top of v4.13.9-mm1 and is functional on x86 and
> PowerPC.
>
> Tests have been made using a large commercial in-memory database on a
> PowerPC system with 752 CPUs, using RFC v5, a previous version of this
> series. The results are very encouraging since the loading of the 2TB
> database was faster by 14% with the speculative page fault.
>
> Using the ebizzy test [3], which spreads a lot of threads, the results
> are good when running on both a large or a small system. When using
> kernbench, the results are quite similar, which is expected as not so
> many multithreaded processes are involved. But there is no performance
> degradation either, which is good.
>
> --
> Benchmarks results
>
> Note these tests have been made on top of 4.13.0-mm1.
>
> Ebizzy:
> -------
> The test counts the number of records per second it can manage; the
> higher the better. I run it like this: 'ebizzy -mTRp'. To get
> consistent results I repeated the test 100 times and measured the
> average result, mean deviation, max and min.
>
> - 16 CPUs x86 VM
> Records/s       4.13.0-mm1  4.13.0-mm1-spf  delta
> Average         13217.90    65765.94        +397.55%
> Mean deviation  690.37      2609.36         +277.97%
> Max             16726       77675           +364.40%
> Min             12194       61634           +405.45%
>
> - 80 CPUs Power 8 node:
> Records/s       4.13.0-mm1  4.13.0-mm1-spf  delta
> Average         38175.40    67635.55        +77.17%
> Mean deviation  600.09      2349.66
[PATCH v3 00/20] Speculative page faults
This is a port on kernel 4.13 of the work done by Peter Zijlstra to
handle page faults without holding the mm semaphore [1].

The idea is to try to handle user space page faults without holding the
mmap_sem. This should allow better concurrency for massively threaded
processes, since the page fault handler will not wait for other threads'
memory layout changes to be done, assuming that those changes are done
in another part of the process's memory space. This type of page fault
is named speculative page fault. If the speculative page fault fails
because a concurrent change is detected or because the underlying PMD or
PTE tables are not yet allocated, its processing fails and a classic
page fault is then tried.

The speculative page fault (SPF) has to look for the VMA matching the
fault address without holding the mmap_sem, so the VMA list is now
managed using SRCU, allowing lockless walking. The only impact would be
the deferred file dereferencing in the case of a file mapping, since the
file pointer is released once the SRCU cleaning is done. This patch
relies on the change done recently by Paul McKenney in SRCU which now
runs a callback per CPU instead of per SRCU structure [1].

The VMA's attributes checked during the speculative page fault
processing have to be protected against parallel changes. This is done
by using a per-VMA sequence lock. This sequence lock allows the
speculative page fault handler to quickly check for parallel changes in
progress and to abort the speculative page fault in that case.

Once the VMA is found, the speculative page fault handler checks the
VMA's attributes to verify whether the page fault can be handled
correctly or not. Thus the VMA is protected through a sequence lock
which allows fast detection of concurrent VMA changes. If such a change
is detected, the speculative page fault is aborted and a *classic* page
fault is tried. VMA sequence locks are taken when VMA attributes which
are checked during the page fault are modified.
When the PTE is fetched, the VMA is checked to see if it has been
changed, so once the page table is locked, the VMA is valid; any other
change leading to touching this PTE will need to lock the page table,
so no parallel change is possible at this time.

Compared to Peter's initial work, this series introduces a spin_trylock
when dealing with speculative page faults. This is required to avoid a
deadlock when handling a page fault while a TLB invalidate is requested
by another CPU holding the PTE. Another change is due to a lock
dependency issue with mapping->i_mmap_rwsem.

In addition, some VMA field values which are used once the PTE is
unlocked at the end of the page fault path are saved into the vm_fault
structure, to use the values matching the VMA at the time the PTE was
locked.

This series only supports VMAs with no vm_ops defined, so huge pages and
mapped files are not managed with the speculative path. In addition,
transparent huge pages are not supported. Once this series is accepted
upstream I'll extend the support to mapped files and transparent huge
pages.

This series builds on top of v4.13.9-mm1 and is functional on x86 and
PowerPC.

Tests have been made using a large commercial in-memory database on a
PowerPC system with 752 CPUs, using RFC v5, a previous version of this
series. The results are very encouraging since the loading of the 2TB
database was faster by 14% with the speculative page fault.

Using the ebizzy test [3], which spreads a lot of threads, the results
are good when running on both a large or a small system. When using
kernbench, the results are quite similar, which is expected as not so
many multithreaded processes are involved. But there is no performance
degradation either, which is good.

--
Benchmarks results

Note these tests have been made on top of 4.13.0-mm1.

Ebizzy:
-------
The test counts the number of records per second it can manage; the
higher the better. I run it like this: 'ebizzy -mTRp'. To get consistent
results I repeated the test 100 times and measured the average result,
mean deviation, max and min.

- 16 CPUs x86 VM
Records/s       4.13.0-mm1  4.13.0-mm1-spf  delta
Average         13217.90    65765.94        +397.55%
Mean deviation  690.37      2609.36         +277.97%
Max             16726       77675           +364.40%
Min             12194       61634           +405.45%

- 80 CPUs Power 8 node:
Records/s       4.13.0-mm1  4.13.0-mm1-spf  delta
Average         38175.40    67635.55        +77.17%
Mean deviation  600.09      2349.66         +291.55%
Max             39563       74292           +87.78%
Min             35846       62657           +74.79%

The number of records per second is far better with the speculative page
fault. The mean deviation is higher with the speculative page fault,
maybe because sometimes the faults are not handled in a speculative way,
leading to more variation. The numbers for the x86 guest are really
Re: [PATCH v3 00/20] Speculative page faults
On 08/09/2017 19:32, Laurent Dufour wrote:
> This is a port on kernel 4.13 of the work done by Peter Zijlstra to
> handle page faults without holding the mm semaphore [1].

Sorry for the noise, I got trouble sending the whole series through this
email. I will try again.

Cheers,
Laurent.
[PATCH v3 00/20] Speculative page faults
This is a port on kernel 4.13 of the work done by Peter Zijlstra to
handle page faults without holding the mm semaphore [1].

The idea is to try to handle user space page faults without holding the
mmap_sem. This should allow better concurrency for massively threaded
processes, since the page fault handler will not wait for other threads'
memory layout changes to be done, assuming that those changes are done
in another part of the process's memory space. This type of page fault
is named speculative page fault. If the speculative page fault fails
because a concurrent change is detected or because the underlying PMD or
PTE tables are not yet allocated, its processing fails and a classic
page fault is then tried.

The speculative page fault (SPF) has to look for the VMA matching the
fault address without holding the mmap_sem, so the VMA list is now
managed using SRCU, allowing lockless walking. The only impact would be
the deferred file dereferencing in the case of a file mapping, since the
file pointer is released once the SRCU cleaning is done. This patch
relies on the change done recently by Paul McKenney in SRCU which now
runs a callback per CPU instead of per SRCU structure [1].

The VMA's attributes checked during the speculative page fault
processing have to be protected against parallel changes. This is done
by using a per-VMA sequence lock. This sequence lock allows the
speculative page fault handler to quickly check for parallel changes in
progress and to abort the speculative page fault in that case.

Once the VMA is found, the speculative page fault handler checks the
VMA's attributes to verify whether the page fault can be handled
correctly or not. Thus the VMA is protected through a sequence lock
which allows fast detection of concurrent VMA changes. If such a change
is detected, the speculative page fault is aborted and a *classic* page
fault is tried. VMA sequence locks are taken when VMA attributes which
are checked during the page fault are modified.
When the PTE is fetched, the VMA is checked to see if it has been
changed, so once the page table is locked, the VMA is valid; any other
change leading to touching this PTE will need to lock the page table,
so no parallel change is possible at this time.

Compared to Peter's initial work, this series introduces a spin_trylock
when dealing with speculative page faults. This is required to avoid a
deadlock when handling a page fault while a TLB invalidate is requested
by another CPU holding the PTE. Another change is due to a lock
dependency issue with mapping->i_mmap_rwsem.

In addition, some VMA field values which are used once the PTE is
unlocked at the end of the page fault path are saved into the vm_fault
structure, to use the values matching the VMA at the time the PTE was
locked.

This series only supports VMAs with no vm_ops defined, so huge pages and
mapped files are not managed with the speculative path. In addition,
transparent huge pages are not supported. Once this series is accepted
upstream I'll extend the support to mapped files and transparent huge
pages.

This series builds on top of v4.13.9-mm1 and is functional on x86 and
PowerPC.

Tests have been made using a large commercial in-memory database on a
PowerPC system with 752 CPUs, using RFC v5, a previous version of this
series. The results are very encouraging since the loading of the 2TB
database was faster by 14% with the speculative page fault.

Using the ebizzy test [3], which spreads a lot of threads, the results
are good when running on both a large or a small system. When using
kernbench, the results are quite similar, which is expected as not so
many multithreaded processes are involved. But there is no performance
degradation either, which is good.

--
Benchmarks results

Note these tests have been made on top of 4.13.0-mm1.

Ebizzy:
-------
The test counts the number of records per second it can manage; the
higher the better. I run it like this: 'ebizzy -mTRp'. To get consistent
results I repeated the test 100 times and measured the average result,
mean deviation, max and min.

- 16 CPUs x86 VM
Records/s       4.13.0-mm1  4.13.0-mm1-spf  delta
Average         13217.90    65765.94        +397.55%
Mean deviation  690.37      2609.36         +277.97%
Max             16726       77675           +364.40%
Min             12194       61634           +405.45%

- 80 CPUs Power 8 node:
Records/s       4.13.0-mm1  4.13.0-mm1-spf  delta
Average         38175.40    67635.55        +77.17%
Mean deviation  600.09      2349.66         +291.55%
Max             39563       74292           +87.78%
Min             35846       62657           +74.79%

The number of records per second is far better with the speculative page
fault. The mean deviation is higher with the speculative page fault,
maybe because sometimes the faults are not handled in a speculative way,
leading to more variation. The numbers for the x86 guest are really