On Sun, 22 Dec 2019 21:14:44 +0000
Andrew Doran <a...@netbsd.org> wrote:

> Hi,
> 
> Anyone interested in taking a look?  This solves the problem we have
> with uvm_fpageqlock.  Here's the code, and the blurb is below:
> 
>       http://www.netbsd.org/~ad/2019/allocator.diff
> 
...
> 
> Results:
> 
>       This is from a "make -j96" kernel build, with a !DIAGNOSTIC,
>       GENERIC kernel on the system mentioned above.  System time is
>       the most interesting here.  With NUMA disabled in the BIOS:
> 
>              74.55 real      1635.13 user       725.04 sys   before
>              72.66 real      1653.86 user       593.19 sys   after
> 
>       With NUMA enabled in the BIOS & the allocator:
> 
>              76.81 real      1690.27 user       797.56 sys   before
>              71.10 real      1632.42 user       603.41 sys   after
> 
>       Lock contention before (no NUMA):
> 
>       Total%  Count   Time/ms          Lock          
>       ------ ------- --------- ----------------------
>        99.80 36756212 182656.88 uvm_fpageqlock       
> 
>       Lock contention after (no NUMA):
> 
>       Total%  Count   Time/ms          Lock
>       ------ ------- --------- ----------------------
>        20.21  196928    132.50 uvm_freelist_locks+40  <all>
>        18.72  180522    122.74 uvm_freelist_locks     <all>
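
The "uvm_freelist_locks+40" line above is lockstat reporting an offset
into a lock array: rather than one global uvm_fpageqlock, the free pages
are split across buckets, each with its own lock, so concurrent
allocations mostly take different locks.  Below is a minimal userspace
sketch of that idea; bucket_t, palloc(), pfree() and the bucket-stealing
fallback are illustrative assumptions of mine, not the actual code in
allocator.diff.

#include <pthread.h>
#include <stddef.h>

#define NBUCKET 4      /* e.g. one bucket per NUMA node or CPU package */

typedef struct page {
	struct page *pg_next;
} page_t;

typedef struct {
	pthread_mutex_t b_lock;   /* one lock per bucket, not one global */
	page_t *b_free;           /* this bucket's free-page list */
} bucket_t;

static bucket_t buckets[NBUCKET];

static void
buckets_init(void)
{
	for (int i = 0; i < NBUCKET; i++)
		pthread_mutex_init(&buckets[i].b_lock, NULL);
}

/* Allocate from the caller's preferred bucket; steal from others if empty. */
static page_t *
palloc(unsigned cpu)
{
	for (unsigned i = 0; i < NBUCKET; i++) {
		bucket_t *b = &buckets[(cpu + i) % NBUCKET];
		page_t *pg;

		pthread_mutex_lock(&b->b_lock);
		if ((pg = b->b_free) != NULL)
			b->b_free = pg->pg_next;
		pthread_mutex_unlock(&b->b_lock);
		if (pg != NULL)
			return pg;
	}
	return NULL;   /* no free pages anywhere */
}

/* Free a page back to the caller's own bucket. */
static void
pfree(page_t *pg, unsigned cpu)
{
	bucket_t *b = &buckets[cpu % NBUCKET];

	pthread_mutex_lock(&b->b_lock);
	pg->pg_next = b->b_free;
	b->b_free = pg;
	pthread_mutex_unlock(&b->b_lock);
}

With threads spread across buckets, contention on any single lock drops
roughly by a factor of NBUCKET, which matches the collapse from one hot
uvm_fpageqlock to several lightly contended uvm_freelist_locks entries
in the numbers above.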

Hi Andrew,

I read my way through the patch... very impressive.
I currently have a patched kernel running and see almost no lock
contention on uvm_freelist_locks; the system is a Ryzen 2700.
It seems to have enough cores to make the problem with the old
allocator appear (though not as prominently as on larger machines,
especially NUMA machines, I guess).
The new allocator gets configured with one bucket and 16 colors.
Looks very good to me.
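(For reference, the 16 colors refer to page coloring: free pages are
binned by cache color so that a mapping's pages stay spread evenly
across a physically indexed cache.  A minimal sketch of the color
computation, assuming 4 KiB pages; the names here are illustrative,
not taken from the patch:

#include <stdint.h>

#define PAGE_SHIFT	12	/* assuming 4 KiB pages */
#define NCOLOR		16	/* the 16 colors reported above */

/* Cache color of a page: the low bits of its page frame number. */
static inline unsigned
page_color(uint64_t addr)
{
	return (unsigned)(addr >> PAGE_SHIFT) & (NCOLOR - 1);
}

An allocator along these lines would keep one free list per color in
each bucket and prefer the list matching the faulting address's color,
falling back to a neighboring color when that list is empty.)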

Thanks,
Lars
