Re: sys_ptrace_lwpstatus.c (Was: CVS commit: src/sys)

2019-12-27 Thread Kamil Rytarowski
On 26.12.2019 15:11, Valery Ushakov wrote:
> On Thu, Dec 26, 2019 at 08:52:39 +0000, Kamil Rytarowski wrote:
> 
>> Module Name: src
>> Committed By: kamil
>> Date: Thu Dec 26 08:52:39 UTC 2019
>>
>> Modified Files:
>>  src/sys/kern: files.kern sys_ptrace_common.c
>>  src/sys/sys: ptrace.h
>> Added Files:
>>  src/sys/kern: sys_ptrace_lwpstatus.c
>>
>> Log Message:
>> Put ptrace_read_lwpstatus() and process_read_lwpstatus() into a new file
>>
>> Fixes "no PTRACE" kernel build, in particular zaurus kernel=INSTALL_C700.
> 
> This is counterintuitive when a sys_ptrace* file with ptrace_*
> functions does not depend on options ptrace.  That seems to be a
> strong indication the functions and the file are misnamed.
> 
> file    kern/sys_ptrace.c            ptrace
> file    kern/sys_ptrace_common.c     ptrace
> file    kern/sys_ptrace_lwpstatus.c  kern
> 
> -uwe
> 

I will refactor this.
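For context, a minimal sketch of the kind of helper being moved: a
function that snapshots per-LWP state into struct ptrace_lwpstatus from
<sys/ptrace.h>.  This is an illustrative assumption about the shape of
the code, not the committed NetBSD sources; the helper name is made up
and the struct lwp field names are the conventional ones.

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/lwp.h>
    #include <sys/ptrace.h>

    /*
     * Sketch only: copy the signal and naming state of one LWP.
     * Locking is omitted; real code must hold the appropriate locks.
     */
    static void
    lwpstatus_sketch(struct lwp *l, struct ptrace_lwpstatus *pls)
    {
            pls->pl_lwpid = l->l_lid;               /* LWP identifier */
            pls->pl_sigmask = l->l_sigmask;         /* blocked signals */
            pls->pl_sigpend = l->l_sigpend.sp_set;  /* pending signals */
            if (l->l_name != NULL)
                    strlcpy(pls->pl_name, l->l_name, sizeof(pls->pl_name));
            else
                    pls->pl_name[0] = '\0';
            pls->pl_private = l->l_private;         /* TLS pointer */
    }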





Re: A new page allocator for UVM

2019-12-27 Thread Lars Reichardt
On Sun, 22 Dec 2019 21:14:44 +0000
Andrew Doran wrote:

> Hi,
> 
> Anyone interested in taking a look?  This solves the problem we have
> with uvm_fpageqlock.  Here's the code, and the blurb is below:
> 
>   http://www.netbsd.org/~ad/2019/allocator.diff
> 
...
> 
> Results:
> 
>   This from a "make -j96" kernel build, with a !DIAGNOSTIC, GENERIC
>   kernel on the system mentioned above.  System time is the most
>   interesting here.  With NUMA disabled in the BIOS:
> 
>      74.55 real  1635.13 user   725.04 sys   before
>      72.66 real  1653.86 user   593.19 sys   after
> 
>   With NUMA enabled in the BIOS & the allocator:
> 
>      76.81 real  1690.27 user   797.56 sys   before
>      71.10 real  1632.42 user   603.41 sys   after
> 
>   Lock contention before (no NUMA):
> 
>   Total%  Count   Time/ms  Lock  
>   -- --- - --
>99.80 36756212 182656.88 uvm_fpageqlock   
> 
>   Lock contention after (no NUMA):
> 
>   Total%  Count   Time/ms  Lock
>   -- --- - --
>    20.21    196928    132.50  uvm_freelist_locks+40
>    18.72    180522    122.74  uvm_freelist_locks
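To make the tables concrete: before the change every free-page
operation funnels through the single uvm_fpageqlock, while afterwards
the work is spread over an array of per-bucket locks, so CPUs working
in different buckets no longer contend (the "uvm_freelist_locks+40"
entry is consistent with a different element of one such array).  A
hedged sketch of the pattern follows; the names, bucket count and
CPU-to-bucket mapping are illustrative assumptions, not the actual
diff.

    #include <sys/param.h>
    #include <sys/mutex.h>

    #define SKETCH_NBUCKETS 8   /* e.g. one per package or NUMA node */

    /* Align each lock to a cache line to avoid false sharing. */
    static struct sketch_bucket {
            kmutex_t        sb_lock;
    } __aligned(COHERENCY_UNIT) sketch_buckets[SKETCH_NBUCKETS];

    static void
    sketch_locks_init(void)
    {
            for (u_int i = 0; i < SKETCH_NBUCKETS; i++)
                    mutex_init(&sketch_buckets[i].sb_lock, MUTEX_DEFAULT,
                        IPL_VM);
    }

    /* Map a CPU index onto a bucket; real code keys off topology. */
    static inline kmutex_t *
    sketch_bucket_lock(u_int cpuidx)
    {
            return &sketch_buckets[cpuidx % SKETCH_NBUCKETS].sb_lock;
    }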

Hi Andrew,

I read my way through the patch... very impressive.
Currently I have a patched kernel running and see almost no lock
contention on uvm_freelist_locks; the system is a Ryzen 2700.
It seems to have just enough cores to make the problem with the old
allocator appear (though not as prominently as on larger machines,
especially NUMA machines, I guess).
The new allocator gets configured with one bucket and 16 colors.
Looks very good to me.
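
On the "16 colors" point, for anyone unfamiliar: page coloring keys
free pages on the low bits of the page frame number, so that
successive virtual pages land in distinct cache sets.  A hedged sketch
of the computation (an illustrative helper, not a UVM interface; the
count would correspond to something like uvmexp.ncolors == 16):

    #include <sys/param.h>

    /* ncolors must be a power of two, e.g. 16 as reported above. */
    static inline u_int
    sketch_page_color(paddr_t pa, u_int ncolors)
    {
            /* The color is the low bits of the page frame number. */
            return (u_int)(pa >> PAGE_SHIFT) & (ncolors - 1);
    }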

Thanks,
Lars