Hi,

On 27 March 2026 02:28:59 GMT+02:00, Samuel Thibault <[email protected]> 
wrote:
>Michael Kelly, on Thu 26 March 2026 17:53:27 +0000, wrote:
>> On 25/03/2026 22:57, Samuel Thibault wrote:
>> > Samuel Thibault, on Sat 14 March 2026 23:13:55 +0100, wrote:
>> > > That's possible. For some memory-hungry packages I see several GB of
>> > > swap getting consumed, and little useful CPU time is spent, until the
>> > > build manages to finish, or times out.
>> > I had a look at the vmstat. The highest memory segment had all its
>> > inactive pages swapped out, but all the other segments had almost only
>> > inactive pages still in. I fixed the paging out in gnumach, to select
>> > inactive pages from all segments before looking at active pages. I
>> > believe that can help a lot when the last segment is small (and thus all
>> > the swapping happens there with almost only active pages...)
>> 
>> I've looked at the changes you've made, and it seems to me that they should
>> improve performance. If the improvement is significant, the vm_map locking
>> issue might trigger less often, giving us more time to fix it. I'd be very
>> interested in an update once you've assessed the new code in action.
>
>It's hard to tell. I have tried to run the mypy build, it's still quite
>slow, but it seems faster. Possibly its working set is simply really
>large.
>
>I guess it'd be simpler to just test with synthetic benchmarks which
>exhibit simple memory access patterns.
>
>But, yes, improving performance by lowering the swapping will make the
>swap hangs trigger less often :)
>
>(which conversely is bad news for reproducing it to be able to fix it)

The way it can be reproduced rapidly is by using SMP on rocksdb 
<https://github.com/gyfleury/rocksdb.git>, for example, or by compiling the 
Hurd. make -j5 will assert in protid-make.c about a failed memory allocation 
on a machine with 6G of RAM. I tested it with the default pager enabled.
>
>Samuel
>
