On Wed, May 23, 2012 at 07:37:19PM -0400, Christos Zoulas wrote:
> Hello,
>
> This is a new resource limit to prevent users from exhausting kernel
> resources that lwps use.
>
> - The limit is per uid
> - The default is 1024 per user unless the architecture overrides it
> - The kernel is never pr
On Fri, Jun 01, 2012 at 12:03:13PM +0200, Edgar Fuß wrote:
> > How about using fss for it instead.
> Well, the point is not primarily that I don't want the atimes to reflect
> the backup access. I primarily want to save the time spent on the update.
> A find is approximately twice as fast with n
chris...@zoulas.com (Christos Zoulas) wrote:
> Hello,
>
> This is a new resource limit to prevent users from exhausting kernel
> resources that lwps use.
>
> ...
>
> comments?
>
A few comments on the patch:
> Index: kern/init_main.c
>
On Sun, Jun 03, 2012 at 02:58:15PM +0200, Martin Husemann wrote:
> It seems there is confusion whether a vmspace vm_map.size element is
> measured in bytes or pages. The uvm code seems to treat it as bytes,
> so I guess we should apply something like this patch?
All the uses I found (building amd64
So, some further investigation shows that on sparc (but not sparc64)
the vmspace for userland processes is pretty huge - the t_mincore test
already has a vmspace vm->vm_map.size value of 9118 pages before it even
does the first mmap. Given the low resource limits cited upthread (~3500
locked pages)
It seems there is confusion whether a vmspace vm_map.size element is
measured in bytes or pages. The uvm code seems to treat it as bytes,
so I guess we should apply something like this patch?
Note the correct usage a few lines below the patched one...
Martin
Index: kern_proc.c
===