There are different levels of responsibility.

0. setrlimit should be allowed to set any limit the super user wants (see the sketch below this list).
1. physical addressing, up to the 64-bit boundary
2. the practical limit of the hardware
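
For point 0, here is a rough sketch (mine, untested) of what I mean: a
privileged process asks for unlimited data and stack via setrlimit(2),
and the kernel decides whether to honour that instead of refusing up
front:

#include <sys/resource.h>
#include <stdio.h>

int
main(void)
{
    struct rlimit rl;

    /* Ask for no ceiling at all; raising the hard limit needs root. */
    rl.rlim_cur = RLIM_INFINITY;
    rl.rlim_max = RLIM_INFINITY;

    if (setrlimit(RLIMIT_DATA, &rl) == -1)
        perror("setrlimit(RLIMIT_DATA)");
    if (setrlimit(RLIMIT_STACK, &rl) == -1)
        perror("setrlimit(RLIMIT_STACK)");

    /* Whether the kernel silently clamps these to MAXDSIZ/MAXSSIZ is
     * exactly the design question above. */
    return 0;
}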

Anyway, using arbitrary limits like OpenBSD does for the stack is really
bad design. Using an arbitrary limit for data is also a bad design
decision.

I understand the facts, but ....

Thanks.

On Fri, Aug 10, 2012 at 11:18 PM, Geoff Steckel <g...@oat.com> wrote:
> On 08/10/2012 06:33 PM, Vijay Sankar wrote:
>>
>> Quoting Friedrich Locke <friedrich.lo...@gmail.com>:
>>
>>> Hi,
>>>
>>> I have set my system resources for a given user via login.conf, but
>>> after the user logs in, ulimit -a returns different values.
>>>
>>> Here is my login.conf entry:
>>>
>>> general:\
>>>         :coredumpsize=infinity:\
>>>         :cputime=infinity:\
>>>         :datasize=infinity:\
>>>         :filesize=infinity:\
>>>         :stacksize=infinity:\
>>>         :maxproc=infinity:\
>>>         :memorylocked=infinity:\
>>>         :memoryuse=infinity:\
>>>         :openfiles=infinity:\
>>>         :vmemoryuse=infinity:\
>>>         :auth=krb5-or-pwd:\
>>>         :ignorenologin:\
>>>         :localcipher=blowfish,6:\
>>>         :ypcipher=old:\
>>>         :priority=-5:\
>>>         :ftp-chroot:\
>>>         :tc=default:
>>>
>>> But when I log in, what I get from ulimit is:
>>>
>>> sioux@gustav$ ulimit -a
>>> time(cpu-seconds)    unlimited
>>> file(blocks)         unlimited
>>> coredump(blocks)     unlimited
>>> data(kbytes)         8388608
>>> stack(kbytes)        32768
>>> lockedmem(kbytes)    unlimited
>>> memory(kbytes)       unlimited
>>> nofiles(descriptors) 7030
>>> processes            1310
>>>
>>>
>>> My question is: why are the data and stack limits not infinity?
>>>
>>> Thanks in advance.
>>>
>>>
>>
>> I think this could be because the developers do not want datasize or stack
>> to be unlimited :)
>>
>> I do recall reading somewhere in the lists that the maximum amount of
>> virtual memory that can be allocated by a process using malloc is 8GB and is
>> set by MAXDSIZ (in vmparam.h). Hopefully I am not giving you a totally silly
>> answer and someone more knowledgeable will answer your question correctly.
>>
>>
>> Vijay Sankar, M.Eng., P.Eng.
>> ForeTell Technologies Limited
>> vsan...@foretell.ca
>>
>> ---------------------------------------------
>> This message was sent using ForeTell-POST 4.9
>
> There's a physical limit to memory available on a machine: RAM + swap space.
> It is impossible to grant a program more memory than that.
> Using swap space incurs a very large performance decrease.
> It is advantageous to all users of a system for programs to request
> as little memory as is reasonable.
>
> For various reasons, an arbitrary upper limit is set even if more
> RAM and swap space is available. The OS architects set that limit
> based on their best judgement of the tradeoff between program size
> and overall system utility. Avoiding paging of code and stack avoids
> truly painful bad performance.
>
> Also, the kernel is mapped into the address space of all user programs.
> On 32-bit machines, this limits the maximum possible user program address
> space to about 2G. On 64-bit machines the constraint depends on how many
> bits of address space are actually implemented and other considerations.
>
> Limiting stack space to 32M is fairly reasonable - recursion to huge depth
> may be theoretically wonderful but rarely useful in real life, and
> allocating multi-megabyte stack variables is likely a programming error.
> One might argue for a somewhat larger maximum for truly twisty programs,
> but I haven't seen many programs that need over 8M. When main memory
> becomes 10X larger and CPU speeds 10X higher changing the limit might
> be considered.
>
> The above is from observing various OSes' externals and internals.
> The kernel group obviously knows far more about how the limits are chosen.
>
> Geoff Steckel
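
To make the MAXDSIZ point above concrete, here is a quick, rough sketch
(mine, not from the thread) that prints the data and stack limits the
kernel actually granted, whatever login.conf asked for, using
getrlimit(2):

#include <sys/resource.h>
#include <stdio.h>

static void
show(const char *name, int resource)
{
    struct rlimit rl;

    if (getrlimit(resource, &rl) == -1) {
        perror(name);
        return;
    }
    if (rl.rlim_cur == RLIM_INFINITY)
        printf("%s: soft unlimited", name);
    else
        printf("%s: soft %llu KB", name,
            (unsigned long long)rl.rlim_cur / 1024);
    if (rl.rlim_max == RLIM_INFINITY)
        printf(", hard unlimited\n");
    else
        printf(", hard %llu KB\n",
            (unsigned long long)rl.rlim_max / 1024);
}

int
main(void)
{
    show("data", RLIMIT_DATA);
    show("stack", RLIMIT_STACK);
    return 0;
}

On the system above this should report data capped at 8 GB and stack at
32 MB, even though login.conf says "infinity".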
