It's an interesting approach, but I don't know whether it makes sense. I don't
think that sysconf(_SC_ARG_MAX) actually implies any guarantee that args that
large will always work. Even separate resource limits wouldn't guarantee that.
The best they could do would be to guarantee either that RLIMIT_ARG_MAX (poorly
named IMO; it should just be RLIMIT_ARG) was less than RLIMIT_STACK, or that
the two were independent and unrelated (two truly separate areas, which AFAIK
isn't done anywhere now). After all, the system could run out of VM for reasons
having nothing to do with the process that was attempting an exec-family call.
I think the args are "supposed" to be read-only, but in practice some programs
(like sendmail) rewrite them so that e.g. BSD-style ps (with appropriate
privileges) can see what processing phase a particular instance of the program
is in. If one wanted to break such programs, having two separate areas would
let the args be enforced as read-only via memory protection.
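A rough sketch of the technique (set_proc_title is a hypothetical name of mine, not a standard API); it only works because the args live in ordinary writable process memory, and with a separate read-only arg area it would fault:

```c
#include <string.h>

/* Overwrite the argv string area in place so a BSD-style ps shows a
 * status string instead of the original command line -- roughly what
 * sendmail does. argv_area would typically be argv[0], and avail the
 * total span from argv[0] through the end of the last arg string. */
void set_proc_title(char *argv_area, size_t avail, const char *title)
{
    size_t n = strlen(title);
    if (n >= avail)
        n = avail - 1;      /* truncate to the space argv occupies */
    memcpy(argv_area, title, n);
    /* NUL-pad the rest so stale argument bytes don't leak into ps. */
    memset(argv_area + n, '\0', avail - n);
}
```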
In practice, I don't think worrying about the arg limit is of much use except
to programs that generate execs with an args+environ size significantly greater
than their own, i.e. shell interpreters, perl, xargs, find (with the "+"
terminator) and so on. Especially for xargs (and find ... -exec ... {} +),
using a high value -- close to the system max, while allowing room for the
environment, for the ambiguity over whether the pointers and such, as well as
the actual strings, count against the limit, and so on -- lets it reduce the
number of exec calls it makes and so may increase efficiency. Otherwise, not
cutting it quite so close, as long as the limit exceeded some minimum, would
probably be more sensible IMO. The ancient value for NCARGS (the predecessor
of the current arg limits) was 5120; the Solaris values are roughly 1MB for
32-bit processes or 2MB for 64-bit processes. Even the ancient value was longer
than any command line a reasonable person would type (or could, given a cooked
tty mode), and even the modern value could be exceeded by a wildcard if the
expansion mechanism permitted that; so interactive use isn't an issue IMO.
The only issue is programs like xargs that are perhaps more efficient if they
generate fewer commands with more args each.
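For illustration, a small shell sketch of the batching in question (the temp directory and file names are just for the demo):

```shell
# Demo directory with a few files:
demo=$(mktemp -d)
touch "$demo/a.log" "$demo/b.log" "$demo/keep.txt"

# find's "+" terminator batches many paths into each rm invocation,
# so it makes far fewer exec calls than "-exec rm -- {} \;" would:
find "$demo" -name '*.log' -exec rm -- {} +

# Same idea with xargs; the POSIX -s option caps the bytes per
# generated command line, leaving headroom under ARG_MAX for the
# environment, pointer arrays, and so on:
find "$demo" -name '*.txt' -print | xargs -s 4096 rm --
```

The closer -s comes to the real per-exec limit, the fewer exec calls xargs makes, which is exactly the efficiency trade-off described above.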
The quoted message mentioned possible different approaches to the semantics
(but not the syntax) of even their own proposed new interface. From that
message alone, this looks like merely an early concept, not yet part of even
the Linux mainstream (if that isn't an oxymoron -- unless one defines
mainstream purely on the basis of numbers rather than standards as well).
And last but not least, unless you believe system administrators should
discipline stupid users who attempt to use unreasonable wildcards, I just
don't see that treating this as a _resource_ control issue is valid. Perhaps
a new proposed standard call -- like sysconf, but returning values that were
possibly lower and could change within the lifespan of a process -- would be
a clearer-cut answer to the problem (if it really _is_ a problem) that the
quoted message described.
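As a sketch of what such a call might compute -- purely hypothetical, current_arg_space is my name and this is not a proposed or existing interface -- one could imagine subtracting the process's current environment footprint from the static maximum:

```c
#include <unistd.h>
#include <string.h>

extern char **environ;

/* Hypothetical per-process estimate of the arg space available right
 * now. Unlike sysconf(_SC_ARG_MAX), the result can be lower than the
 * system-wide constant and can change over the process lifetime as
 * the environment grows. Counts each environ string (with its NUL)
 * plus one pointer per entry plus the terminating NULL pointer. */
long current_arg_space(void)
{
    long max = sysconf(_SC_ARG_MAX);
    if (max == -1)
        return -1;              /* indeterminate */

    long used = (long)sizeof(char *);   /* terminating NULL pointer */
    for (char **e = environ; *e != NULL; e++)
        used += (long)(strlen(*e) + 1 + sizeof(char *));

    return (used < max) ? max - used : 0;
}
```

Whether the pointers should count at all is exactly the ambiguity mentioned earlier; the sketch counts them conservatively.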
So it might be interesting to look at how difficult it would be to do something
similar in
Solaris (in case it becomes fully adopted by Linux, and either Linux
compatibility in this is
deemed very important, or it is also proposed as a cross-platform standard).
But I don't think
I'd want to see it actually incorporated in Solaris until one of those
rationales was presented.
This message posted from opensolaris.org
_______________________________________________
opensolaris-code mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/opensolaris-code