Kevin Ellsperman writes:
> I need to be able to set the ulimit for nofile - the maximum number of open
> files.  It defaults to 1024 and WebSphere V5 needs this value set
> significantly higher.  I can change this value from the command line as
> root, but I cannot set this value in /etc/security/limits.conf.  Anything I
> specify in limits.conf just gets ignored.  This is especially crucial for
> us because we do not run WebSphere as root, but as a non-root user.  Has
> anybody been able to change this value on a permanent basis for a user?

The configuration file /etc/security/limits.conf is only used if the
method you use to start the user session uses the PAM module
pam_limits. In SLES8, for example, the default configurations are
such that login and sshd use pam_limits but su doesn't. Look in
/etc/pam.d/sshd and /etc/pam.d/login and you'll see that the last
line of each is
    session required pam_limits.so
which is what sets the resource limits based on
/etc/security/limits.conf. If your WebSphere startup script uses
su to get from root to the non-root user (or if it does its own
setgroups/setgid/setuid stuff) then nothing will be looking at
limits.conf.
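
For reference, the limits.conf format is "domain type item value";
a pair of entries raising the open files limit for a made-up
username websphere would look like:
    websphere    soft    nofile    8192
    websphere    hard    nofile    8192
These only take effect for sessions started through a service whose
PAM stack includes pam_limits, as above.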

Another thing to note is that pam_limits will fail to grant the
session at all if the attempt to set the chosen limits fails.
In particular (as I've just found out by testing), if you put a line
like "foo hard nofile 11000" in limits.conf then you will no longer
be able to log in as user foo over ssh, because the ssh daemon
itself has inherited the default hard limit of 1024 from the shell
that started it and so can't raise its child's limit beyond its own.
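
You can see that inheritance rule from any non-root shell: a process
can never raise a limit above its own hard limit, so something like
this fails (the exact error text may vary by bash version):
    $ ulimit -H -n
    1024
    $ bash -c 'ulimit -n 2048'
    bash: ulimit: open files: cannot modify limit: Operation not permitted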
Similarly, if you add the line
    session required pam_limits.so
to /etc/pam.d/su then you will not be able to su to a username which
has a limit higher than 1024 for nofile configured in limits.conf.
The answer for sshd is to start the daemon off with a higher limit
of its own, e.g. add lines to /etc/sysconfig/ssh (which in SLES8
anyway gets sourced at sshd startup time):
    ulimit -H -n 20000
    ulimit -S -n 20000
to set the process' hard and soft open files limits to 20000 before
the sshd itself gets execed. For su, you're going to have to set the
limits before the su which means it's probably not worth using
limits.conf at all: if you have to raise the limits before su'ing
then you might as well set them to the right values to start with
and not bother using pam_limits and limits.conf. In other words, just
edit the startup script for WebSphere (or an /etc/sysconfig file if
it's nicely behaved enough to source one) to set the limits higher
before it starts up, using ulimit commands as above for bash. Note
that the exact syntax is shell-dependent since such commands are
necessarily shell builtins (it's no good calling out to a separate
program because the rlimits are inherited only by children and so
your own shell wouldn't have its own limits changed). For Bourne
flavoured shells, ulimit is what you want; for csh flavoured shells
you'd use "limit" with a different syntax (not that you'd ever be
writing scripts in csh of course, but just fyi for interactive use).
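
As a concrete sketch of that last suggestion, a bash wrapper for the
WebSphere startup might look like this (the username and path are
made up for illustration; adjust them to your installation):
    #!/bin/bash
    # Raise the hard limit first, then the soft limit; only a
    # root-owned process may raise its hard limit.
    ulimit -H -n 20000
    ulimit -S -n 20000
    # su preserves the rlimits we just set, provided su's PAM stack
    # doesn't include pam_limits (which would clamp them again).
    exec su - wasuser -c "/opt/WebSphere/AppServer/bin/startServer.sh server1"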

The sysctl fs.file-max (equivalently /proc/sys/fs/file-max) is a
system-wide limit which you may want to raise too if you think it's
in danger of being reached. For SLES8 (at least), it appears to be
9830 by default, which is rather more than the per-user value of
1024 that you're hitting first, but it may still be worth increasing
if there are going to be a number of processes all wanting more than
a couple of thousand or so open files.
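
If you do decide to raise it, either of the following works as root
(32768 is just an example figure):
    # one-off change, takes effect immediately:
    sysctl -w fs.file-max=32768
    # equivalently:
    echo 32768 > /proc/sys/fs/file-max
To make it permanent, add "fs.file-max = 32768" to /etc/sysctl.conf
if your boot scripts read that file; current usage is visible in
/proc/sys/fs/file-nr.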

--Malcolm

--
Malcolm Beattie <[EMAIL PROTECTED]>
Linux Technical Consultant
IBM EMEA Enterprise Server Group...
...from home, speaking only for myself
