Also, disk space and RAM are plentiful...

Is there any way to tell *which* resource is unavailable?
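
One way to narrow it down might be to trace the failing call and compare
user2's actual usage against the limits its processes really inherited.
A rough sketch, assuming strace is installed, and keeping in mind that on
CentOS 6 the file /etc/security/limits.d/90-nproc.conf can override the
nproc values set in limits.conf:

# see which syscall returns EAGAIN (it is usually setuid)
sudo strace -f -e trace=setuid,setresuid,setgid su - user2

# threads count toward nproc, which is why ps -eLF was used below
ps -eLo user= | sort | uniq -c | sort -rn | head

# compare against the limit an existing user2 process actually inherited
sudo grep -i 'max processes' /proc/$(pgrep -n -u user2)/limits

# check for a distro-shipped override of limits.conf, if one exists
cat /etc/security/limits.d/*-nproc.conf
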
Brian Chabot


On Mon, Mar 10, 2014 at 10:19 AM, Brian Chabot <br...@brianchabot.org> wrote:
> [user1@cent6.4box ~]$ ipcs -m
>
> ------ Shared Memory Segments --------
> key        shmid      owner      perms      bytes      nattch     status
> 0x6c000803 98304      zabbix     600        995952     5
>
> [user1@cent6.4box ~]$ ipcs -s
>
> ------ Semaphore Arrays --------
> key        semid      owner      perms      nsems
> 0x00000000 0          root       600        1
> 0x00000000 65537      root       600        1
> 0x00000000 131074     root       600        1
> 0x7a000803 262147     zabbix     600        10
>
> [user1@cent6.4box ~]$
>
>
> Nothing is jumping out at me here...
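
The ipcs counts above could also be compared against the system-wide SysV
IPC limits; a quick sketch of that check, assuming stock sysctl names:

# shared memory, semaphore, and message queue limits in one listing
ipcs -l

# or query the kernel directly
sysctl kernel.shmmni kernel.shmmax kernel.shmall kernel.sem
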
>
>
> Brian
>
> Brian Chabot
>
>
> On Mon, Mar 10, 2014 at 10:15 AM, Bruce Dawson <j...@codemeta.com> wrote:
>> Check shared memory and semaphores. It's probable that some other
>> application is swallowing the resource sudo needs. This is a common
>> technique used by DoS attacks and botnets.
>>
>> --Bruce
>>
>> On Mon, 2014-03-10 at 10:05 -0400, Brian Chabot wrote:
>>> I'm trying to su to a user on a CentOS 6.4 x86_64 box and get the
>>> error in the subject:
>>>
>>> [user1@cent6.4box ~]$ sudo su - user2
>>> su: cannot set user id: Resource temporarily unavailable
>>> [user1@cent6.4box ~]$
>>>
>>> The limits.conf file has the following entries:
>>> *    soft    nofile    100000
>>> *    hard    nofile    100000
>>> *    soft    nproc     8192
>>> *    hard    nproc     32767
>>>
>>> The current usage for user2 is:
>>> [user1@cent6.4box ~]$ ps -eLF | grep user2 | wc -l
>>> 1108
>>> [user1@cent6.4box ~]$ lsof | grep user2  | wc -l
>>> 1558
>>> [user1@cent6.4box ~]$
>>>
>>> While these are the majority of the processes and files in use on the
>>> system, they are nowhere near the limits.
>>>
>>> I even increased the limits 10-fold and that has not worked.
>>>
>>> I'm kind of lost here.  Usually the error indicates files or processes
>>> over the limit but here... not so much.
>>>
>>> Any ideas?
>>>
>>>
>>>
>>> Brian Chabot
_______________________________________________
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
