On 2 February 2013 20:49, Amar Akshat <amar.aks...@gmail.com> wrote:
> Hi,
>
> The other day I was writing a small proactive system-monitoring script in
> Ruby, and I forgot to close the IO pipe for the pgrep command every time I
> checked my system status.
> So after a day there were more than 32,000 zombie pgrep processes, and I
> could only run bash commands and nothing else.
>
> I could only find out the number of processes thanks to bash completion in
> the /proc/ directory.
> So I had to reboot my system.
>
> My concern is: in a case like this, is there a way to find and kill the
> processes using just bash utilities? I tried Googling it and found a
> couple of answers, but I am sure you have run into such situations before.

Hi Akshat, in such situations I generally recommend using pkill (perhaps
with -9), and _patience_, to kill all the instances of that particular
process. It may not take effect instantaneously, but it definitely does the
job. You may need to keep running it until you have also killed the source
that is forking a new process every second. 'ps aux | grep' also comes in
handy; it too takes some time, but it gives you a sense of how many
processes are left to kill.
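
For instance, something along these lines (only a rough sketch; it assumes
the stray processes are all literally named pgrep, as in your case, and
monitor.rb is just a placeholder for whatever script keeps spawning them):

  # see how many offenders are left; the [p] trick stops grep
  # from matching its own command line
  ps aux | grep -c '[p]grep'

  # send SIGKILL to every process whose name is exactly "pgrep"
  pkill -9 -x pgrep

  # kill the parent that keeps forking them, matching on its full
  # command line; once it is gone, init reaps the leftover zombies
  pkill -9 -f monitor.rb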

I have faced similar situations and managed without a reboot. The thing to
keep in mind is that if other users are running the same process, you
should make sure you are not killing theirs, and watch out for the
critical ones (for example ssh, networking and other such services).
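
Something like this (again just a sketch, assuming the stray processes are
named pgrep) lets you check who owns what before pulling the trigger, and
then limits the kill to your own user:

  # list the matching processes together with their owners
  ps -o user,pid,stat,cmd -C pgrep

  # only kill the instances belonging to your own user
  pkill -9 -x -u "$USER" pgrep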


--
Chirag Anand
http://atvariance.in

