Hello,

sorry for the late reply; moving data centers tends to keep one busy.

I looked at the PR, and while it works and is certainly an improvement, it
wouldn't help much in my case.
The biggest issue is fuser and its exponential slowdown, and the RA still
uses it.

What I did was to recklessly force my crap code into a script:
---
#!/bin/bash
# Print the PIDs of all processes holding open directories under the
# path given as $1, without going through fuser.
lsof -n | grep "$1" | grep DIR | awk '{print $2}'
---
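
For example, assuming the script is saved as whatkills.sh (a name picked
here purely for illustration) and the filesystem in question is mounted on
/srv/data, the stray PIDs can be collected and killed in one go:
---
# hypothetical invocation; /srv/data and signal -9 are placeholders
./whatkills.sh /srv/data | xargs -r kill -9
---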

And I call that instead of fuser, as well as removing all kill logging by
default (determining the number of PIDs isn't free either). Roughly, the
change looks like the sketch below.
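
This is not the actual Filesystem RA code, just a simplified stand-in, with
the helper assumed to live at /usr/local/bin/whatkills.sh:
---
# simplified sketch only, not the real RA functions
get_pids() {
        # one lsof pass via the helper script instead of fuser
        /usr/local/bin/whatkills.sh "$1"
}

signal_processes() {
        dir=$1
        sig=$2
        pids=$(get_pids "$dir")
        [ -z "$pids" ] && return 0
        # one kill invocation for all PIDs, no per-PID log lines
        kill -s "$sig" $pids
}
---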

With that in place, killing 10k stray processes takes less than 10
seconds.

Regards,

Christian

On Tue, 24 Oct 2017 09:07:50 +0200 Dejan Muhamedagic wrote:

> On Tue, Oct 24, 2017 at 08:59:17AM +0200, Dejan Muhamedagic wrote:
> > [...]
> > I just made a pull request:
> > 
> > https://github.com/ClusterLabs/resource-agents/pull/1042  
> 
> NB: It is completely untested!
> 
> > It would be great if you could test it!
> > 
> > Cheers,
> > 
> > Dejan
> >   
> > > Regards,
> > > 
> > > Christian
> > >   
> > > > > Maybe we can even come up with a way
> > > > > to both "pretty print" and kill fast?    
> > > > 
> > > > My best guess right now is no ;-) But we could log nicely for the
> > > > usual case of a small number of stray processes ... maybe
> > > > something like this:
> > > > 
> > > >         i=""
> > > >         get_pids | tr '\n' ' ' | fold -s |
> > > >         while read procs; do
> > > >                 if [ -z "$i" ]; then
> > > >                         killnlog $procs
> > > >                         i="nolog"
> > > >                 else
> > > >                         justkill $procs
> > > >                 fi
> > > >         done
> > > > 
> > > > Cheers,
> > > > 
> > > > Dejan
> > > >   
> > > > > -- 
> > > > > : Lars Ellenberg
> > > > > : LINBIT | Keeping the Digital World Running
> > > > > : DRBD -- Heartbeat -- Corosync -- Pacemaker
> > > > > : R&D, Integration, Ops, Consulting, Support
> > > > > 
> > > > > DRBD® and LINBIT® are registered trademarks of LINBIT
> > > > > 
> > > > 
> > > >   
> > > 
> > > 
> > > -- 
> > > Christian Balzer        Network/Systems Engineer                
> > > ch...@gol.com     Rakuten Communications  
> > 
> 
> 


-- 
Christian Balzer        Network/Systems Engineer                
ch...@gol.com           Rakuten Communications

_______________________________________________
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
