----- Original Message -----
> From: "Dayton Jones" <jones.day...@gmail.com>
> To: "puppet-users" <puppet-users@googlegroups.com>
> Sent: Thursday, 3 November, 2016 18:13:01
> Subject: [Puppet Users] What is the limit of nodes mcollective and activemq 
> can maintain?

> I've seen posts about the "800 node wall" with mcollective/activemq, but
> nothing recent
> (http://ramblings.narrabilis.com/books/masteringpuppet/mcollective).

There isn't really a good rule of thumb.  The O'Reilly book has some good
guidance, but it's a bit hit and miss at your scale.  Given how Java
garbage collection behaves, I doubt ActiveMQ will be solid at that size.
I am working on using NATS.io instead of ActiveMQ, and at least one user
here has reported success, but of course I have no idea how it will behave
at your scale - I do suspect better than what you have with ActiveMQ,
though.
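
For reference, the middleware choice is just the connector plugin plus its
settings in server.cfg/client.cfg.  As a rough sketch (host names, port and
credentials below are illustrative placeholders, not from your setup), the
documented ActiveMQ connector pool settings look like:

  # server.cfg - ActiveMQ connector with a small failover pool
  connector = activemq
  plugin.activemq.pool.size = 2
  plugin.activemq.pool.1.host = broker1.example.com
  plugin.activemq.pool.1.port = 61613
  plugin.activemq.pool.1.user = mcollective
  plugin.activemq.pool.1.password = secret
  plugin.activemq.pool.2.host = broker2.example.com
  plugin.activemq.pool.2.port = 61613
  plugin.activemq.pool.2.user = mcollective
  plugin.activemq.pool.2.password = secret

Swapping the middleware means changing the connector setting and the
matching plugin.* options; the NATS connector has its own options, so don't
take the keys above as applying to it.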

Philosophically I do not think such huge collectives make a whole lot of
sense.  It's hard for a human to really consider the impact of actions at
that scale, and it's perhaps worth running several separate, free-standing
mcollectives rather than one giant 30k-node collective.  Further, while
mcollective makes an effort to have data and display models that work even
at large scale, I doubt you can really comprehend the output at that scale
when there is variance between nodes.
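
Subcollectives are mostly a per-node server.cfg change, so splitting is not
a big lift.  A minimal sketch, reusing one of the collective names from
your output below:

  # server.cfg on a node that joins col1 as well as the main collective
  main_collective = mcollective
  collectives = mcollective,col1_mcollective

Clients can then direct a request at a single sub-collective with the -T
option, for example:

  mco ping -T col1_mcollective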

The above of course depends on your use case and what you're doing, but I
am very wary of such giant collectives.  Perhaps you can elaborate?

This would be better asked on the mcollective-users list.


> 
> Is there a practical limit on the number of nodes a collective can contain?
> I'm working in an environment that currently has about 27,000 nodes - they
> are broken up into separate collectives, but some collectives have several
> thousand nodes (up to 9,000) and growing.  Running "mco inventory --lc"
> will usually report back in the 27k range, but more and more often that
> number is significantly lower (with some collectives not even showing up) -
> in the hundreds instead of tens of thousands...
> 
> Stopping and restarting the activemq brokers will "fix" this most of the
> time.
> 
> Running puppet 3.6.2, mcollective-puppet 1.7.2, and activemq 5.9.1
> 
> Currently I have 7 collectives configured; each collective has one or two
> brokers, but the "main" collective (also the largest) has 3 brokers
> (master + 2 slaves)
> 
> 
> ~]$ mco inventory --lc --dt=120
> 
>   Collective                     Nodes
>   ==========                     =====
>   col5_mcollective                 136
>   col4_mcollective                 282
>   col2_mcollective                1276
>   col7_mcollective                3059
>   col6_mcollective                3451
>   col3_mcollective                6744
>   col1_mcollective               12115
>   mcollective                    27064
> 
>                     Total nodes: 27064
> 
> 
> ~]$ mco inventory --lc --dt=120
> 
>   Collective                     Nodes
>   ==========                     =====
>   col5_mcollective                 138
>   col4_mcollective                 284
>   col7_mcollective                3062
>   col6_mcollective                3433
>   mcollective                     6918
> 
>                     Total nodes: 6918
> 
