I'm wondering if that puppetdb instance's queue would still grow if it weren't 
also handling the data from normal agent runs.

Maybe pause puppet agent runs until puppetdb is caught up?  Puppetdb may not be 
happy doing its regular work plus this cleanup.  Stopping the puppetserver 
service(s) is the cheap way to accomplish this.
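
A rough sketch, assuming systemd and the usual service names (adjust to your 
layout); the disable message is just an example:

    # the blunt approach: stop catalog compilation entirely
    systemctl stop puppetserver

    # or leave puppetserver up and disable agent runs on the nodes instead,
    # fanned out over ssh/mco however you normally push commands
    puppet agent --disable "letting puppetdb catch up"

    # re-enable once the queue has drained
    puppet agent --enable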

Another option would be to build another puppetdb (backed by your existing 
postgresql) and have the puppetserver instances use that, letting your existing 
puppetdb chew through its queue.  It sounds like postgresql is not the 
bottleneck.  I found that splitting puppetdb and postgresql onto separate hosts 
reduced my intermittent puppetdb queue backlogs.
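
A rough sketch of the wiring, assuming PuppetDB 3.x AIO paths and a made-up 
hostname (new-puppetdb.example.com) for the new instance; point its database 
config at your existing postgres, and the termini on the puppetservers at it:

    # on the new puppetdb host: /etc/puppetlabs/puppetdb/conf.d/database.ini
    [database]
    classname = org.postgresql.Driver
    subprotocol = postgresql
    subname = //your-existing-pg-host:5432/puppetdb
    username = puppetdb
    password = <same credentials as today>

    # on each puppetserver: /etc/puppetlabs/puppet/puppetdb.conf
    [main]
    server_urls = https://new-puppetdb.example.com:8081

The new host will also need a certificate from your CA before the 
puppetservers will talk to it on 8081.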

If neither resolves things, building a fresh set of puppetdb+postgresql hosts 
and pointing some puppetservers at it would settle whether your KahaDB queues 
still grow with the refactored fact, or whether your issues are just the 
backlog from the changeover.  That depends on how attached you are to the 
existing data.  (Not much, in my case; agent runs will refill it.)
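
Either way, a cheap check on whether the queue is actually draining is to 
watch the mq directory on disk (the path below assumes the AIO packaging; 
substitute your vardir) alongside the command queue depth on the dashboard:

    watch -n 60 du -sh /opt/puppetlabs/server/data/puppetdb/mq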

On Wed, Jul 05, 2017 at 06:38:36AM -0700, Peter Krawetzky wrote:
>    So after a change from the module owner whose facts were very, very
>    large, the Java CPU usage has been reduced significantly and it's running
>    much better.  However, now that the facts have changed for every single
>    node, the DB is doing a significant amount of work to clean things up,
>    and the KahaDB queue is still growing out of control.
> 
>    At this point it might be a better option to stop the puppetdb server,
>    shutdown postgresql, delete the data directory (after copying pg_hba.conf
>    and postgresql.conf to /tmp), init a new db, copy those 2 files from /tmp
>    back to their original spot, start postgresql and start puppetdb allowing
>    it to create everything it needs from scratch.  Any opinions?
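
If you do go the wipe-and-reinit route, it's roughly the following (service 
names and the data directory are assumed for a stock postgres install; adjust 
for your packaging).  One caveat: puppetdb recreates its own schema at 
startup, but the role and an empty database have to exist first.

    systemctl stop puppetdb
    systemctl stop postgresql
    cp /var/lib/pgsql/data/pg_hba.conf /var/lib/pgsql/data/postgresql.conf /tmp/
    rm -rf /var/lib/pgsql/data/*
    sudo -u postgres initdb -D /var/lib/pgsql/data
    cp /tmp/pg_hba.conf /tmp/postgresql.conf /var/lib/pgsql/data/
    chown postgres:postgres /var/lib/pgsql/data/*.conf
    systemctl start postgresql
    # recreate the puppetdb role and an empty database it can populate
    sudo -u postgres createuser -DRSP puppetdb
    sudo -u postgres createdb -O puppetdb puppetdb
    systemctl start puppetdb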
> 
>    On Wednesday, June 28, 2017 at 12:25:57 PM UTC-4, Peter Krawetzky wrote:
> 
>      Last Sunday we hit a wall on our 3.0.2 puppetdb server.  The cpu spiked
>      and the KahaDB logs started to grow eventually almost filling a
>      filesystem.  I stopped the service, removed the mq directory per a
>      troubleshooting guide, and restarted.  After several minutes the same
>      symptoms began again and I have not been able to come up with a puppetdb
>      or postgresql config to fix this.
>      We tried turning off storeconfig in the puppet.conf file on our puppet
>      master servers but that doesn't appear to have resolved the problem.  I
>      also can't find a good explanation as to what this parameter really does
>      or does not do even in the puppet server documentation.  Anyone have a
>      better insight into this?
>      Also is there a way to just turn off puppetdb?
>      I've attached a file that is a snapshot of the puppetdb dashboard.
>      Anyone experience anything like this?
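
On the storeconfigs question above: on the master it only controls whether 
compiled catalogs (and therefore exported/collected resources) are sent to the 
storeconfigs backend.  Report and fact submission are wired up separately, 
which is probably why flipping it off didn't stop the flow.  The stock 
puppetdb-termini integration touches roughly these three places (paths assume 
Puppet 4 AIO; yours may differ):

    # /etc/puppetlabs/puppet/puppet.conf on the masters
    [master]
    storeconfigs = true               # catalogs / exported resources -> puppetdb
    storeconfigs_backend = puppetdb
    reports = puppetdb                # report submission, separate setting

    # /etc/puppetlabs/puppet/routes.yaml -- fact submission, separate again
    master:
      facts:
        terminus: puppetdb
        cache: yaml

Backing all three out and restarting puppetserver is effectively how you "just 
turn off puppetdb" from the masters' side; the agents never talk to it 
directly.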
> 
