We saw something similar. With only the default maximum of 4 puppetserver 
JRuby instances, we would often get 5 or 6 clients connecting at once, 
which in turn led to blocking and then a queue building up as more clients 
connected. We would check port 8140 and often see over 80 established 
connections.
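
For reference, a quick way to count those established connections 
(assuming the ss tool from iproute2 is available; netstat works 
similarly) is something like:

  # count established TCP connections on the puppetserver port
  # (the first output line is a header, so subtract 1 from the count)
  ss -tn state established '( sport = :8140 )' | wc -l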

Now that we have doubled max-active-instances to 8 and increased the JVM 
heap size to 4 GB, the concurrent connections can be handled and a queue 
no longer builds, so Puppet runs are much quicker and the server no longer 
gets so bogged down.
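
In case it helps, these are roughly the settings involved (paths can 
differ per distro, and this is just a sketch of our change, not a complete 
config):

  # /etc/puppetlabs/puppetserver/conf.d/puppetserver.conf
  jruby-puppet: {
      # roughly one JRuby instance per expected concurrent agent run,
      # keeping it at or below the number of CPU cores
      max-active-instances: 8
  }

  # JVM heap, set via JAVA_ARGS in /etc/sysconfig/puppetserver
  # (RHEL/CentOS) or /etc/default/puppetserver (Debian/Ubuntu)
  JAVA_ARGS="-Xms4g -Xmx4g"

The puppetserver service needs a restart after changing either of these.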

I hope this helps.

On Thursday, February 6, 2020 at 10:51:42 AM UTC, Martijn Grendelman wrote:
>
> Hi,
>
> A question about Puppetserver performance.
>
> For quite a while now, our primary Puppet server is suffering from severe 
> slowness and high CPU usage. We have tried to tweak its settings, giving it 
> more memory (Xmx = 6 GB at the moment) and toying with the 
> 'max-active-instances' setting to no avail. The server has 8 virtual cores 
> and 12 GB memory in total, to run Puppetserver, PuppetDB and PostgreSQL.
>
> Notably, after a restart, the performance is acceptable for a while 
> (several hours, up to almost a day), but then it plummets again.
>
> We figured that the server was just unable to cope with the load (we had 
> over 270 nodes talking to it in 30 min intervals), so we added a second 
> master that now takes more than half of that load (150 nodes). That did not 
> make any difference at all for the primary server. The secondary server 
> however, has no trouble at all dealing with the load we gave it.
>
> In the graph below, which displays catalog compilation times for both 
> servers, you can see the new master in green. It has very consistent high 
> performance. The old master is in yellow. After a restart, the compile 
> times are good (not great) for a while. The first dip represents ca. 4 
> hours, the second dip was 18 hours. At some point, the catalog compilation 
> times sky-rocket, as does the server load. 10 seconds in the graph below 
> corresponds to a server load of around 2, while 40 seconds corresponds to a 
> server load of around 5. It's the Puppetserver process using the CPU.
>
> The second server, the green line, has a consistent server load of around 
> 1, with 4 GB memory (2 GB for the Puppetserver JVM) and 2 cores (it's an 
> EC2 t3.medium).
>
> [graph: catalog compilation times for both servers; old master in 
> yellow, new master in green]
> If I have 110 nodes, doing two runs per hour, that each take 30 seconds to 
> run, I would still have a concurrency of less than 2, so Puppet causing a 
> consistent load of 5 seems strange. My first thought would be that it's 
> garbage collection or something like that, but the server has plenty of 
> memory (OS cache has 2GB).
>
> Any ideas on what makes the Puppetserver start using so much CPU? What 
> can we try to keep it down?
>
> Thanks,
> Martijn Grendelman
>
>
