[Puppet Users] Wash 0.20.1 now available (rollup since 0.18.0)

2020-02-06 Thread Puppet Product Updates
Wash 0.20.1 is now available. We've done several small releases that
enhanced external plugins. The result is the release of a Wash plugin for
Bolt that enables exploring targets in your Bolt inventory via Wash.

See https://github.com/puppetlabs/wash/releases for recent Wash release
notes.

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-users/CABy1mMKrHyxdPOTJf5KmFkWAVa87dfWSyy5cea8uavBaQmPvrw%40mail.gmail.com.


[Puppet Users] PDK 1.16.0 now available

2020-02-06 Thread Puppet Product Updates
Hello!

The Puppet Developer Experience team is pleased to announce the latest
release of the Puppet Development Kit (PDK), version 1.16.0.

Highlights from the 1.16.0 release include:

- Added a new "use_litmus" setting for auto-generated Travis CI
configurations to make it easier to adopt Puppet Litmus in your module CI
pipelines.
- PDK will now correctly place new files based on the root of your module
even if you invoke `pdk new` from within a subdirectory of your module.
- To ensure that modules are compatible with all Puppet Masters regardless
of their locale, `pdk module build` will now reject files that contain
non-ASCII characters in their name.

Reminder: As of PDK 1.14.1, use of the PDK with Ruby versions prior to
2.4.0 is deprecated and a warning will be issued. PDK 1.16.0 is still
fully functional back to Ruby 2.1.9; however, we are projecting a PDK 2.0.0
release in early 2020 that will eliminate support for Ruby < 2.4.0.

You can review the full release notes at:
https://puppet.com/docs/pdk/1.x/release_notes_pdk.html#release-notes-pdk-x.16

To install or upgrade to this new version, use your platform's package
manager (see https://puppet.com/docs/pdk/1.x/pdk_install.html) or download
the packages directly for Windows, macOS, and Linux platforms at
https://puppet.com/download-puppet-development-kit.

Thanks!



Re: [Puppet Users] Re: Puppetserver performance plummeting a few hours after startup

2020-02-06 Thread Josh Cooper
On Thu, Feb 6, 2020 at 9:04 AM Justin Stoller  wrote:

> Yvan your issue sounds like https://tickets.puppetlabs.com/browse/PUP-3647,
> do you know if that is fixed now, or has regressed since then?
>

To add to what Justin said, the state file is only supposed to be
loaded/updated by the puppet agent process. It's possible the file is
accidentally getting loaded in puppetserver, or maybe your puppet agent is
an older version that doesn't have the fix?

In PUP-3647, the statettl puppet setting controls how long to keep entries
in the state file. It defaults to 32 days. You may want to set it to a
lower value like "1d", though see the configuration reference page for
how that affects scheduled resources. Also note that setting it to 0
disables pruning entirely, so the file can grow unbounded, which honestly
seems wrong. I'd expect 0 to never cache...
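
For anyone wanting to try this: the setting goes in the agent's
puppet.conf. A sketch, using the "1d" example value from above (not a
recommendation for every environment):

```ini
# puppet.conf on the agent node -- statettl defaults to 32d;
# "1d" is the example value suggested above
[agent]
statettl = 1d
```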

Josh

-- 
Josh Cooper | Software Engineer
j...@puppet.com



Re: [Puppet Users] Re: Puppetserver performance plummeting a few hours after startup

2020-02-06 Thread Justin Stoller
Yvan your issue sounds like https://tickets.puppetlabs.com/browse/PUP-3647,
do you know if that is fixed now, or has regressed since then?

Your issue does sound like a CodeCache or Metaspace issue.

One tunable you didn't mention is "max-active-instances". I've found a
bunch of folks who turned that very low to combat leaky code in 5.x or
4.x, despite it causing Puppet and the Ruby runtime to be reloaded
frequently. In 6.x that loading became much more expensive, so small values
of "max-active-instances" can be very detrimental to performance (and
contribute to excessive Metaspace/CodeCache usage).

This is also assuming that your servers are both on 6.x and both at the
same version. Can you confirm that? There are recent improvements in Server
performance that could contribute to (though probably not completely
explain) the difference in performance you're seeing, if your new Server is
on the latest version and your old server hasn't been upgraded in a few
months.
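
For reference, this tunable lives in the jruby-puppet section of
puppetserver.conf; a sketch (the value 4 is only an example, size it to
your cores and heap):

```
# /etc/puppetlabs/puppetserver/conf.d/puppetserver.conf (example value)
jruby-puppet: {
    # on 6.x, avoid very low values here -- pool reloads are expensive
    max-active-instances: 4
}
```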

HTH,
Justin



On Thu, Feb 6, 2020 at 8:43 AM KevinR  wrote:

> [quoted text trimmed; KevinR's message appears in full below]

[Puppet Users] Re: Puppetserver performance plummeting a few hours after startup

2020-02-06 Thread KevinR
Hi Martijn,

it sounds like you have a sub-optimal combination of:

   - The amount of JRubies
   - The total amount of java heap memory for puppetserver
   - The size of your code base

This typically causes the kind of problems you're experiencing. What's 
happening, in a nutshell, is that puppet is loading so much code into 
memory that it starts running out of it and starts performing garbage 
collection more and more aggressively. In the end, 95% of all cpu cycles 
are spent on garbage collection and you don't have any cpu cycles left 
over to actually do work, like compiling catalogs...

To understand how Puppet loads code into memory:

Your code base is:  ( [ size of your control-repo ] + [ size of all the 
modules from the Puppetfile ] )  x  [ the number of puppet code 
environments ]
So let's say:

   - your control repo is 5MB in size
   - all modules together are 95MB in size
   - you have 4 code environments: development, testing, acceptance and 
   production

That's 100MB of code to load into memory, per environment. For 4 
environments, that's 400MB.
A different way to get this number directly is to run *du -h 
/etc/puppetlabs/code/environments* on the puppet master and look at the 
size reported for */etc/puppetlabs/code/environments*.

Now every JRuby will load that entire code base into memory. So if you have 
4 JRubies, that's 1600MB of java heap memory that's actually needed. You 
can imagine what problems will happen if there isn't this much heap memory 
configured...

If you're using the defaults, Puppet will create the same number of 
JRubies as the number of cpu cores on your master, minus 1, with a maximum 
of 4 JRubies for the system.
If you override the defaults, you can specify any number of JRubies you 
want with the max-active-instances setting.

So by default a 2-cpu puppet master will create 1 JRuby, a 4-cpu puppet 
master will create 3 JRubies, an 8-cpu puppet master will create 4 JRubies.

So now you know how to determine the amount of java heap memory you need to 
configure, which you can do by configuring the -Xmx and -Xms options in 
the JAVA_ARGS section of the puppetserver startup command.
Then finally make sure the host has enough physical memory available to 
provide this increased amount of java heap memory.

Once enough java heap memory is provided, you'll see the cpu usage stay 
stable.
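
The arithmetic above can be sketched as a quick shell calculation (the
numbers are the example figures from this message, not measurements from
your system):

```shell
# Rough heap sizing per the rule of thumb above:
#   heap needed ~= code size per environment x environments x JRubies
CODE_MB=100      # control repo (5MB) + all modules (95MB), per environment
ENVIRONMENTS=4   # development, testing, acceptance, production
JRUBIES=4
HEAP_MB=$((CODE_MB * ENVIRONMENTS * JRUBIES))
echo "Configure at least -Xmx${HEAP_MB}m"
```

Plug in your own `du -h` output and JRuby count to get a starting point
for -Xmx.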

Kind regards,

Kevin Reeuwijk

Principal Sales Engineer @ Puppet

On Thursday, February 6, 2020 at 11:51:42 AM UTC+1, Martijn Grendelman 
wrote:
> [quoted text trimmed; Martijn's original message appears in full below]


[Puppet Users] Re: Puppetserver performance plummeting a few hours after startup

2020-02-06 Thread Matthias Baur
Hey Martijn,

This sounds similar to what we experienced after upgrading to Puppetserver 
6. Are you running Puppetserver 6? If so, please check the CodeCache 
settings. See 
https://puppet.com/docs/puppetserver/6.3/tuning_guide.html#potential-java-args-settings
for further information.
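
A sketch of what that tuning can look like (the paths and the 512m value
are examples; check the linked guide for sizing appropriate to your JRuby
count):

```
# /etc/sysconfig/puppetserver (Debian-family: /etc/default/puppetserver)
JAVA_ARGS="-Xms2g -Xmx2g -XX:ReservedCodeCacheSize=512m"
```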

Regards,
Matthias

On Thursday, February 6, 2020 at 11:51:42 AM UTC+1, Martijn Grendelman wrote:
> [quoted text trimmed; Martijn's original message appears in full below]



Re: [Puppet Users] Puppetserver performance plummeting a few hours after startup

2020-02-06 Thread Yvan Broccard
Hi,

I had the same issue a couple of months ago. Catalog compilation on the
puppet server was taking more and more time for no reason. I added more
Puppet servers to take the load, but the execution time on that "first"
server never came back down.

After some troubleshooting, I noticed that there was one file growing
and growing, and it took all the resources of the server:

/opt/puppetlabs/puppet/cache/state/state.yaml

This file was getting huge, filled with report file entries.
I truncated this file and then voilĂ , everything was back in good shape.

I then added this to my puppetserver manifest:

  # let's remove growing state.yaml if it becomes too big
  tidy { '/opt/puppetlabs/puppet/cache/state/state.yaml':
    size => '10m',
  }

Have a look if you suffer the same issue ...
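
A quick way to check (the path is from the message above; emptying the
file in place is Yvan's fix, and the agent rebuilds it on its next run):

```shell
STATE=/opt/puppetlabs/puppet/cache/state/state.yaml
# report the current size of the state file
du -h "$STATE"
# if it has grown huge, empty it in place without removing it
: > "$STATE"
```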

Cheers

Yvan B

On Thu, Feb 6, 2020 at 11:51 AM Martijn Grendelman <
martijngrendel...@gmail.com> wrote:

> [quoted text trimmed; Martijn's original message appears in full below]



[Puppet Users] Puppetserver performance plummeting a few hours after startup

2020-02-06 Thread Martijn Grendelman
Hi,

A question about Puppetserver performance.

For quite a while now, our primary Puppet server is suffering from severe 
slowness and high CPU usage. We have tried to tweak its settings, giving it 
more memory (Xmx = 6 GB at the moment) and toying with the 
'max-active-instances' setting, to no avail. The server has 8 virtual cores 
and 12 GB memory in total, to run Puppetserver, PuppetDB and PostgreSQL.

Notably, after a restart, the performance is acceptable for a while 
(several hours, up to almost a day), but then it plummets again.

We figured that the server was just unable to cope with the load (we had 
over 270 nodes talking to it in 30 min intervals), so we added a second 
master that now takes more than half of that load (150 nodes). That did not 
make any difference at all for the primary server. The secondary server 
however, has no trouble at all dealing with the load we gave it.

In the graph below, which displays catalog compilation times for both 
servers, you can see the new master in green. It has consistently high 
performance. The old master is in yellow. After a restart, the compile 
times are good (not great) for a while. The first dip represents ca. 4 
hours, the second dip was 18 hours. At some point, the catalog compilation 
times sky-rocket, as does the server load. 10 seconds in the graph below 
corresponds to a server load of around 2, while 40 seconds corresponds to a 
server load of around 5. It's the Puppetserver process using the CPU.

The second server, the green line, has a consistent server load of around 
1, with 4 GB memory (2 GB for the Puppetserver JVM) and 2 cores (it's an 
EC2 t3.medium).



If I have 110 nodes, doing two runs per hour, that each take 30 seconds to 
run, I would still have a concurrency of less than 2, so Puppet causing a 
consistent load of 5 seems strange. My first thought would be that it's 
garbage collection or something like that, but the server has plenty of 
memory (OS cache has 2GB).

Any ideas on what makes the Puppetserver start using so much CPU? What 
can we try to keep it down?

Thanks,
Martijn Grendelman
