Re: [Puppet Users] Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: no parameter named 'quick_check'

2020-07-17 Thread Peter Krawetzky
Ok, I figured out the curl command, but I get this error: [root@mypuppetserver private_keys]# curl -v --header "Content-Type: application/json" --cert /etc/puppetlabs/puppet/ssl/certs/mypuppetserver.mydomain.com.pem --key /etc/puppetlabs/puppet/ssl/private_keys/mypuppetserver.mydomain.com.pem
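A hedged sketch of what the complete call might look like against the environment-cache endpoint referenced further down this thread (hostname and certificate paths come from the post; the port, HTTP method, and the requirement that this certname be whitelisted for the admin API are assumptions about a default install):

    # Sketch: flush the Puppet Server environment cache via the admin API.
    curl -i -X DELETE \
      --cacert /etc/puppetlabs/puppet/ssl/certs/ca.pem \
      --cert   /etc/puppetlabs/puppet/ssl/certs/mypuppetserver.mydomain.com.pem \
      --key    /etc/puppetlabs/puppet/ssl/private_keys/mypuppetserver.mydomain.com.pem \
      "https://mypuppetserver.mydomain.com:8140/puppet-admin-api/v1/environment-cache"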

Re: [Puppet Users] Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: no parameter named 'quick_check'

2020-07-17 Thread Peter Krawetzky
r/latest/admin-api/v1/environment-cache.html > > HTH, > Justin > > On Thu, Jul 16, 2020 at 10:52 AM Peter Krawetzky > wrote: > >> I've reviewed several 500 error posts in here but the answers seem to >> differ based on the situation. >> >> >>

[Puppet Users] Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: no parameter named 'quick_check'

2020-07-16 Thread Peter Krawetzky
I've reviewed several 500 error posts in here, but the answers seem to differ based on the situation. One of our developers modified code to include a parameter available in httpfile 0.1.9 called quick_check. We have two installations of puppetserver, one in the lab domain and one in production
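A frequent cause of "no parameter named ..." errors is that the environment compiling the catalog still carries an older copy of the module, or a stale environment cache. A hedged check (the module name comes from the post; the environment name and a standard AIO layout are assumptions):

    # Sketch: confirm which module version each Puppet Server actually serves.
    # Run on both the lab and production servers and compare.
    puppet module list --environment production | grep -i httpfile
    # If the on-disk version is current but compilations still fail, flushing
    # the environment cache (see the admin API call earlier in this thread)
    # or restarting puppetserver may be needed.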

[Puppet Users] PuppetDB Using Puppetlabs Postgresql Module on Linux

2019-12-17 Thread Peter Krawetzky
I was looking through the documentation and couldn't find my answer. I want to use the supported PuppetDB and PostgreSQL modules to install and manage both. I don't want to use the default database directory "/var/lib/postgresql/..." but want to specify my own. What do I use to point

[Puppet Users] Re: Puppet Log Directory Permissions

2019-06-06 Thread Peter Krawetzky
Interesting, thanks! On Tuesday, June 4, 2019 at 1:59:07 PM UTC-4, Peter Krawetzky wrote: > > I want to be able to ingest the puppet servers logs into splunk but the > owner of the directory is puppet:puppet and the permissions are > /var/log/puppetlabs/puppet rwxr-x---. Sin

[Puppet Users] Puppet Log Directory Permissions

2019-06-04 Thread Peter Krawetzky
I want to be able to ingest the Puppet server's logs into Splunk, but the owner of the directory is puppet:puppet and the permissions on /var/log/puppetlabs/puppet are rwxr-x---. Since "other" has no access, the splunk service will not be able to read the log files. Can I just change the
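Two common ways to give a log forwarder read access without opening the directory to everyone, sketched under the assumption that the Splunk forwarder runs as a local "splunk" user (and noting that package upgrades may reset hand-edited permissions):

    # Option 1: read via the existing group bits (files must be group-readable).
    usermod -a -G puppet splunk

    # Option 2: grant a read/execute ACL without changing owner, group, or mode.
    setfacl -R -m u:splunk:rX /var/log/puppetlabs/puppet
    setfacl -d -m u:splunk:rX /var/log/puppetlabs/puppet   # default ACL for new files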

[Puppet Users] Re: Latest version of lookup_http not in rubygems.org

2019-02-21 Thread Peter Krawetzky
Yeah, it looks like I did get this in reverse, but it doesn't explain why an SSL connection to couchdb is not working. On Tuesday, February 19, 2019 at 4:23:26 PM UTC-5, Peter Krawetzky wrote: > > I'm trying to make an SSL connection from puppetserver to a couchdb no-sql > database for hie

[Puppet Users] Latest version of lookup_http not in rubygems.org

2019-02-19 Thread Peter Krawetzky
I'm trying to make an SSL connection from puppetserver to a couchdb no-sql database for hiera lookup data. I have both hiera-http and lookup_http installed; however, the version of the lookup_http.rb file that gets installed by running the puppetserver gem install command is 1.0.3. The version I want
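A hedged sketch of checking and pinning the gem under the server's JRuby; the version number below is a placeholder, not the release the poster was after:

    # Sketch: see which lookup_http the puppetserver JRuby loads, then pin one.
    puppetserver gem list lookup_http
    puppetserver gem install lookup_http --version 1.0.4   # placeholder version
    systemctl restart puppetserver                          # pick up the new gem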

[Puppet Users] Fresh install of Opensource puppetdb on RHEL 7.3 using Postgresql 9.6 pg_log errors

2018-10-02 Thread Peter Krawetzky
Just installed a new copy of postgresql 9.6 on a server that was running 9.4, and upgraded puppetdb to 5.2.4 on the same server. After startup, pg_log has been throwing the following error: "ERROR: canceling autovacuum task". I suspect puppetdb is holding a lock but I'm not sure where. It also
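"canceling autovacuum task" usually means another session requested a conflicting lock (often DDL or a long-running transaction) and autovacuum backed off. A hedged way to see who is holding or waiting on locks (the database name and use of the postgres superuser are assumptions about a typical PuppetDB install):

    # Sketch: list ungranted lock requests and the sessions involved.
    sudo -u postgres psql -d puppetdb -c "
      SELECT a.pid, a.usename, l.relation::regclass, l.mode, l.granted, a.query
      FROM pg_locks l
      JOIN pg_stat_activity a ON a.pid = l.pid
      WHERE NOT l.granted;"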

[Puppet Users] Re: puppetserver-access.log

2017-10-02 Thread Peter Krawetzky
So I recycled the puppetserver service and it now appears the log is back to normal size over the course of time. I'm guessing something happened to cause puppetserver to dump more than it should have. On Monday, October 2, 2017 at 10:24:19 AM UTC-4, Peter Krawetzky wrote: > > We had

[Puppet Users] Re: puppetserver-access.log

2017-10-02 Thread Peter Krawetzky
Since I don't have a setting in the file, it defaults to info. Unless there is a bug. On Monday, October 2, 2017 at 10:24:19 AM UTC-4, Peter Krawetzky wrote: > > We had an odd situation happen earlier this morning. Puppet server > version 2.1.1 on RHEL7. > > I have 4 puppet

[Puppet Users] puppetserver-access.log

2017-10-02 Thread Peter Krawetzky
We had an odd situation happen earlier this morning. Puppet server version 2.1.1 on RHEL7. I have 4 puppet servers behind a load-balancing F5. One of our puppet servers' puppetserver-access.log grew to over 2 TB, to the point that it almost filled /var, which for a server is not good. I

[Puppet Users] PuppetDB Upgrade Question from 2.1.1-1 to 2.7.2-1

2017-09-21 Thread Peter Krawetzky
I'm doing a minor upgrade from 2.1.1-1 to 2.7.2-1 and was wondering whether the size of the database makes a difference in how long the upgrade takes. It's currently managing approximately 3,200 nodes in production. Testing in our lab environment did not run long, as we only manage about 500 nodes

[Puppet Users] Upgraded puppet server and hiera is not working

2017-07-21 Thread Peter Krawetzky
I upgraded the puppet server from 2.1.1-1 to 2.7.2-1 and at the same time the puppet agent was upgraded from 1.2.2-1 to 1.10.4-1. I read several different posts on this forum and others but I can't seem to get hiera 5 to work properly. I tried a couple of different hiera.yaml config files yet
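When debugging Hiera 5, tracing a single key usually shows which hiera.yaml and hierarchy level is (or is not) being consulted; a minimal sketch, with the key and node name purely illustrative:

    # Sketch: trace one key through the Hiera 5 hierarchy for a given node.
    puppet lookup some::key --environment production \
      --node agent01.example.com --explain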

[Puppet Users] Re: PuppetDB Curl Queries

2017-07-11 Thread Peter Krawetzky
Isn't that for the PE version? We are using open source. On Tuesday, July 11, 2017 at 11:48:35 AM UTC-4, Peter Krawetzky wrote: > > Using curl to query PuppetDB has got to be the most time-consuming thing > I've ever done. It took me almost 3 hours one day to create a curl query

[Puppet Users] PuppetDB Curl Queries

2017-07-11 Thread Peter Krawetzky
Using curl to query PuppetDB has got to be the most time-consuming thing I've ever done. It once took me almost 3 hours to build a curl query that I ended up writing as a SQL statement in 10 minutes, once I figured out the database structure. Does anyone have: 1. A documented list of
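For reference, a hedged sketch of the query API form, assuming a default open source PuppetDB listening on port 8080 (fact and node names are illustrative; newer releases also accept the more readable PQL syntax):

    # Which nodes report osfamily = RedHat (one fact row per matching node):
    curl -G http://localhost:8080/pdb/query/v4/facts \
      --data-urlencode 'query=["and", ["=", "name", "osfamily"], ["=", "value", "RedHat"]]'

    # Every fact for a single node:
    curl http://localhost:8080/pdb/query/v4/nodes/agent01.example.com/facts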

[Puppet Users] Re: PuppetDB low catalog-duplication rate Puppet DB 4.3.0

2017-07-10 Thread Peter Krawetzky
Yes, I am on v4, and the query just didn't return any results - no errors - so I assume I am using the correct curl command. Thanks On Wednesday, June 28, 2017 at 2:11:17 PM UTC-4, Mike Sharpton wrote: > > Hey all, > > I am hoping there is someone else in the same boat as I am. We are > running

[Puppet Users] Re: Compare node fact runs

2017-07-10 Thread Peter Krawetzky
Do you have a link to those posts Mike? On Thursday, July 6, 2017 at 12:54:37 PM UTC-4, Peter Krawetzky wrote: > > I'm seeing a lot of replace facts in the puppetdb server log. I googled > but can't find anything solid. > > Is there a way to compare facts for a node between ru

[Puppet Users] Puppet Minor Upgrade

2017-07-07 Thread Peter Krawetzky
I need a clarification on a comment in the Puppet upgrade docs. Does this mean (last sentence below) that I can upgrade the puppetdb servers before the puppetservers and puppet agents? It's the "nodes" comment that has me confused; I take it to mean it can go before anything. A minor upgrade is an

[Puppet Users] Re: PuppetDB low catalog-duplication rate Puppet DB 4.3.0

2017-07-07 Thread Peter Krawetzky
I went to run the curl command listed below and it came back with nothing, so I used pgAdmin to look at the catalogs table, and it's completely empty. The system has been running for almost 24 hours after dropping/creating the postgresql database. Any idea why the catalog table would be
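Rather than reading the Postgres tables directly, the catalogs endpoint shows whether PuppetDB has stored anything at all; a minimal sketch, assuming the default port and an illustrative certname:

    # Sketch: ask PuppetDB itself for stored catalogs.
    curl 'http://localhost:8080/pdb/query/v4/catalogs' | head -c 2000
    # Or just one node's catalog:
    curl 'http://localhost:8080/pdb/query/v4/catalogs/agent01.example.com'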

[Puppet Users] Re: PuppetDB - High CPU Large number of KahaDB files and very little work going to postgresql

2017-07-06 Thread Peter Krawetzky
, 2017 at 12:25:57 PM UTC-4, Peter Krawetzky wrote: > > Last Sunday we hit a wall on our 3.0.2 puppetdb server. The cpu spiked > and the KahaDB logs started to grow eventually almost filling a > filesystem. I stopped the service, removed the mq directory per a > troubl

[Puppet Users] Compare node fact runs

2017-07-06 Thread Peter Krawetzky
I'm seeing a lot of "replace facts" commands in the puppetdb server log. I googled but can't find anything solid. Is there a way to compare facts for a node between runs? Our agents run hourly. We are using open source PuppetDB 3.0.2. Thanks.
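PuppetDB only keeps the most recent factset per node, so comparing runs means snapshotting facts yourself; a hedged sketch (certname, port, and paths are illustrative, and jq is assumed to be installed):

    # Sketch: snapshot a node's facts each run and diff against the previous copy.
    NODE=agent01.example.com
    NOW=$(date +%Y%m%d%H%M)
    curl -s -G "http://localhost:8080/pdb/query/v4/nodes/${NODE}/facts" \
      --data-urlencode 'order_by=[{"field": "name", "order": "asc"}]' \
      | jq -S . > "/var/tmp/facts-${NODE}-${NOW}.json"
    # The first run has nothing to compare against.
    PREV=$(ls -1t /var/tmp/facts-${NODE}-*.json | sed -n 2p)
    [ -n "$PREV" ] && diff "$PREV" "/var/tmp/facts-${NODE}-${NOW}.json"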

[Puppet Users] Re: PuppetDB - High CPU Large number of KahaDB files and very little work going to postgresql

2017-07-06 Thread Peter Krawetzky
have to drop/create the DB. On Wednesday, June 28, 2017 at 12:25:57 PM UTC-4, Peter Krawetzky wrote: > > Last Sunday we hit a wall on our 3.0.2 puppetdb server. The cpu spiked > and the KahaDB logs started to grow eventually almost filling a > filesystem. I stopped the service, re

[Puppet Users] Re: PuppetDB - High CPU Large number of KahaDB files and very little work going to postgresql

2017-07-05 Thread Peter Krawetzky
Chris, that is my take on historical data as well. We have processes that export the data to a data warehouse for consumption by other apps. Missing some won't kill that process; it's as if the data never existed. On Wednesday, June 28, 2017 at 12:25:57 PM UTC-4, Peter Krawetzky wrote: > >

[Puppet Users] Re: PuppetDB - High CPU Large number of KahaDB files and very little work going to postgresql

2017-07-05 Thread Peter Krawetzky
postgresql and start puppetdb allowing it to create everything it needs from scratch. Any opinions? On Wednesday, June 28, 2017 at 12:25:57 PM UTC-4, Peter Krawetzky wrote: > > Last Sunday we hit a wall on our 3.0.2 puppetdb server. The cpu spiked > and the KahaDB logs started to grow e

[Puppet Users] Re: PuppetDB - High CPU Large number of KahaDB files and very little work going to postgresql

2017-06-30 Thread Peter Krawetzky
What is the actual definition of store_usage? It's not very specific. Does it limit the number of KahaDB logs? If so what happens when that limit is reached? On Wednesday, June 28, 2017 at 12:25:57 PM UTC-4, Peter Krawetzky wrote: > > Last Sunday we hit a wall on our 3.0.2 puppetdb

[Puppet Users] Re: PuppetDB - High CPU Large number of KahaDB files and very little work going to postgresql

2017-06-30 Thread Peter Krawetzky
PM UTC-4, Peter Krawetzky wrote: > > Last Sunday we hit a wall on our 3.0.2 puppetdb server. The cpu spiked > and the KahaDB logs started to grow eventually almost filling a > filesystem. I stopped the service, removed the mq directory per a > troubleshooting guide, and re

[Puppet Users] Re: PuppetDB - High CPU Large number of KahaDB files and very little work going to postgresql

2017-06-30 Thread Peter Krawetzky
So if I'm reading this correctly, the userlist#~(number) represents the value of the userlist fact? If that is the case, the userlist fact is 228 KB each and every time the puppet agent runs, with approximately 3,300 nodes. On Wednesday, June 28, 2017 at 12:25:57 PM UTC-4, Peter Krawetzky
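A rough back-of-the-envelope on what that implies, assuming hourly agent runs and that the full factset is resubmitted each run:

    # 228 KB x ~3300 nodes x 24 runs/day
    echo "scale=1; 228 * 3300 * 24 / 1024 / 1024" | bc    # ~17.2 GB/day of fact payload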

[Puppet Users] Re: PuppetDB - High CPU Large number of KahaDB files and very little work going to postgresql

2017-06-30 Thread Peter Krawetzky
82470', $220 = '537', $221 = '109', $222 = '67891543', $223 = '537', $224 = '706', $225 = '68711507' On Wednesday, June 28, 2017 at 12:25:57 PM UTC-4, Peter Krawetzky wrote: > > Last Sunday we hit a wall on our 3.0.2 puppetdb server. The cpu spiked > and the KahaDB logs started to g

[Puppet Users] Re: PuppetDB - High CPU Large number of KahaDB files and very little work going to postgresql

2017-06-30 Thread Peter Krawetzky
lete message from client < 2017-06-30 07:48:02.343 EDT >LOG: incomplete message from client < 2017-06-30 07:48:04.957 EDT >LOG: incomplete message from client < 2017-06-30 07:48:05.256 EDT >LOG: incomplete message from client On Wednesday, June 28, 2017 at 12:25:57 PM UTC-4,

[Puppet Users] Re: PuppetDB - High CPU Large number of KahaDB files and very little work going to postgresql

2017-06-30 Thread Peter Krawetzky
hat to 32. On Wednesday, June 28, 2017 at 12:25:57 PM UTC-4, Peter Krawetzky wrote: > > Last Sunday we hit a wall on our 3.0.2 puppetdb server. The cpu spiked > and the KahaDB logs started to grow eventually almost filling a > filesystem. I stopped the service, removed the mq d

[Puppet Users] Open Source PuppetDB Code

2017-06-29 Thread Peter Krawetzky
I did a little searching on GitHub but couldn't find it. Does anyone know where the source code for the PuppetDB server is? I'm really looking for the source code that contains the DML (insert, select, update, delete). Thanks.
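PuppetDB lives on GitHub under the puppetlabs organization; a quick, hedged way to locate the DML (the file path noted below reflects how the tree has typically been organized and may differ between releases):

    git clone https://github.com/puppetlabs/puppetdb.git
    # The storage code has typically lived in the Clojure "scf" namespaces,
    # e.g. src/puppetlabs/puppetdb/scf/storage.clj.
    grep -rl "INSERT INTO" puppetdb/src | head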

[Puppet Users] Re: PuppetDB - High CPU Large number of KahaDB files and very little work going to postgresql

2017-06-28 Thread Peter Krawetzky
I looked at both documents and the second one references the scheduler log files filling up. Mine are actually in the KahaDB directory. On Wednesday, June 28, 2017 at 12:25:57 PM UTC-4, Peter Krawetzky wrote: > > Last Sunday we hit a wall on our 3.0.2 puppetdb server. The cpu

[Puppet Users] Re: PuppetDB - High CPU Large number of KahaDB files and very little work going to postgresql

2017-06-28 Thread Peter Krawetzky
Hi Mike, thanks for the reply. I'll look at the docs and see what they say, but somehow I suspected that. And thanks for the tip on how to disable puppetdb. On Wednesday, June 28, 2017 at 12:25:57 PM UTC-4, Peter Krawetzky wrote: > > Last Sunday we hit a wall on our 3.0.2 puppetdb server. The cpu

[Puppet Users] PuppetDB - High CPU Large number of KahaDB files and very little work going to postgresql

2017-06-28 Thread Peter Krawetzky
Last Sunday we hit a wall on our 3.0.2 puppetdb server. The CPU spiked and the KahaDB logs started to grow, eventually almost filling a filesystem. I stopped the service, removed the mq directory per a troubleshooting guide, and restarted. After several minutes the same symptoms began again

[Puppet Users] Running multiple MySQL Instances on the same server

2013-05-17 Thread Peter Krawetzky
I was wondering whether someone has implemented the management of multiple MySQL instances on the same server using Puppet. Essentially, we want to use the same MySQL binaries but run multiple distinct MySQL instances, each listening on its own port number. Puppet Forge has a great MySQL