Re: [foreman-users] Re: PLP0034: The distributor indicated a failed response when publishing repository

2017-08-22 Thread Tony Coffman
Is it possible we have a regression on this issue? I know it was fixed back 
in September 2016 but I ran into what looks like the same issue with 
Katello 3.4.4 today.

I pulled in a new version of Puppet stdlib a week or two ago when I 
published a new version of some Content Views.

Today, I promoted those CVs to an environment that needed to be synced to 
a capsule, and I got this:

PLP0034: The distributor 
3-C7-Staging-puppet-61a82fb3-e03e-4866-8903-b671fe1bd9d7 indicated a failed 
response when publishing repository 
3-C7-Staging-puppet-61a82fb3-e03e-4866-8903-b671fe1bd9d7.


The detailed error info from the capsule sync task looks like a duplicate 
unit name again.

error:
code: PLP0034
data:
  distributor_id: 3-C7-Staging-puppet-61a82fb3-e03e-4866-8903-b671fe1bd9d7
  repo_id: 3-C7-Staging-puppet-61a82fb3-e03e-4866-8903-b671fe1bd9d7
  summary: duplicate unit names
description: The distributor 
3-C7-Staging-puppet-61a82fb3-e03e-4866-8903-b671fe1bd9d7
  indicated a failed response when publishing repository 
3-C7-Staging-puppet-61a82fb3-e03e-4866-8903-b671fe1bd9d7.
sub_errors: []
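For context, the "duplicate unit names" summary means the publish step found two units resolving to the same module name, e.g. the old and the newly pulled-in Puppet stdlib both present in the repository copy. A minimal illustration of that check (not Pulp's actual code; the unit dicts are hypothetical):

```python
from collections import Counter

def duplicate_unit_names(units):
    """Return module names present more than once; the repo publishes
    cleanly only when this list is empty."""
    counts = Counter(u["name"] for u in units)
    return sorted(name for name, n in counts.items() if n > 1)

# Hypothetical repo contents after pulling in a newer stdlib:
units = [
    {"name": "puppetlabs-stdlib", "version": "4.12.0"},
    {"name": "puppetlabs-stdlib", "version": "4.19.0"},  # newly published
    {"name": "puppetlabs-concat", "version": "2.2.0"},
]
print(duplicate_unit_names(units))  # -> ['puppetlabs-stdlib']
```

If only one version of each module lands in the repository copy that the capsule publishes (e.g. by republishing the Content View without the stale version), the distributor should stop rejecting it.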

-- 
You received this message because you are subscribed to the Google Groups 
"Foreman users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to foreman-users+unsubscr...@googlegroups.com.
To post to this group, send email to foreman-users@googlegroups.com.
Visit this group at https://groups.google.com/group/foreman-users.
For more options, visit https://groups.google.com/d/optout.


Re: [foreman-users] No facts in Json input for hooks after foreman discovery

2017-08-22 Thread lohit . valleru
Thank you for letting me know that it's the expected behavior.

I will use the API instead to get the facts.
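For reference, the two-step workaround (reformat the MAC encoded in the host name, look the host up via the discovery API, then fetch its facts) can be sketched as below. The endpoint paths and the `search=mac=` syntax are assumptions based on the Foreman 1.15 API v2; `name_to_mac` and `get_facts` are hypothetical helper names:

```python
import base64
import json
import re
import urllib.request

def name_to_mac(name):
    """Turn a discovered-host name like 'mac00163e5426c9' into '00:16:3e:54:26:c9'."""
    m = re.fullmatch(r"mac([0-9a-fA-F]{12})", name)
    if not m:
        raise ValueError("unexpected discovered-host name: %s" % name)
    h = m.group(1).lower()
    return ":".join(h[i:i + 2] for i in range(0, 12, 2))

def get_facts(base_url, user, password, host_name):
    """Look up the discovered host by MAC, then fetch its facts.
    Endpoint paths assumed from the Foreman API v2 docs; adjust to your install."""
    token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()

    def get(path):
        req = urllib.request.Request(base_url + path,
                                     headers={"Authorization": "Basic " + token})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    mac = name_to_mac(host_name)
    hosts = get("/api/v2/discovered_hosts?search=mac=" + mac)["results"]
    host_id = hosts[0]["id"]
    return get("/api/v2/hosts/%s/facts" % host_id)

print(name_to_mac("mac00163e5426c9"))  # -> 00:16:3e:54:26:c9
```

`get_facts("https://foreman.example.com", "admin", "secret", "mac00163e5426c9")` would then return the fact hash for that host.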


On Tuesday, August 22, 2017 at 8:28:09 AM UTC-4, Lukas Zapletal wrote:
>
> Hello, 
>
> facts are not reported via hooks. 
>
> LZ 
>
> On Mon, Aug 21, 2017 at 4:37 PM,   
> wrote: 
> > Hello, 
> > 
> > The issue is that I don't see facts in the input JSON after Foreman
> > discovers a VM/baremetal.
> > 
> > This is the workflow that I am trying with hooks and Foreman discovery:
> >
> > VM/baremetal gets discovered -> JSON input to hooks -> hooks use the
> > facts from the JSON input to add more facts from CMDB -> facts get
> > uploaded to Foreman.
> > 
> > I have created the following hooks: 
> > 
> >
> > /usr/share/foreman/config/hooks/host/discovered/after_create/10-logger.py
> >
> > ls /usr/share/foreman-community/hooks/ 
> > functions.py   functions.pyc  __init__.py 
> > 
> > The scripts just get the input JSON and try to output the JSON to a temp
> > directory, to help me understand the structure.
> > However, I don't see facts or any other useful information in the input
> > JSON.
> > 
> > { 
> > "id": 22, 
> > "name": "mac00163e5426c9", 
> > "last_compile": null, 
> > "last_report": null, 
> > "updated_at": "2017-08-18T20:13:44.058Z", 
> > "created_at": "2017-08-18T20:13:44.058Z", 
> > "root_pass": null, 
> > "architecture_id": null, 
> > "operatingsystem_id": null, 
> > "environment_id": null, 
> > "ptable_id": null, 
> > "medium_id": null, 
> > "build": false, 
> > "comment": null, 
> > "disk": null, 
> > "installed_at": null, 
> > "model_id": null, 
> > "hostgroup_id": null, 
> > "owner_id": null, 
> > "owner_type": null, 
> > "enabled": true, 
> > "puppet_ca_proxy_id": null, 
> > "managed": false, 
> > "use_image": null, 
> > "image_file": null, 
> > "uuid": null, 
> > "compute_resource_id": null, 
> > "puppet_proxy_id": null, 
> > "certname": null, 
> > "image_id": null, 
> > "organization_id": null, 
> > "location_id": null, 
> > "otp": null, 
> > "realm_id": null, 
> > "compute_profile_id": null, 
> > "provision_method": null, 
> > "grub_pass": "", 
> > "global_status": 0, 
> > "lookup_value_matcher": null, 
> > "pxe_loader": null, 
> > "discovery_rule_id": null 
> > } 
> > 
> > 
> > The workaround that I will have to use to get facts is:
> > 1. Extract the MAC address from the name -> reformat it -> use that to
> > query the id of the host with the discovery API.
> > 2. Use the id to query the facts of that host.
> > 
> > Do I have to follow the above workaround to get the facts of the
> > discovered system, or am I missing something?
> > It would be so much easier if I could just get the facts in the input
> > JSON to the hook.
> > 
> > I am using the following versions of Foreman and its plugins on CentOS 7: 
> > 
> > tfm-rubygem-foreman_setup-5.0.0-1.fm1_13.el7.noarch 
> > foreman-release-1.15.3-1.el7.noarch 
> > foreman-installer-1.15.3-1.el7.noarch 
> > foreman-libvirt-1.15.3-1.el7.noarch 
> > foreman-postgresql-1.15.3-1.el7.noarch 
> > tfm-rubygem-foreman_hooks-0.3.14-1.fm1_15.el7.noarch 
> > foreman-selinux-1.15.3-1.el7.noarch 
> > foreman-debug-1.15.3-1.el7.noarch 
> > foreman-release-scl-3-1.el7.noarch 
> > tfm-rubygem-hammer_cli_foreman-0.10.2-1.el7.noarch 
> > tfm-rubygem-foreman_discovery-9.1.1-1.fm1_15.el7.noarch 
> > foreman-cli-1.15.3-1.el7.noarch 
> > tfm-rubygem-foreman_memcache-0.0.6-1.fm1_15.el7.noarch 
> > foreman-proxy-1.15.3-1.el7.noarch 
> > foreman-1.15.3-1.el7.noarch 
> > 
> > Thanks, 
> > Lohit 
> > 
> > 
>
>
>
> -- 
> Later, 
>   Lukas @lzap Zapletal 
>



Re: [foreman-users] [katello 2.4] passenger status broken

2017-08-22 Thread Eric D Helms
The patch from the issue designed to fix this (
https://github.com/theforeman/foreman-packaging/blob/rpm/develop/rubygem-passenger/rubygem-passenger-4.0.18-tmpdir.patch)
is still present, so you can't judge entirely from the versioning what's
available as a fix. What is the error you see?

On Mon, Aug 21, 2017 at 11:51 AM, Charlie Derwent <
shelltoesupers...@gmail.com> wrote:

> Apologies for resurrecting an old thread, but this looks like bug
> http://projects.theforeman.org/issues/8392, and it is still a problem in
> 1.15.3. It appears it was fixed in Satellite
> (https://access.redhat.com/errata/RHBA-2015:1911, BZ 1163380 -
> passenger-status broken on sat6 on rhel7), which uses
> ruby193-rubygem-passenger-4.0.18-20.el7sat.src.rpm, while Foreman is still
> running tfm-rubygem-passenger-4.0.18-9.11.el7.x86_64.rpm.
>
> Apologies if I'm misreading the SCL versioning.
>
> Thanks
> Charlie
>
> On Thursday, 6 October 2016 15:59:14 UTC+1, Edson Manners wrote:
>>
>> Unfortunately, no. As I think you've seen in the forums, someone gave me
>> some hints, but they never helped either. It's still an issue for us, as we
>> seem to regularly overrun Katello's ability to process Puppet requests and
>> need this to tune the Apache web server. I'd be very interested in whatever
>> you found.
>>
>> On 10/6/2016 2:24 AM, Matthew Wilmott wrote:
>>
>> Did this ever get fixed?
>>
>> We use telegraf to monitor passenger-status and send to influx/grafana
>>
>> We have the same issue; regardless of how we call passenger-status, it
>> insists Passenger isn't running...
>>
>> On Thursday, July 14, 2016 at 6:01:56 AM UTC+10, Edson Manners wrote:
>>>
>>> Thanks for the replies Eric. I tried those commands on both the current
>>> production server and a newly rebuilt test server using only the commands
>>> from the katello install page and got the following output:
>>>
>>> [root@katello ~]# scl enable tfm '/opt/theforeman/tfm/root/usr/bin/passenger-status'
>>> ERROR: Phusion Passenger doesn't seem to be running.
>>> [root@katello-test ~]# scl enable tfm '/opt/theforeman/tfm/root/usr/bin/passenger-status'
>>> ERROR: Phusion Passenger doesn't seem to be running.
>>>
>>> [root@katello ~]# /usr/sbin/passenger-status
>>> ERROR: Phusion Passenger doesn't seem to be running.
>>> [root@katello-test ~]# /usr/sbin/passenger-status
>>> ERROR: Phusion Passenger doesn't seem to be running.
>>>
>>> If you look closely you'll see that one machine is katello and the other
>>> is katello-test and they both behave the same.
>>>
>>>
>>>
>>> On Thursday, July 7, 2016 at 3:21:52 PM UTC-4, Eric Helms wrote:



 On Tue, Jul 5, 2016 at 10:55 AM, Edson Manners wrote:

> I've been struggling with this issue for a while and finally feel the
> need to seek external help.
>
> We used to run Foreman 1.9.3 on RHEL 7.2. The passenger-status command
> was used to tune the puppetmaster when the server got overwhelmed.
>
> [root@foreman ~]# cat /etc/redhat-release
> Red Hat Enterprise Linux Server release 7.2 (Maipo)
> [root@foreman ~]# which passenger-status
> /usr/bin/passenger-status
> [root@foreman ~]# rpm -q --whatprovides /usr/bin/passenger-status
> rubygem-passenger-4.0.18-9.8.el7.x86_64
> [root@foreman ~]# /usr/bin/passenger-status
> Version : 4.0.18
> Date: 2016-07-05 10:44:15 -0400
> Instance: 3376
> --- General information ---
> Max pool size : 48
> Processes : 3
> Requests in top-level queue : 0
>
> --- Application groups ---
> /usr/share/foreman#default:
>   App root: /usr/share/foreman
>   Requests in queue: 0
>   * PID: 18170   Sessions: 0   Processed: 622   Uptime: 4h 24m 28s
>     CPU: 0%   Memory: 232M   Last used: 6s ago
> ...
>
>
> We've moved to katello 2.4 on CentOS 7.2 and now passenger-status no
> longer works out of the box.
>
> [root@katello-test emanners]# cat /etc/redhat-release
> CentOS Linux release 7.2.1511 (Core)
> [root@katello-test emanners]# which passenger-status
> /sbin/passenger-status
> [root@katello-test emanners]# rpm -q --whatprovides /usr/sbin/passenger-status
> passenger-4.0.53-4.el7.x86_64
> [root@katello-test emanners]# /usr/sbin/passenger-status
> ERROR: Phusion Passenger doesn't seem to be running.
>
> I've managed to find a few posts on Google (not Katello related) that
> suggest the reason is multiple copies of passenger on the host.
> [root@katello-test ~]# locate passenger-status
> /opt/theforeman/tfm/root/usr/bin/passenger-status
> /opt/theforeman/tfm/root/usr/share/gems/gems/passenger-4.0.18/bin/passenger-status
> /opt/theforeman/tfm/root/usr/share/man/man8/passenger-status.8.gz
>

 This set of Passenger libraries is used for running the Foreman web
 application under 

Re: [foreman-users] Re: Katello 2.4.2 -> 3.0.0 upgrade broke katello-agent everywhere

2017-08-22 Thread Eric D Helms
Do new, cleanly registered clients face the same problem?

Unfortunately, you are on a version that is four versions behind the latest,
and that makes it harder for us to debug and provide support.
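One way to narrow down connect/drop cycles like the goferd log quoted below: when TCP and telnet succeed but goferd disconnects about ten seconds after connecting, the TLS layer is a common suspect. A generic probe (not gofer code; host, port, and CA path are taken from the log) that attempts the handshake the broker expects:

```python
import socket
import ssl

def check_amqps(host, port=5647, ca="/etc/rhsm/ca/katello-default-ca.pem"):
    """Open a TCP connection, then attempt a TLS handshake over it.
    An ssl.SSLError here, with the TCP connect succeeding, points at a
    certificate/CA problem rather than a network one."""
    ctx = ssl.create_default_context(cafile=ca)
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. 'TLSv1.2' when the handshake works
```

Running `check_amqps("katello.internal")` from an affected client separates the two failure modes: a plain `OSError` means the network path is the problem, while an `ssl.SSLError` suggests the qpid broker certificate or the configured CA is.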

On Tue, Aug 22, 2017 at 12:10 AM, Nicholas Carter wrote:

> Good afternoon,
>
> Any news on this front? I'm seeing the same errors within my journal for
> gofer.
>
> On Wednesday, June 29, 2016 at 2:20:00 AM UTC-4, Nick Cammorato wrote:
>>
>> Hi Everyone,
>>
>> Having a few post-update blues, and I'm out of things I can think of to
>> look at. For the most part, everything went pretty well during the update.
>> However, while updating my capsules I noticed that A. posting package
>> profiles started to take forever and B. more importantly, goferd seemed
>> not to be working:
>>
>> I noticed my pulp repos weren't syncing and I couldn't issue any
>> commands, so I hopped on a box on the same subnet as the katello server
>> and checked out the katello-agent.
>>
>> Jun 28 21:47:11 ipatest.internal systemd[1]: Starting Gofer Agent...
>>> Jun 28 21:47:11 ipatest.internal goferd[13681]: [INFO][Thread-1]
>>> gofer.rmi.store:114 - Using: /var/lib/gofer/messaging/pending/demo
>>> Jun 28 21:47:11 ipatest.internal goferd[13681]: [WARNING][MainThread]
>>> gofer.agent.plugin:639 - plugin:demo, DISABLED
>>> Jun 28 21:47:11 ipatest.internal goferd[13681]: [INFO][Thread-2]
>>> gofer.rmi.store:114 - Using: /var/lib/gofer/messaging/pending/katelloplugin
>>> Jun 28 21:47:11 ipatest.internal goferd[13681]: [INFO][Thread-3]
>>> gofer.rmi.store:114 - Using: /var/lib/gofer/messaging/pending/katelloplugin
>>> Jun 28 21:47:11 ipatest.internal goferd[13681]: [INFO][MainThread]
>>> gofer.agent.plugin:682 - plugin:katelloplugin loaded using:
>>> /usr/lib/gofer/plugins/katelloplugin.py
>>> Jun 28 21:47:11 ipatest.internal goferd[13681]: [INFO][MainThread]
>>> rhsm.connection:778 - Connection built: host=katello.internal port=443
>>> handler=/rhsm auth=identity_cert ca_dir=/etc/rhsm/ca/ verify
>>> Jun 28 21:47:17 ipatest.internal goferd[13681]: [INFO][MainThread]
>>> katelloplugin:177 - Using /etc/rhsm/ca/katello-default-ca.pem as the ca
>>> cert for qpid connection
>>> Jun 28 21:47:17 ipatest.internal goferd[13681]: [INFO][MainThread]
>>> rhsm.connection:778 - Connection built: host=katello.internal port=443
>>> handler=/rhsm auth=identity_cert ca_dir=/etc/rhsm/ca/ verify
>>> Jun 28 21:47:17 ipatest.internal goferd[13681]: Loaded plugins:
>>> fastestmirror, product-id
>>> Jun 28 21:47:17 ipatest.internal goferd[13681]: [INFO][MainThread]
>>> katelloplugin:352 - reporting: {'enabled_repos': {'repos': [{'baseurl': ['
>>> https://katello.internal/pulp/repos/Resilient_Systems/pro
>>> Jun 28 21:47:42 ipatest.internal goferd[13681]: [INFO][MainThread]
>>> gofer.agent.main:87 - agent started.
>>> Jun 28 21:47:42 ipatest.internal goferd[13681]: [INFO][worker-0]
>>> gofer.messaging.adapter.connect:28 - connecting:
>>> proton+amqps://katello.internal:5647
>>> Jun 28 21:47:42 ipatest.internal goferd[13681]: [INFO][worker-0]
>>> gofer.messaging.adapter.proton.connection:87 - open: URL:
>>> amqps://katello.internal:5647|SSL: ca: /etc/rhsm/ca/katello-default-ca.pem|
>>> Jun 28 21:47:42 ipatest.internal goferd[13681]: [INFO][worker-0]
>>> root:510 - connecting to katello.internal:5647...
>>> Jun 28 21:47:52 ipatest.internal goferd[13681]: [INFO][worker-0]
>>> root:559 - Disconnected
>>> Jun 28 21:47:52 ipatest.internal goferd[13681]: [ERROR][worker-0]
>>> gofer.messaging.adapter.connect:33 - connect:
>>> proton+amqps://katello.internal:5647, failed: Connection
>>> amqps://katello.internal.resi
>>> Jun 28 21:47:52 ipatest.internal goferd[13681]: [INFO][worker-0]
>>> gofer.messaging.adapter.connect:35 - retry in 10 seconds
>>> Jun 28 21:48:02 ipatest.internal goferd[13681]: [INFO][worker-0]
>>> gofer.messaging.adapter.connect:28 - connecting:
>>> proton+amqps://katello.internal:5647
>>> Jun 28 21:48:03 ipatest.internal goferd[13681]: [INFO][worker-0]
>>> gofer.messaging.adapter.proton.connection:87 - open: URL:
>>> amqps://katello.internal:5647|SSL: ca: /etc/rhsm/ca/katello-default-ca.pem|
>>> Jun 28 21:48:03 ipatest.internal goferd[13681]: [INFO][worker-0]
>>> root:510 - connecting to katello.internal:5647...
>>> Jun 28 21:48:13 ipatest.internal goferd[13681]: [INFO][worker-0]
>>> root:559 - Disconnected
>>> Jun 28 21:48:13 ipatest.internal goferd[13681]: [ERROR][worker-0]
>>> gofer.messaging.adapter.connect:33 - connect:
>>> proton+amqps://katello.internal:5647, failed: Connection
>>> amqps://katello.internal.resi
>>> Jun 28 21:48:13 ipatest.internal goferd[13681]: [INFO][worker-0]
>>> gofer.messaging.adapter.connect:35 - retry in 12 seconds
>>
>>
>> There's no firewall involved here, and I am able to see a connection
>> establish and then go away on 5647 on both machines. They are able to
>> telnet to each other on 5647 as well, and I was able to repro with SELinux
>> and firewalld off on both.
>>
>> Curious, I hopped 

Re: [foreman-users] No facts in Json input for hooks after foreman discovery

2017-08-22 Thread Lukas Zapletal
Hello,

facts are not reported via hooks.

LZ



-- 
Later,
  Lukas @lzap Zapletal
