[foreman-users] Re: Katello 2.4.2 -> 3.0.0 upgrade broke katello-agent everywhere

2017-08-21 Thread Nicholas Carter
Good afternoon,

Any news on this front? I'm seeing the same errors in my journal for 
goferd.
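
One thing worth checking, since the journal below shows the connection 
dropping exactly when SSL is negotiated on 5647: does a plain TLS 
handshake against that port succeed with the CA file goferd says it 
uses? Here is a minimal stdlib Python probe (host, port and CA path are 
the ones from the goferd log; adjust for your environment):

#!/usr/bin/env python
# Minimal TLS probe against the qdrouterd AMQPS port, using the same CA
# file goferd logs. Everything here is stdlib; host, port and CA path
# come from the journal output quoted below.
import socket
import ssl

HOST = 'katello.internal'                   # your Katello/capsule FQDN
PORT = 5647                                 # qdrouterd AMQPS listener
CA = '/etc/rhsm/ca/katello-default-ca.pem'  # CA goferd reports using

ctx = ssl.create_default_context(cafile=CA)
sock = socket.create_connection((HOST, PORT), timeout=10)
try:
    tls = ctx.wrap_socket(sock, server_hostname=HOST)
    print('handshake OK, protocol: %s' % tls.version())
    tls.close()
except ssl.SSLError as exc:
    # A verification failure here means the client CA bundle no longer
    # matches the certificate qdrouterd serves after the upgrade.
    print('TLS handshake failed: %s' % exc)

If the handshake fails verification, re-installing the 
katello-ca-consumer-latest package from the server may refresh the 
client CA bundle.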

On Wednesday, June 29, 2016 at 2:20:00 AM UTC-4, Nick Cammorato wrote:
>
> Hi Everyone,
>
> Having a few post-update blues, and I'm out of things I can think of to 
> look at. For the most part everything went pretty well during the update. 
> However, while updating my capsules I noticed that (a) posting package 
> profiles started to take forever and (b), more importantly, goferd did 
> not seem to be working.
>
> I noticed my pulp repos weren't syncing and I couldn't issue any commands, 
> so I hopped on a box on the same subnet as the katello server and checked 
> out the katello-agent.
>
>> Jun 28 21:47:11 ipatest.internal systemd[1]: Starting Gofer Agent...
>> Jun 28 21:47:11 ipatest.internal goferd[13681]: [INFO][Thread-1] 
>> gofer.rmi.store:114 - Using: /var/lib/gofer/messaging/pending/demo
>> Jun 28 21:47:11 ipatest.internal goferd[13681]: [WARNING][MainThread] 
>> gofer.agent.plugin:639 - plugin:demo, DISABLED
>> Jun 28 21:47:11 ipatest.internal goferd[13681]: [INFO][Thread-2] 
>> gofer.rmi.store:114 - Using: /var/lib/gofer/messaging/pending/katelloplugin
>> Jun 28 21:47:11 ipatest.internal goferd[13681]: [INFO][Thread-3] 
>> gofer.rmi.store:114 - Using: /var/lib/gofer/messaging/pending/katelloplugin
>> Jun 28 21:47:11 ipatest.internal goferd[13681]: [INFO][MainThread] 
>> gofer.agent.plugin:682 - plugin:katelloplugin loaded using: 
>> /usr/lib/gofer/plugins/katelloplugin.py
>> Jun 28 21:47:11 ipatest.internal goferd[13681]: [INFO][MainThread] 
>> rhsm.connection:778 - Connection built: host=katello.internal port=443 
>> handler=/rhsm auth=identity_cert ca_dir=/etc/rhsm/ca/ verify
>> Jun 28 21:47:17 ipatest.internal goferd[13681]: [INFO][MainThread] 
>> katelloplugin:177 - Using /etc/rhsm/ca/katello-default-ca.pem as the ca 
>> cert for qpid connection
>> Jun 28 21:47:17 ipatest.internal goferd[13681]: [INFO][MainThread] 
>> rhsm.connection:778 - Connection built: host=katello.internal port=443 
>> handler=/rhsm auth=identity_cert ca_dir=/etc/rhsm/ca/ verify
>> Jun 28 21:47:17 ipatest.internal goferd[13681]: Loaded plugins: 
>> fastestmirror, product-id
>> Jun 28 21:47:17 ipatest.internal goferd[13681]: [INFO][MainThread] 
>> katelloplugin:352 - reporting: {'enabled_repos': {'repos': [{'baseurl': ['
>> https://katello.internal/pulp/repos/Resilient_Systems/pro
>> Jun 28 21:47:42 ipatest.internal goferd[13681]: [INFO][MainThread] 
>> gofer.agent.main:87 - agent started.
>> Jun 28 21:47:42 ipatest.internal goferd[13681]: [INFO][worker-0] 
>> gofer.messaging.adapter.connect:28 - connecting: 
>> proton+amqps://katello.internal:5647
>> Jun 28 21:47:42 ipatest.internal goferd[13681]: [INFO][worker-0] 
>> gofer.messaging.adapter.proton.connection:87 - open: URL: 
>> amqps://katello.internal:5647|SSL: ca: /etc/rhsm/ca/katello-default-ca.pem|
>> Jun 28 21:47:42 ipatest.internal goferd[13681]: [INFO][worker-0] root:510 
>> - connecting to katello.internal:5647...
>> Jun 28 21:47:52 ipatest.internal goferd[13681]: [INFO][worker-0] root:559 
>> - Disconnected
>> Jun 28 21:47:52 ipatest.internal goferd[13681]: [ERROR][worker-0] 
>> gofer.messaging.adapter.connect:33 - connect: 
>> proton+amqps://katello.internal:5647, failed: Connection 
>> amqps://katello.internal.resi
>> Jun 28 21:47:52 ipatest.internal goferd[13681]: [INFO][worker-0] 
>> gofer.messaging.adapter.connect:35 - retry in 10 seconds
>> Jun 28 21:48:02 ipatest.internal goferd[13681]: [INFO][worker-0] 
>> gofer.messaging.adapter.connect:28 - connecting: 
>> proton+amqps://katello.internal:5647
>> Jun 28 21:48:03 ipatest.internal goferd[13681]: [INFO][worker-0] 
>> gofer.messaging.adapter.proton.connection:87 - open: URL: 
>> amqps://katello.internal:5647|SSL: ca: /etc/rhsm/ca/katello-default-ca.pem|
>> Jun 28 21:48:03 ipatest.internal goferd[13681]: [INFO][worker-0] root:510 
>> - connecting to katello.internal:5647...
>> Jun 28 21:48:13 ipatest.internal goferd[13681]: [INFO][worker-0] root:559 
>> - Disconnected
>> Jun 28 21:48:13 ipatest.internal goferd[13681]: [ERROR][worker-0] 
>> gofer.messaging.adapter.connect:33 - connect: 
>> proton+amqps://katello.internal:5647, failed: Connection 
>> amqps://katello.internal.resi
>> Jun 28 21:48:13 ipatest.internal goferd[13681]: [INFO][worker-0] 
>> gofer.messaging.adapter.connect:35 - retry in 12 seconds
>
>
> There's no firewall involved here, and I can see a connection establish 
> and then go away on port 5647 on both machines. They can telnet to each 
> other on 5647 as well, and I was able to reproduce the problem with 
> SELinux and firewalld disabled on both.
>
> Curious, I hopped over to katello.internal, enabled debug logging, and 
> looked at the qdrouterd log:
>
>> Tue Jun 28 21:47:42 2016 SERVER (debug) Accepting incoming connection 
>> from ipatest.internal:55332 to 0.0.0.0:5647
>> Tue Jun 28 21:47:42 2016 SERVER (trace) Configuring SSL on incoming 
>> connection from  ipatest.internal:55332 to 0.0.0.0:5647
>> Tue Jun 28 

Re: [foreman-users] [katello 2.4] passenger status broken

2017-08-21 Thread Charlie Derwent
Apologies for resurrecting an old thread, but this looks like bug 
http://projects.theforeman.org/issues/8392, and it is still a problem in 
1.15.3. It appears it was fixed in Satellite 
(https://access.redhat.com/errata/RHBA-2015:1911, BZ 1163380 - 
passenger-status broken on sat6 on rhel7), which uses 
ruby193-rubygem-passenger-4.0.18-20.el7sat.src.rpm, while Foreman is still 
running tfm-rubygem-passenger-4.0.18-9.11.el7.x86_64.rpm.

Apologies if I'm misreading the SCL versioning.
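
If the cause is the same one that BZ describes (on RHEL 7, httpd runs 
with systemd's PrivateTmp, so the Passenger instance directory sits in 
a private /tmp that passenger-status in a login shell can't see), a 
sketch like the following may work around it. Both the glob pattern and 
the PASSENGER_TMPDIR behaviour are assumptions about systemd and 
Passenger 4.x, so verify on your own box:

#!/usr/bin/env python
# Hypothetical workaround: point passenger-status at httpd's PrivateTmp
# directory via PASSENGER_TMPDIR. Run as root, since the private tmp
# directories are not world-readable.
import glob
import os
import subprocess

candidates = glob.glob('/tmp/systemd-private-*httpd*/tmp')
if not candidates:
    raise SystemExit('no httpd PrivateTmp directory found under /tmp')

env = dict(os.environ, PASSENGER_TMPDIR=candidates[0])
subprocess.call(
    ['scl', 'enable', 'tfm',
     '/opt/theforeman/tfm/root/usr/bin/passenger-status'],
    env=env)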

Thanks
Charlie

On Thursday, 6 October 2016 15:59:14 UTC+1, Edson Manners wrote:
>
> Unfortunately no. As I think you've seen in the forums, someone gave me 
> some hints, but they never helped either. It's still an issue for us, as 
> we seem to regularly overrun Katello's ability to process puppet requests 
> and need this to tune the Apache web server. I'd be very interested in 
> whatever you found.
>
> On 10/6/2016 2:24 AM, Matthew Wilmott wrote:
>
> Did this ever get fixed? 
>
> We use telegraf to monitor passenger-status and send to influx/grafana
>
> We have the same issue: regardless of how we call passenger-status, it 
> insists Passenger isn't running...
>
> On Thursday, July 14, 2016 at 6:01:56 AM UTC+10, Edson Manners wrote: 
>>
>> Thanks for the replies Eric. I tried those commands on both the current 
>> production server and a newly rebuilt test server using only the commands 
>> from the katello install page and got the following output: 
>>
>> [root@katello ~]# scl enable tfm 
>> '/opt/theforeman/tfm/root/usr/bin/passenger-status'
>> ERROR: Phusion Passenger doesn't seem to be running.
>> [root@katello-test ~]# scl enable tfm 
>> '/opt/theforeman/tfm/root/usr/bin/passenger-status'
>> ERROR: Phusion Passenger doesn't seem to be running.
>>
>> [root@katello ~]# /usr/sbin/passenger-status 
>> ERROR: Phusion Passenger doesn't seem to be running.
>> [root@katello-test ~]# /usr/sbin/passenger-status 
>> ERROR: Phusion Passenger doesn't seem to be running.
>>
>> If you look closely you'll see that one machine is katello and the other 
>> is katello-test and they both behave the same. 
>>
>>
>>
>> On Thursday, July 7, 2016 at 3:21:52 PM UTC-4, Eric Helms wrote: 
>>>
>>>
>>>
>>> On Tue, Jul 5, 2016 at 10:55 AM, Edson Manners  
>>> wrote:
>>>
 I've been struggling with this issue for a while and finally feel the 
 need to seek external help. 

 We used to run Foreman 1.9.3 on RHEL 7.2. The passenger-status command 
 was used to tune the puppetmaster when the server got overwhelmed.

 [root@foreman ~]# cat /etc/redhat-release 
 Red Hat Enterprise Linux Server release 7.2 (Maipo)
 [root@foreman ~]# which passenger-status
 /usr/bin/passenger-status
 [root@foreman ~]# rpm -q --whatprovides /usr/bin/passenger-status
 rubygem-passenger-4.0.18-9.8.el7.x86_64
 [root@foreman ~]# /usr/bin/passenger-status 
 Version : 4.0.18
 Date: 2016-07-05 10:44:15 -0400
 Instance: 3376
 --- General information ---
 Max pool size : 48
 Processes : 3
 Requests in top-level queue : 0

 --- Application groups ---
 /usr/share/foreman#default:
   App root: /usr/share/foreman
   Requests in queue: 0
  * PID: 18170   Sessions: 0   Processed: 622   Uptime: 4h 24m 28s
    CPU: 0%   Memory: 232M   Last used: 6s ago
 
 ...


 We've moved to katello 2.4 on CentOS 7.2 and now passenger-status no 
 longer works out of the box.

 [root@katello-test emanners]# cat /etc/redhat-release
 CentOS Linux release 7.2.1511 (Core) 
 [root@katello-test emanners]# which passenger-status
 /sbin/passenger-status
 [root@katello-test emanners]# rpm -q --whatprovides 
 /usr/sbin/passenger-status
 passenger-4.0.53-4.el7.x86_64
 [root@katello-test emanners]# /usr/sbin/passenger-status
 ERROR: Phusion Passenger doesn't seem to be running.

 I've managed to find a few posts on Google (not Katello related) that 
 suggest the reason is multiple copies of passenger on the host.
 [root@katello-test ~]# locate passenger-status
 /opt/theforeman/tfm/root/usr/bin/passenger-status

 /opt/theforeman/tfm/root/usr/share/gems/gems/passenger-4.0.18/bin/passenger-status
 /opt/theforeman/tfm/root/usr/share/man/man8/passenger-status.8.gz

>>>
>>> This set of passenger libraries is used for running the Foreman web 
>>> application under Apache within the SCL. To run the passenger-status 
>>> command for the SCL, you'd need to enable the SCL and run it:
>>>
>>> scl enable tfm '/opt/theforeman/tfm/root/usr/bin/passenger-status'
>>>  
>>>
 /usr/sbin/passenger-status
 /usr/share/man/man8/passenger-status.8.gz

>>>
>>> This set of passenger libraries is used for running the puppetmaster 
>>> that is installed on the server by default; it runs outside the SCL 
>>> and should work as normal.

[foreman-users] No facts in Json input for hooks after foreman discovery

2017-08-21 Thread lohit . valleru
Hello,

The issue is that I don't see facts in the input JSON after Foreman 
discovers a VM/bare-metal host.

This is the workflow that I am trying with hooks and Foreman discovery:

VM/bare metal gets discovered -> JSON input to hooks -> hooks use the 
facts from the JSON input to add more facts from the CMDB -> facts get 
uploaded to Foreman.

I have created the following hooks:

/usr/share/foreman/config/hooks/host/discovered/after_create/10-logger.py

ls /usr/share/foreman-community/hooks/
functions.py   functions.pyc  __init__.py

The scripts just read the input JSON and write it out to a temp 
directory, to help me understand the structure (a minimal version is 
sketched below).
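
For reference, a minimal version of that logger hook, assuming 
foreman_hooks' convention of passing the event and object name as 
arguments and the object's JSON on stdin:

#!/usr/bin/env python
# /usr/share/foreman/config/hooks/host/discovered/after_create/10-logger.py
# Dump whatever foreman_hooks hands us so the payload structure can be
# inspected. Event and object name arrive as argv, the JSON on stdin.
import json
import sys

event, name = sys.argv[1], sys.argv[2]

payload = json.load(sys.stdin)
with open('/tmp/hook-%s-%s.json' % (event, name), 'w') as out:
    json.dump(payload, out, indent=2)
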
However, I don't see facts or any other useful information in the input 
JSON:

{
"id": 22,
"name": "mac00163e5426c9",
"last_compile": null,
"last_report": null,
"updated_at": "2017-08-18T20:13:44.058Z",
"created_at": "2017-08-18T20:13:44.058Z",
"root_pass": null,
"architecture_id": null,
"operatingsystem_id": null,
"environment_id": null,
"ptable_id": null,
"medium_id": null,
"build": false,
"comment": null,
"disk": null,
"installed_at": null,
"model_id": null,
"hostgroup_id": null,
"owner_id": null,
"owner_type": null,
"enabled": true,
"puppet_ca_proxy_id": null,
"managed": false,
"use_image": null,
"image_file": null,
"uuid": null,
"compute_resource_id": null,
"puppet_proxy_id": null,
"certname": null,
"image_id": null,
"organization_id": null,
"location_id": null,
"otp": null,
"realm_id": null,
"compute_profile_id": null,
"provision_method": null,
"grub_pass": "",
"global_status": 0,
"lookup_value_matcher": null,
"pxe_loader": null,
"discovery_rule_id": null
}


The workaround that I will have to use to get facts (sketched below) is:
1. Extract the MAC address from the name -> reformat it -> use it to 
query the ID of the host with the discovery API.
2. Use the ID to query the facts of that host.
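
A sketch of that workaround, assuming the stock API v2 paths 
(/api/v2/discovered_hosts and /api/v2/hosts/:id/facts) and a user with 
API access; the URL, credentials and CA bundle path below are 
placeholders:

#!/usr/bin/env python
# Derive the MAC from the discovered host's name, look the host up via
# the discovery API, then fetch its facts. Reads the hook's input JSON
# from stdin.
import json
import sys

import requests  # assumed to be available to the hook

FOREMAN = 'https://foreman.example.com'      # placeholder
AUTH = ('apiuser', 'changeme')               # placeholder
VERIFY = '/etc/pki/tls/certs/ca-bundle.crt'  # placeholder

host = json.load(sys.stdin)

# "mac00163e5426c9" -> "00:16:3e:54:26:c9"
digits = host['name'].replace('mac', '', 1)
mac = ':'.join(digits[i:i + 2] for i in range(0, 12, 2))

resp = requests.get(FOREMAN + '/api/v2/discovered_hosts',
                    params={'search': 'mac = %s' % mac},
                    auth=AUTH, verify=VERIFY)
resp.raise_for_status()
host_id = resp.json()['results'][0]['id']

facts = requests.get('%s/api/v2/hosts/%s/facts' % (FOREMAN, host_id),
                     auth=AUTH, verify=VERIFY)
facts.raise_for_status()
json.dump(facts.json(), sys.stdout, indent=2)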

Do I have to follow the above workaround to get the facts of the 
discovered system, or am I missing something? It would be so much easier 
if I could just get the facts in the input JSON passed to the hook.

I am using the following versions of Foreman and its plugins on CentOS 7:

tfm-rubygem-foreman_setup-5.0.0-1.fm1_13.el7.noarch
foreman-release-1.15.3-1.el7.noarch
foreman-installer-1.15.3-1.el7.noarch
foreman-libvirt-1.15.3-1.el7.noarch
foreman-postgresql-1.15.3-1.el7.noarch
tfm-rubygem-foreman_hooks-0.3.14-1.fm1_15.el7.noarch
foreman-selinux-1.15.3-1.el7.noarch
foreman-debug-1.15.3-1.el7.noarch
foreman-release-scl-3-1.el7.noarch
tfm-rubygem-hammer_cli_foreman-0.10.2-1.el7.noarch
tfm-rubygem-foreman_discovery-9.1.1-1.fm1_15.el7.noarch
foreman-cli-1.15.3-1.el7.noarch
tfm-rubygem-foreman_memcache-0.0.6-1.fm1_15.el7.noarch
foreman-proxy-1.15.3-1.el7.noarch
foreman-1.15.3-1.el7.noarch

Thanks,
Lohit




[foreman-users] TFTP timeout on secondary smartproxy interface

2017-08-21 Thread justin parker
Hello,
I have set up a smartproxy to service multiple VLAN networks via VLAN 
interfaces on the smartproxy. Each of the VLAN interfaces provides the 
same services, DHCP and PXE. However, I am only able to actually PXE 
boot on one of the interfaces; on the others I keep getting TFTP 
timeouts. I can pull IPs on the same interfaces, and I've confirmed 
using a TFTP client that TFTP is working by downloading the pxelinux.0 
file from the smartproxy. I have included a copy of my dhcpd.conf file 
below (IPs have been changed in order to protect the innocent).

# dhcpd.conf
omapi-port 7911;

default-lease-time 43200;
max-lease-time 86400;



ddns-update-style none;

option domain-name "somewhere.com";
option domain-name-servers 10.10.10.14;
option ntp-servers none;

allow booting;
allow bootp;

option fqdn.no-client-update on;  # set the "O" and "S" flag bits
option fqdn.rcode2 255;
option pxegrub code 150 = text;




# Bootfile Handoff
#next-server 10.61.67.90;
option architecture code 93 = unsigned integer 16 ;
if option architecture = 00:06 {
  filename "grub2/shim.efi";
} elsif option architecture = 00:07 {
  filename "grub2/shim.efi";
} elsif option architecture = 00:09 {
  filename "grub2/shim.efi";
} else {
  filename "pxelinux.0";
}

log-facility local7;

include "/etc/dhcp/dhcpd.hosts";

# somewhere.com
subnet 10.61.65.0 netmask 255.255.255.128 {
  pool
  {
range 10.61.65.20 10.61.65.126;
  }

  next-server 10.61.65.126;
  option subnet-mask 255.255.255.128;
  option routers 10.61.65.1;
  option domain-search "somewhere.com";
}
subnet 10.61.67.0 netmask 255.255.255.128 {
  pool
  {
range 10.61.67.20 10.61.67.126;
  }
  next-server 10.61.67.90;
  option subnet-mask 255.255.255.128;
  option routers 10.61.67.1;
  option domain-search "somewhere.com";
}
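
One thing that might narrow this down: on a multihomed server, the TFTP 
daemon can reply from a different source address than the one the 
request was sent to, and PXE firmware tends to treat that as a timeout 
even though an ordinary TFTP client copes. Below is a minimal 
read-request probe (RFC 1350, stdlib Python; both addresses are 
placeholders based on the config above) that can be bound to an address 
on each VLAN to compare what comes back:

#!/usr/bin/env python
# Send a TFTP RRQ for pxelinux.0 from a chosen local address and report
# which address the first reply comes from.
import socket
import struct

SERVER = '10.61.67.90'   # smartproxy address on the VLAN under test
LOCAL = '10.61.67.25'    # this client's address on the same VLAN
FILENAME = 'pxelinux.0'

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind((LOCAL, 0))    # force the source address, as the PXE NIC would
sock.settimeout(5)

# RRQ = opcode 1, then NUL-terminated filename and transfer mode.
rrq = struct.pack('!H', 1) + FILENAME.encode() + b'\0' + b'octet\0'
sock.sendto(rrq, (SERVER, 69))

try:
    data, peer = sock.recvfrom(4 + 512)
    opcode, block = struct.unpack('!HH', data[:4])
    # opcode 3 = DATA, 5 = ERROR; peer[0] is the address the daemon
    # actually answered from, which is the interesting part here.
    print('reply from %s: opcode=%d block=%d, %d payload bytes'
          % (peer[0], opcode, block, len(data) - 4))
except socket.timeout:
    print('timed out waiting for DATA from %s' % SERVER)

If the reply's source address differs from the address queried, pinning 
in.tftpd to each interface (or fixing the return route) would be the 
next thing to check.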



[foreman-users] Re: foreman and ansible playbook

2017-08-21 Thread Michael Klug
I think for me (RHEL) the user is foreman-proxy. I just use the same 
hostkey as for root, so no double import is needed.
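
To confirm which user matters, something like the following reproduces 
what the proxy does, assuming the usual remote execution defaults (the 
foreman-proxy user and the key at 
/usr/share/foreman-proxy/.ssh/id_rsa_foreman_proxy; both are 
assumptions, so adjust to your setup):

#!/usr/bin/env python
# Try the same SSH connection the proxy would make. If this fails with
# the same broken pipe, the problem is the SSH setup for the proxy
# user, not Ansible or Foreman.
import subprocess

HOST = 'darla-hesley.vm.sapify.ch'  # host from the error below
KEY = '/usr/share/foreman-proxy/.ssh/id_rsa_foreman_proxy'

subprocess.check_call([
    'sudo', '-u', 'foreman-proxy',
    'ssh', '-i', KEY, 'root@' + HOST, 'true',
])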

On Tuesday, August 1, 2017 at 10:58:13 AM UTC+2, Arsène Gschwind wrote:
>
> Hi,
>
> I'm trying to use Ansible with Foreman, but when executing an ansible 
> playbook I get the following error:
>
> fatal: [darla-hesley.vm.sapify.ch]: UNREACHABLE! => {"changed": false, "msg": 
> "Failed to connect to the host via ssh: write: Broken pipe\r\n", 
> "unreachable": true}
>   to retry, use: --limit 
> @/tmp/foreman-playbook-9bc23404-2d9b-4c65-ad58-04a799263dc1.retry
>
> When running the playbook directly with ansible from the shell it works, 
> even when doing sudo -u foreman-proxy.
>
> I'm not sure which user Foreman uses to run playbooks. I've tried setting 
> the user in the Foreman Settings -> Ansible tab and also using the 
> ansible_user parameter on the host; neither helped.
> I'm running the following version:
> foreman : Version 1.15.2
> katello: 3.4.2
> foreman-tasks: 0.9.4
> ansible 2.3.1.0
>
> Thanks for any help
> rgds,
> Arsène
>

-- 
You received this message because you are subscribed to the Google Groups 
"Foreman users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to foreman-users+unsubscr...@googlegroups.com.
To post to this group, send email to foreman-users@googlegroups.com.
Visit this group at https://groups.google.com/group/foreman-users.
For more options, visit https://groups.google.com/d/optout.