Re: [foreman-users] Re: Katello 3.0.2 (Saison) Released

2016-08-01 Thread Rick Langston
Thank you for the help on this.

Yes, here is the actual capture.

Command-line output:
[root@dscaprv01 tmp]# systemctl restart pulp_celerybeat.service
[root@dscaprv01 tmp]# systemctl status pulp_celerybeat.service
● pulp_celerybeat.service - Pulp's Celerybeat
   Loaded: loaded (/usr/lib/systemd/system/pulp_celerybeat.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2016-08-01 08:34:20 CDT; 7s ago
 Main PID: 5887 (celery)
   CGroup: /system.slice/pulp_celerybeat.service
           └─5887 /usr/bin/python /usr/bin/celery beat --app=pulp.server.async.celery_instance.celery --scheduler=pulp.server.async.scheduler.Scheduler

Aug 01 08:34:20 dscaprv01.corp.acxiom.net systemd[1]: Started Pulp's Celerybeat.
Aug 01 08:34:20 dscaprv01.corp.acxiom.net systemd[1]: Starting Pulp's Celerybeat...
Aug 01 08:34:25 dscaprv01.corp.acxiom.net pulp[5887]: celery.beat:INFO: beat: Starting...
Aug 01 08:34:25 dscaprv01.corp.acxiom.net pulp[5887]: pulp.server.db.connection:INFO: Attempting to connect to localhost:27017
Aug 01 08:34:25 dscaprv01.corp.acxiom.net pulp[5887]: pulp.server.async.scheduler:INFO: Worker Timeout Monitor Started
Aug 01 08:34:25 dscaprv01.corp.acxiom.net pulp[5887]: pulp.server.db.connection:INFO: Attempting to connect to localhost:27017
Aug 01 08:34:26 dscaprv01.corp.acxiom.net pulp[5887]: pulp.server.db.connection:INFO: Write concern for Mongo connection: {}
[root@dscaprv01 tmp]# systemctl status pulp_celerybeat.service
● pulp_celerybeat.service - Pulp's Celerybeat
   Loaded: loaded (/usr/lib/systemd/system/pulp_celerybeat.service; enabled; vendor preset: disabled)
   Active: inactive (dead) since Mon 2016-08-01 08:34:31 CDT; 18s ago
  Process: 5887 ExecStart=/usr/bin/celery beat --app=pulp.server.async.celery_instance.celery --scheduler=pulp.server.async.scheduler.Scheduler (code=exited, status=0/SUCCESS)
 Main PID: 5887 (code=exited, status=0/SUCCESS)

Aug 01 08:34:30 dscaprv01.corp.acxiom.net pulp[5887]: celery.beat:CRITICAL: (5887-79264) raise Timeout("Connection attach timed out")
Aug 01 08:34:30 dscaprv01.corp.acxiom.net pulp[5887]: celery.beat:CRITICAL: (5887-79264) Timeout: Connection attach timed out
Aug 01 08:34:30 dscaprv01.corp.acxiom.net celery[5887]: celery beat v3.1.11 (Cipater) is starting.
Aug 01 08:34:30 dscaprv01.corp.acxiom.net celery[5887]: [celery ASCII-art startup banner]
Aug 01 08:34:30 dscaprv01.corp.acxiom.net celery[5887]: Configuration ->
Aug 01 08:34:30 dscaprv01.corp.acxiom.net celery[5887]: . broker -> qpid://dscaprv01.corp.acxiom.net:5671//
Aug 01 08:34:30 dscaprv01.corp.acxiom.net celery[5887]: . loader -> celery.loaders.app.AppLoader
Aug 01 08:34:30 dscaprv01.corp.acxiom.net celery[5887]: . scheduler -> pulp.server.async.scheduler.Scheduler
Aug 01 08:34:30 dscaprv01.corp.acxiom.net celery[5887]: . logfile -> [stderr]@%INFO
Aug 01 08:34:30 dscaprv01.corp.acxiom.net celery[5887]: . maxinterval -> now (0s)
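
Side note: "systemctl status" only shows the tail of the unit's journal, which
is why the CRITICAL lines above are cut off. A minimal sketch for capturing the
full log around a restart (assuming an EL7/systemd host like the one above):

  # Restart, give beat time to hit the broker timeout, then dump the journal.
  systemctl restart pulp_celerybeat.service
  sleep 15
  journalctl -u pulp_celerybeat.service --since "10 min ago" --no-pager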

/var/log/messages output:

==> /var/log/messages <==
Aug  1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168) beat raised exception : Timeout('Connection attach timed out',)
Aug  1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168) Traceback (most recent call last):
Aug  1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168)   File "/usr/lib/python2.7/site-packages/celery/apps/beat.py", line 112, in start_scheduler
Aug  1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168)     beat.start()
Aug  1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168)   File "/usr/lib/python2.7/site-packages/celery/beat.py", line 462, in start
Aug  1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168)     interval = self.scheduler.tick()
Aug  1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168)   File "/usr/lib/python2.7/site-packages/pulp/server/async/scheduler.py", line 265, in tick
Aug  1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168)     ret = self.call_tick(self, celerybeat_name)
Aug  1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168)   File "/usr/lib/python2.7/site-packages/pulp/server/async/scheduler.py", line 230, in call_tick
Aug  1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168)     ret = super(Scheduler, self).tick()
Aug  1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168)   File "/usr/lib/python2.7/site-packages/celery/beat.py", line 220, in tick
Aug  1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168)     next_time_to_run = self.maybe_due(entry, self.publisher)
Aug  1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168)   File "/usr/lib/python2.7/site-packages/kombu/utils/__init__.py", line 325, in __get__
Aug  1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168)     value = obj.__dict__[self.__name__] = self.__get(obj)
Aug  1 08:45:03 dscaprv01 pulp: celery.beat:CRITICAL: (7070-59168)   File "/usr/lib/python2.7/site-packages/celery/beat.py", line 342, in publisher
Aug  1 08:45:03 

Re: [foreman-users] Re: Katello 3.0.2 (Saison) Released

2016-08-01 Thread Chris Duryee


On 08/01/2016 09:39 AM, Rick Langston wrote:
> The backend services all say OK, but when I run a katello-service status I 
> can see that celerybeat fails its status check. If I restart the service and 
> immediately check the status it says running, but checking the status again 
> shows it timed out. 
> 

Is this the sequence of events?

* service pulp_celerybeat start (outputs success)
* service pulp_celerybeat status (outputs success)
* wait some number of seconds
* service pulp_celerybeat status (outputs error)
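
If so, a minimal sketch that reproduces the check from the command line
(assuming the stock unit name from your output):

  systemctl start pulp_celerybeat
  systemctl is-active pulp_celerybeat   # expect "active"
  sleep 30
  systemctl is-active pulp_celerybeat   # "inactive" here would confirm the silent exit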

> 
> No memory errors noted
> 
> 
> [root@dscaprv01 tmp]# systemctl status pulp_celerybeat.service
> ● pulp_celerybeat.service - Pulp's Celerybeat
>    Loaded: loaded (/usr/lib/systemd/system/pulp_celerybeat.service; enabled; vendor preset: disabled)
>    Active: inactive (dead) since Mon 2016-08-01 08:34:31 CDT; 18s ago
>   Process: 5887 ExecStart=/usr/bin/celery beat --app=pulp.server.async.celery_instance.celery --scheduler=pulp.server.async.scheduler.Scheduler (code=exited, status=0/SUCCESS)
>  Main PID: 5887 (code=exited, status=0/SUCCESS)
> 
> Aug 01 08:34:30 dscaprv01.corp.acxiom.net pulp[5887]: celery.beat:CRITICAL: (5887-79264) raise Timeout("Connection attach timed out")
> Aug 01 08:34:30 dscaprv01.corp.acxiom.net pulp[5887]: celery.beat:CRITICAL: (5887-79264) Timeout: Connection attach timed out
> Aug 01 08:34:30 dscaprv01.corp.acxiom.net celery[5887]: celery beat v3.1.11 (Cipater) is starting.
> Aug 01 08:34:30 dscaprv01.corp.acxiom.net celery[5887]: [celery ASCII-art startup banner]
> Aug 01 08:34:30 dscaprv01.corp.acxiom.net celery[5887]: Configuration ->
> Aug 01 08:34:30 dscaprv01.corp.acxiom.net celery[5887]: . broker -> qpid://dscaprv01.corp.acxiom.net:5671//
> Aug 01 08:34:30 dscaprv01.corp.acxiom.net celery[5887]: . loader -> celery.loaders.app.AppLoader
> Aug 01 08:34:30 dscaprv01.corp.acxiom.net celery[5887]: . scheduler -> pulp.server.async.scheduler.Scheduler
> Aug 01 08:34:30 dscaprv01.corp.acxiom.net celery[5887]: . logfile -> [stderr]@%INFO
> Aug 01 08:34:30 dscaprv01.corp.acxiom.net celery[5887]: . maxinterval -> now (0s)
> 
> 
> On Monday, August 1, 2016 at 8:22:46 AM UTC-5, Chris Duryee wrote:
>>
>>
>>
>> On 08/01/2016 08:50 AM, Rick Langston wrote: 
>>> I do see this issue in messages but not sure if it's related: 
>>>
>>> Aug  1 07:25:16 dscaprv01 pulp: celery.beat:CRITICAL: (28691-76416) beat raised exception : Timeout('Connection attach timed out',) 
>>> Aug  1 07:25:16 dscaprv01 pulp: celery.beat:CRITICAL: (28691-76416) Traceback (most recent call last): 
>>> Aug  1 07:25:16 dscaprv01 pulp: celery.beat:CRITICAL: (28691-76416)   File "/usr/lib/python2.7/site-packages/celery/apps/beat.py", line 112, in start_scheduler 
>>> Aug  1 07:25:16 dscaprv01 pulp: celery.beat:CRITICAL: (28691-76416)     beat.start() 
>>
>>
>> That is the likely culprit. :) 
>>
>> Next time your task hangs, check the "/about" page on your Katello 
>> instance and ensure everything under "Backend System Status" says "OK" 
>> with no further message. 
>>
>> If there are pulp errors, a possible quick fix is to ensure qpidd is 
>> still running, then restart pulp_workers, pulp_celerybeat and 
>> pulp_resource_manager. I suspect your task will get picked up after that. 
>>
>> Also, please check dmesg for out-of-memory errors. There are some other 
>> possible things we can check, but I would be curious first about the 
>> backend system status output. 
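>>
>> A rough sketch of those checks, assuming stock service names on an EL7 
>> Katello box: 
>>
>>   systemctl is-active qpidd || systemctl start qpidd 
>>   systemctl restart pulp_workers pulp_celerybeat pulp_resource_manager 
>>   dmesg | grep -iE 'killed process|out of memory' 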
>>
>>>
>>> On Monday, August 1, 2016 at 7:12:26 AM UTC-5, Chris Duryee wrote: 



 On 08/01/2016 07:54 AM, Rick Langston wrote: 
> Hello 
>
> I seem to be having some odd behavior with this version. With a fresh 
> install on CentOS 7 I have set up a product which completes normally, but 
> when I discover a repo and save it I get these metadata tasks that seem 
> to just wait forever. Any ideas what could be the culprit? 
>

 Are there any related errors in /var/log/messages? 

>
> Action: 
>
> Actions::Pulp::Repository::DistributorPublish 
>
> State: waiting for Pulp to start the task 
> Input: 
>
> {"pulp_id"=>"test-centos-6_updates_x86_64", 
>  "distributor_type_id"=>"yum_distributor", 
>  "source_pulp_id"=>nil, 
>  "dependency"=>nil, 
>  "remote_user"=>"admin", 
>  "remote_cp_user"=>"admin", 
>  "locale"=>"en"} 
>
> Output: 
>
> {"pulp_tasks"=> 
>   [{"exception"=>nil, 
>     "task_type"=>"pulp.server.managers.repo.publish.publish", 
>     "_href"=>"/pulp/api/v2/tasks/a40815d5-9ba4-463a-8216-338cdcc4b1cc/", 
>     "task_id"=>"a40815d5-9ba4-463a-8216-338cdcc4b1cc", 
>     "tags"=> 
>      ["pulp:repository:test-centos-6_updates_x86_64", "pulp:action:publish"], 
>     "finish_time"=>nil, 
>     "_ns"=>"task_status", 
>     "start_time"=>nil, 
>     "traceback"=>nil, 
> 

Re: [foreman-users] [katello] Cannot Register Content Hosts

2016-08-01 Thread John Mitsch
Good to hear!

John Mitsch
Red Hat Engineering
(860)-967-7285
irc: jomitsch

On Mon, Aug 1, 2016 at 7:25 AM,  wrote:

> Yep, that's fixed it. Thank you for your assistance.
>
> On Friday, 29 July 2016 20:09:46 UTC+1, John Mitsch wrote:
>>
>> Looks like that step was already executed, can you try going to the
>> dynflow console (in the task details page), skipping that step, and
>> resuming the task? Let me know if you have any questions.
>>
>> John Mitsch
>> Red Hat Engineering
>> (860)-967-7285
>> irc: jomitsch
>>
>> On Fri, Jul 29, 2016 at 10:32 AM,  wrote:
>>
>>> No, resume fails. In /var/log/messages I see
>>>
>>>  pulp: pulp.server.webservices.middleware.exception:INFO: Duplicate
>>> resource: 687b8d5b-68bb-440e-9f97-eede3449f67d
>>>
>>>
>>>
>>> On Thursday, 28 July 2016 17:59:20 UTC+1, John Mitsch wrote:

 Are the broken hosts still blocked by a stopped task? If so, can you
 resume that task? If the task creating them hasn't completed, then deleting
 them may cause some issues.

 John Mitsch
 Red Hat Engineering
 (860)-967-7285
 irc: jomitsch

 On Thu, Jul 28, 2016 at 12:40 PM,  wrote:

> Certificate and key were missing from /etc/pki/pulp. Seems like this
> step was missed/failed during the install - maybe due to the system clock
> being out?
>
> I've put some in place (and fixed system clock) and can register new
> hosts now, but the two broken hosts are completely wedged. Any way to
> forcibly remove them?
>
>
> On Thursday, 28 July 2016 15:55:20 UTC+1, John Mitsch wrote:
>>
>> Do you see anything related in /var/log/messages?
>>
>> John Mitsch
>> Red Hat Engineering
>> (860)-967-7285
>> irc: jomitsch
>>
>> On Thu, Jul 28, 2016 at 10:37 AM,  wrote:
>>
>>> The Action is Actions::Pulp::Consumer::Create. I tried doing a
>>> resume on the task and the error is now 409 Conflict. There is a large
>>> backtrace.
>>>
>>> /opt/theforeman/tfm/root/usr/share/gems/gems/rest-client-1.6.7/lib/restclient/abstract_response.rb:48:in `return!'
>>> /opt/theforeman/tfm/root/usr/share/gems/gems/runcible-1.7.2/lib/runcible/base.rb:79:in `block in get_response'
>>> /opt/theforeman/tfm/root/usr/share/gems/gems/rest-client-1.6.7/lib/restclient/request.rb:228:in `call'
>>> /opt/theforeman/tfm/root/usr/share/gems/gems/rest-client-1.6.7/lib/restclient/request.rb:228:in `process_result'
>>> /opt/theforeman/tfm/root/usr/share/gems/gems/rbovirt-0.0.37/lib/restclient_ext/request.rb:50:in `block in transmit'
>>> /opt/rh/rh-ruby22/root/usr/share/ruby/net/http.rb:853:in `start'
>>> /opt/theforeman/tfm/root/usr/share/gems/gems/rbovirt-0.0.37/lib/restclient_ext/request.rb:44:in `transmit'
>>> /opt/theforeman/tfm/root/usr/share/gems/gems/rest-client-1.6.7/lib/restclient/request.rb:64:in `execute'
>>> /opt/theforeman/tfm/root/usr/share/gems/gems/rest-client-1.6.7/lib/restclient/request.rb:33:in `execute'
>>> /opt/theforeman/tfm/root/usr/share/gems/gems/rest-client-1.6.7/lib/restclient/resource.rb:67:in `post'
>>> /opt/theforeman/tfm/root/usr/share/gems/gems/runcible-1.7.2/lib/runcible/base.rb:78:in `get_response'
>>> /opt/theforeman/tfm/root/usr/share/gems/gems/runcible-1.7.2/lib/runcible/base.rb:66:in `call'
>>> /opt/theforeman/tfm/root/usr/share/gems/gems/runcible-1.7.2/lib/runcible/resources/consumer.rb:20:in `create'
>>> /opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.0.2/app/lib/actions/pulp/consumer/create.rb:13:in `run'
>>> /opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-0.8.11/lib/dynflow/action.rb:506:in `block (3 levels) in execute_run'
>>> /opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-0.8.11/lib/dynflow/middleware/stack.rb:26:in `call'
>>> /opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-0.8.11/lib/dynflow/middleware/stack.rb:26:in `pass'
>>> /opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-0.8.11/lib/dynflow/middleware.rb:17:in `pass'
>>> /opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-0.8.11/lib/dynflow/middleware.rb:30:in `run'
>>> /opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-0.8.11/lib/dynflow/middleware/stack.rb:22:in `call'
>>> /opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-0.8.11/lib/dynflow/middleware/stack.rb:26:in `pass'
>>> /opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-0.8.11/lib/dynflow/middleware.rb:17:in `pass'
>>> /opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.0.2/app/lib/actions/middleware/remote_action.rb:16:in `block in run'
>>> /opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.0.2/app/lib/actions/middleware/remote_action.rb:40:in 

Re: [foreman-users] [katello] Cannot Register Content Hosts

2016-08-01 Thread apgriffiths79
Yep, that's fixed it. Thank you for your assistance.

On Friday, 29 July 2016 20:09:46 UTC+1, John Mitsch wrote:
>
> Looks like that step was already executed, can you try going to the 
> dynflow console (in the task details page), skipping that step, and 
> resuming the task? Let me know if you have any questions.
>
> John Mitsch
> Red Hat Engineering
> (860)-967-7285
> irc: jomitsch
>
> On Fri, Jul 29, 2016 at 10:32 AM,  
> wrote:
>
>> No, resume fails. In /var/log/messages I see
>>
>>  pulp: pulp.server.webservices.middleware.exception:INFO: Duplicate 
>> resource: 687b8d5b-68bb-440e-9f97-eede3449f67d
>>
>>
>>
>> On Thursday, 28 July 2016 17:59:20 UTC+1, John Mitsch wrote:
>>>
>>> Are the broken hosts still blocked by a stopped task? If so, can you 
>>> resume that task? If the task creating them hasn't completed, then deleting 
>>> them may cause some issues.
>>>
>>> John Mitsch
>>> Red Hat Engineering
>>> (860)-967-7285
>>> irc: jomitsch
>>>
>>> On Thu, Jul 28, 2016 at 12:40 PM,  wrote:
>>>
 Certificate and key were missing from /etc/pki/pulp. Seems like this 
 step was missed/failed during the install - maybe due to the system clock 
 being out? 

 I've put some in place (and fixed system clock) and can register new 
 hosts now, but the two broken hosts are completely wedged. Any way to 
 forcibly remove them?


 On Thursday, 28 July 2016 15:55:20 UTC+1, John Mitsch wrote:
>
> Do you see anything related in /var/log/messages?
>
> John Mitsch
> Red Hat Engineering
> (860)-967-7285
> irc: jomitsch
>
> On Thu, Jul 28, 2016 at 10:37 AM,  wrote:
>
>> The Action is Actions::Pulp::Consumer::Create. I tried doing a resume 
>> on the task and the error is now 409 Conflict. There is a large 
>> backtrace.
>>
>> /opt/theforeman/tfm/root/usr/share/gems/gems/rest-client-1.6.7/lib/restclient/abstract_response.rb:48:in `return!'
>> /opt/theforeman/tfm/root/usr/share/gems/gems/runcible-1.7.2/lib/runcible/base.rb:79:in `block in get_response'
>> /opt/theforeman/tfm/root/usr/share/gems/gems/rest-client-1.6.7/lib/restclient/request.rb:228:in `call'
>> /opt/theforeman/tfm/root/usr/share/gems/gems/rest-client-1.6.7/lib/restclient/request.rb:228:in `process_result'
>> /opt/theforeman/tfm/root/usr/share/gems/gems/rbovirt-0.0.37/lib/restclient_ext/request.rb:50:in `block in transmit'
>> /opt/rh/rh-ruby22/root/usr/share/ruby/net/http.rb:853:in `start'
>> /opt/theforeman/tfm/root/usr/share/gems/gems/rbovirt-0.0.37/lib/restclient_ext/request.rb:44:in `transmit'
>> /opt/theforeman/tfm/root/usr/share/gems/gems/rest-client-1.6.7/lib/restclient/request.rb:64:in `execute'
>> /opt/theforeman/tfm/root/usr/share/gems/gems/rest-client-1.6.7/lib/restclient/request.rb:33:in `execute'
>> /opt/theforeman/tfm/root/usr/share/gems/gems/rest-client-1.6.7/lib/restclient/resource.rb:67:in `post'
>> /opt/theforeman/tfm/root/usr/share/gems/gems/runcible-1.7.2/lib/runcible/base.rb:78:in `get_response'
>> /opt/theforeman/tfm/root/usr/share/gems/gems/runcible-1.7.2/lib/runcible/base.rb:66:in `call'
>> /opt/theforeman/tfm/root/usr/share/gems/gems/runcible-1.7.2/lib/runcible/resources/consumer.rb:20:in `create'
>> /opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.0.2/app/lib/actions/pulp/consumer/create.rb:13:in `run'
>> /opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-0.8.11/lib/dynflow/action.rb:506:in `block (3 levels) in execute_run'
>> /opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-0.8.11/lib/dynflow/middleware/stack.rb:26:in `call'
>> /opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-0.8.11/lib/dynflow/middleware/stack.rb:26:in `pass'
>> /opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-0.8.11/lib/dynflow/middleware.rb:17:in `pass'
>> /opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-0.8.11/lib/dynflow/middleware.rb:30:in `run'
>> /opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-0.8.11/lib/dynflow/middleware/stack.rb:22:in `call'
>> /opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-0.8.11/lib/dynflow/middleware/stack.rb:26:in `pass'
>> /opt/theforeman/tfm/root/usr/share/gems/gems/dynflow-0.8.11/lib/dynflow/middleware.rb:17:in `pass'
>> /opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.0.2/app/lib/actions/middleware/remote_action.rb:16:in `block in run'
>> /opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.0.2/app/lib/actions/middleware/remote_action.rb:40:in `block in as_remote_user'

Re: [foreman-users] Katello inter sync option

2016-08-01 Thread Unix SA
Yes, that could be a workaround, but it is not an appropriate solution. Also, 
exporting the content again takes a lot of space; I think it should create 
links, similar to Pulp.

Can anyone on the Katello development team please comment on whether this is 
going to improve in the future?

On Saturday, 30 July 2016 00:18:22 UTC+5:30, Chris Duryee wrote:
>
>
>
> On 07/28/2016 11:21 PM, Unix SA wrote: 
> > Hi, 
> > 
> >>> Can you give more detail about your use case? It sounds like there are 
> >>> multiple users in the same org, each wanting to manage their own CV of 
> >>> exported content, and each with their own set of repos to import on the 
> >>> downstream Katello server? 
> > 
> > That's correct. I have one satellite in Engineering and one in Production 
> > (downstream). We download content from Red Hat to our Engineering 
> > satellite, test it, and release it to Production. 
> > 
> > So for example, we have RHEL 6.5 kickstart, RHEL 6.5 rpms, and RHEL 6 
> > satellite tools in one content view, and RHEL 7.2 kickstart, RHEL 7.2 rpms, 
> > and RHEL 7 satellite tools in another content view; similar things will 
> > happen for RHEL 7.3 later. Apart from that we have other content views for 
> > middleware, db, and so on. 
> > 
> > Now I want to periodically allow the Production satellite to sync content 
> > from the Engineering satellite. Using the current approach, I will only be 
> > able to sync either RHEL 6 or RHEL 7, unless I change the URL. 
> > 
> > Ideally this export/import feature should allow individual repository sync 
> > by providing a URL; that may work for custom content, but I am not sure 
> > how it will work for RHEL content. 
> > 
> > 
>
> That is allowed today, but via hammer. If you use 'hammer repository 
> sync' with the '--source-url' option, you can set it to any URL desired. 
>
> Unfortunately, it's difficult to import from multiple CVs without 
> changing the URL, since the CDN URL is tied to the organization/manifest 
> and not to a particular product or repo. One workaround would be to set 
> your CDN URLs and enable repos to get everything created, then use the 
> --source-url method above once the repos were created to sync content 
> down. 
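>
> For example, something like this (a sketch with placeholder repo id and 
> URL; the subcommand is spelled "synchronize" in some hammer versions): 
>
>   hammer repository synchronize --id 2 \ 
>     --source-url "https://upstream.example.com/pub/export/myrepo/" 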
>
>
> > Regards, 
> > DJ 
> > 
> > On Thursday, 28 July 2016 19:38:45 UTC+5:30, Chris Duryee wrote: 
> >> 
> >> 
> >> 
> >> On 07/27/2016 11:47 PM, Unix SA wrote: 
> >>> Well, that's not the right way. If it allows exporting a CV version, it 
> >>> should allow different CVs. Also, I don't want to mix my content in a 
> >>> single CV, as different people manage their own CVs; and why would I 
> >>> include my RHEL 6.5 and RHEL 7 in a single content view? 
> >>> 
> >> 
> >> The tricky part is that if you are using the CDN URL to manage the 
> >> import location, the exported contents need to be in the same tree 
> >> format as what Katello expects. 
> >> 
> >> Can you give more detail about your use case? It sounds like there are 
> >> multiple users in the same org, each wanting to manage their own CV of 
> >> exported content, and each with their own set of repos to import on the 
> >> downstream Katello server? 
> >> 
> >> 
> >>> Regards, 
> >>> DJ 
> >>> 
> >>> On Wednesday, 27 July 2016 18:36:36 UTC+5:30, Chris Duryee wrote: 
>  
>  
>  
>  On 07/27/2016 04:38 AM, Unix SA wrote: 
> > 
> > Hello, 
> > 
> > I am following the procedure below to sync contents from my upstream 
> > satellite to my downstream one: 
> > 
> > https://access.redhat.com/documentation/en/red-hat-satellite/6.2-beta/paged/content-management-guide/appendix-c-synchronizing-content-between-satellite-servers 
> > 
> > Now, after updating my organization to the URL mentioned in the document, 
> > I am able to sync one CV: 
> > 
> > """ 
> > $ hammer organization update \ 
> > --name "Mega Subsidiary" \ 
> > --redhat-repository-url \ 
> > http://megacorp.com/pub/cdn-latest 
> > 
> > Organization updated 
> > """ 
> > 
> > But for another CV the URL is different. Do I have to update the URL every 
> > time I want to sync a different CV, or am I missing something? 
> > 
> > The procedure I followed is: 
> > 1) set the export path to /var/lib/html/pub/export 
> > 2) set selinux permissions 
> > 3) set "immediate" for my RHEL7 kickstart repository 
> > 4) export the repository with "hammer repository export --id 2 --export-to-iso 0" 
> > 5) on the downstream, update the manifest URL to 
> > https://upstream/pub/export/ORG-Red_Hat_Enterprise_Linux_Server-Red_Hat_Enterprise_Linux_7_Server_Kickstart_x86_64_7_2/ORG/Library/ 
> > 
> > Sync works for the "kickstart" repo, but then how do I sync the "rpm 
> > repository" now, as the URL will be different for it? 
> > 
> > Regards, 
> > DJ 
> > 
> 
>  Typically, the easiest way to do this is to create a single CV 

Re: [foreman-users] foreman api call redirecting

2016-08-01 Thread Suresh P
Yes, it is due to curl compatibility. I had run the curl command on a CentOS
6.4 OS, which does not support https://foreman-url, and the latest version of
Foreman will not support http.

Right now I'm using the same command with https from a CentOS 7.x OS, which
is working perfectly.
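
For anyone hitting the same redirect, a minimal sketch of the working call
(the same request as quoted below, switched to https; -L follows any
remaining redirect):

  curl -k -L -u username:password \
    -H "Accept: version=2,application/json" \
    -H "Content-Type: application/json" \
    -X PUT -d '{"host":{ "hostgroup_name": ["Free"] }}' \
    https://foremanurl/api/hosts/172.x.x.x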

On Fri, Jul 22, 2016 at 9:35 PM, Jeff Sparrow wrote:

> I am getting the same issue, did you find a solution by chance?
>
>
> On Tuesday, March 22, 2016 at 8:16:45 AM UTC-5, Suresh P wrote:
>>
>> Getting following error in  /var/log/foreman/production.log
>>
>> | Started GET "/api/domains?page=2" for 172.29.248.108 at 2016-03-22
>> 06:14:30 -0700
>> 2016-03-22 06:14:30 [app] [I]   Rendered api/v2/reports/create.json.rabl
>> (7.6ms)
>> 2016-03-22 06:14:30 [app] [I] Completed 201 Created in 45ms (Views: 7.6ms
>> | ActiveRecord: 7.9ms)
>> 2016-03-22 06:14:30 [app] [I] Processing by
>> Api::V2::DomainsController#index as JSON
>> 2016-03-22 06:14:30 [app] [I]   Parameters: {"page"=>"2", "apiv"=>"v2"}
>> 2016-03-22 06:14:30 [app] [I] Redirected to
>> https://us1-foreman.zohonoc.com/api/domains?page=2
>> 2016-03-22 06:14:30 [app] [I] Filter chain halted as # rendered or redirected
>> 2016-03-22 06:14:30 [app] [I] Completed 301 Moved Permanently in 1ms
>> (ActiveRecord: 0.0m
>>
>> Regards,
>> Suresh
>>
>>
>>
>> On Monday, 21 March 2016 21:50:58 UTC+5:30, Suresh P wrote:
>>>
>>>
>>> Any help!
>>>
>>> On Thursday, 17 March 2016 13:21:54 UTC+5:30, Suresh P wrote:

 Hi Ohad,

 If i change it to https i'm getting following error.

 curl: (35) NSS: client certificate not found (nickname not specified)

 Regards,
 Suresh


 On Thursday, 17 March 2016 13:11:49 UTC+5:30, ohad wrote:
>
>
>
> On Thu, Mar 17, 2016 at 9:39 AM, Suresh P  wrote:
>
>> Hi,
>>
>> I have used following api call for hostgroup changing purpose in
>> standalone foreman setup.   It worked well.
>>
>> curl -k -u username:password -H "Accept: version=2,application/json"
>> -H "Content-Type: application/json" -X PUT -d '{"host":{ 
>> "hostgroup_name":
>> ["Free"] }}' http://foremanurl/api/hosts/172.x.x.x
>>
>> Currently I have moved my setup to HA (behind a load balancer). But
>> now I'm getting the following message. Kindly help me to fix this.
>>
>> You are being redirected. (link: https://foremanurl/api/hosts/172.x.x.x)
>>
>
> I assume you should change to https://foremanurl vs http.
>
> Ohad
>
>>
>> Regards,
>> Suresh
>>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Foreman users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to foreman-users+unsubscr...@googlegroups.com.
To post to this group, send email to foreman-users@googlegroups.com.
Visit this group at https://groups.google.com/group/foreman-users.
For more options, visit https://groups.google.com/d/optout.