[Yahoo-eng-team] [Bug 1928922] Re: evacuation tests in nova-live-migration post hook fails with VirtualInterfaceCreateException due to vif-plugged event received before nova starts waiting for it.

2022-02-24 Thread Bogdan Dobrelya
*** This bug is a duplicate of bug 1901707 ***
https://bugs.launchpad.net/bugs/1901707

** This bug has been marked a duplicate of bug 1901707
   race condition on port binding vs instance being resumed for live-migrations

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1928922

Title:
  evacuation tests in nova-live-migration post hook fails with
  VirtualInterfaceCreateException due to vif-plugged event received
  before nova starts waiting for it.

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  Example POST_FAILURE run:
  https://zuul.opendev.org/t/openstack/build/4d0d5072c1e0479db616a211e0afda42/logs

  https://zuul.opendev.org/t/openstack/build/4d0d5072c1e0479db616a211e0afda42/log/controller/logs/screen-n-cpu.txt#13322

  May 14 16:30:35.248552 ubuntu-focal-ovh-bhs1-0024684290 nova-compute[94366]: ERROR nova.compute.manager [None req-778f8336-8e29-4282-a112-91a876303fe3 demo admin] [instance: 021396f6-40ff-434c-acbe-7092a6b1bcd9] Setting instance vm_state to ERROR: nova.exception.VirtualInterfaceCreateException: Virtual Interface creation failed

  Traceback (most recent call last):
    File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 7188, in _create_guest_with_network
      guest = self._create_guest(
    File "/usr/lib/python3.8/contextlib.py", line 120, in __exit__
      next(self.gen)
    File "/opt/stack/nova/nova/compute/manager.py", line 479, in wait_for_instance_event
      actual_event = event.wait()
    File "/usr/local/lib/python3.8/dist-packages/eventlet/event.py", line 125, in wait
      result = hub.switch()
    File "/usr/local/lib/python3.8/dist-packages/eventlet/hubs/hub.py", line 313, in switch
      return self.greenlet.switch()
  eventlet.timeout.Timeout: 300 seconds

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
    File "/opt/stack/nova/nova/compute/manager.py", line 10090, in _error_out_instance_on_exception
  [...]
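The underlying race (tracked as bug 1901707) is that the network-vif-plugged event can arrive before nova registers a waiter for it. A minimal stdlib sketch of the prepare-then-wait pattern, with hypothetical names, shows how an early event is dropped and the later wait can only time out:

```python
import threading

class ExternalEventWaiter:
    """Minimal sketch (hypothetical names) of nova's prepare-then-wait
    pattern for external events. An event delivered before
    prepare_for_event() registers a waiter is simply dropped, which is
    the race behind the 300-second timeout in the traceback above."""

    def __init__(self):
        self._lock = threading.Lock()
        self._pending = {}  # event tag -> threading.Event

    def prepare_for_event(self, tag):
        # Must run BEFORE the notification can arrive for the wait to work.
        with self._lock:
            self._pending[tag] = threading.Event()

    def deliver(self, tag):
        # Called when neutron sends the notification.
        with self._lock:
            ev = self._pending.get(tag)
        if ev is None:
            return False  # nobody is waiting yet: the event is lost
        ev.set()
        return True

    def wait_for_event(self, tag, timeout):
        # nova bounds the real wait with eventlet.timeout.Timeout(300).
        return self._pending[tag].wait(timeout)

w = ExternalEventWaiter()
# Event arrives before the waiter is registered: it is dropped...
assert w.deliver('network-vif-plugged') is False
w.prepare_for_event('network-vif-plugged')
# ...so the subsequent wait can only time out.
assert w.wait_for_event('network-vif-plugged', timeout=0.05) is False
```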

[Yahoo-eng-team] [Bug 1944779] Re: VirtualInterfaceCreateException due to "Timeout waiting for [('network-vif-plugged'..."

2021-11-30 Thread Bogdan Dobrelya
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1944779

Title:
  VirtualInterfaceCreateException due to "Timeout waiting for
  [('network-vif-plugged'..."

Status in neutron:
  New
Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  Seeing this occasionally in the gate recently: instance spawn fails
  with VirtualInterfaceCreateException after timing out (300 seconds)
  waiting for the 'network-vif-plugged' event from neutron. The event
  never arrives (before or after we begin waiting).

  Here are log excerpts showing how it looks [1]:

  Sep 23 06:10:44.144477 ubuntu-focal-rax-iad-0026631025 nova-compute[110797]: DEBUG nova.compute.manager [None req-6d0b7dcf-7f96-47d2-8673-626abdd275b7 tempest-TestSnapshotPattern-20482801 tempest-TestSnapshotPattern-20482801-project] [instance: 487a3a37-fe41-447d-bc77-9ab3fce47473] Preparing to wait for external event network-vif-plugged-9129f5b9-2940-44c2-8d44-04c46df5fe49 {{(pid=110797) prepare_for_instance_event /opt/stack/nova/nova/compute/manager.py:280}}

  [...]

  Sep 23 06:15:44.961239 ubuntu-focal-rax-iad-0026631025 nova-compute[110797]: WARNING nova.virt.libvirt.driver [None req-6d0b7dcf-7f96-47d2-8673-626abdd275b7 tempest-TestSnapshotPattern-20482801 tempest-TestSnapshotPattern-20482801-project] [instance: 487a3a37-fe41-447d-bc77-9ab3fce47473] Timeout waiting for [('network-vif-plugged', '9129f5b9-2940-44c2-8d44-04c46df5fe49')] for instance with vm_state building and task_state spawning: eventlet.timeout.Timeout: 300 seconds
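nova bounds that wait with eventlet.timeout.Timeout(300) (the vif_plugging_timeout option). A stdlib sketch of the same bounded wait, with hypothetical names, expresses the deadline via threading.Event.wait() instead:

```python
import threading

def wait_for_vif_plugged(event: threading.Event, timeout: float = 300.0) -> None:
    """Stdlib sketch (hypothetical helper) of the bounded wait: nova
    wraps event.wait() in eventlet.timeout.Timeout(timeout); here
    threading.Event.wait() expresses the same deadline directly."""
    if not event.wait(timeout):
        # nova turns this into VirtualInterfaceCreateException when
        # vif_plugging_is_fatal=True.
        raise TimeoutError(
            f"Timeout waiting for [('network-vif-plugged', ...)] "
            f"after {timeout} seconds")

# When neutron's notification does arrive, the wait returns promptly:
ev = threading.Event()
threading.Timer(0.01, ev.set).start()
wait_for_vif_plugged(ev, timeout=5.0)

# When it never arrives, the bounded wait raises instead of hanging:
try:
    wait_for_vif_plugged(threading.Event(), timeout=0.05)
except TimeoutError:
    pass
```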

  
  Logstash query:

  http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Instance%20failed%20to%20spawn%3A%20nova.exception.VirtualInterfaceCreateException%3A%20Virtual%20Interface%20creation%20failed%5C%22%20AND%20tags%3A%5C%22screen-n-cpu.txt%5C%22=7d

  [1] https://paste.opendev.org/show/809546

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1944779/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1276694] Re: Openstack services should support SIGHUP signal

2021-09-01 Thread Bogdan Dobrelya
** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1276694

Title:
  Openstack services should support SIGHUP signal

Status in Cinder:
  Invalid
Status in Glance:
  Fix Released
Status in OpenStack Heat:
  Fix Released
Status in OpenStack Identity (keystone):
  Invalid
Status in Murano:
  Confirmed
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo-incubator:
  Invalid
Status in oslo.config:
  Fix Released
Status in oslo.log:
  Fix Released
Status in oslo.service:
  Fix Released
Status in Sahara:
  Fix Released

Bug description:
  1) In order to more effectively manage unlinked-but-open (lsof +L1)
  log file descriptors without restarting the services, the SIGHUP
  signal should be accepted by every OpenStack service.

  That would allow, e.g., logrotate jobs to gracefully HUP services
  after their log files have been rotated. The only option we have for
  now is to force a service restart, which is quite a poor option from
  the point of view of continuous service availability.

  Note: according to http://en.wikipedia.org/wiki/Unix_signal, on SIGHUP
  "... Many daemons will reload their configuration files and reopen
  their logfiles instead of exiting when receiving this signal."

  Currently Murano and Glance are out of sync with Oslo SIGHUP support.

  There is also the following issue for some services of OpenStack
  projects with synced SIGHUP support:

  2) heat-api-cfn, heat-api, heat-api-cloudwatch, keystone: it looks
  like the synced code is never executed, so SIGHUP is not supported for
  them. Here is a simple test scenario:

  2.1) modify /site-packages//openstack/common/service.py:

    def _sighup_supported():
    +   LOG.warning("SIGHUP is supported: {0}".format(hasattr(signal, 'SIGHUP')))
        return hasattr(signal, 'SIGHUP')

  2.2) restart the service and check its logs for "SIGHUP is
  supported"; if the service really supports it, the appropriate
  messages will be present in the logs.

  2.3) issue kill -HUP against the service's PID and check the logs for
  "SIGHUP is supported" and "Caught SIGHUP"; if the service really
  supports it, the appropriate messages will be present in the logs.
  Besides that, the service should remain started and its main thread
  PID should not change.

  e.g.

  2.a) heat-engine supports HUPing:
  # service openstack-heat-engine restart
  <132>Apr 11 14:03:48 node-3 heat-heat.openstack.common.service WARNING: SIGHUP is supported: True

  2.b) But heat-api doesn't know how to HUP:
  # service openstack-heat-api restart
  <134>Apr 11 14:06:22 node-3 heat-heat.api INFO: Starting Heat ReST API on 0.0.0.0:8004
  <134>Apr 11 14:06:22 node-3 heat-eventlet.wsgi.server INFO: Starting single process server

  2.c) HUPing heat-engine is OK:
  # pid=$(cat /var/run/heat/openstack-heat-engine.pid); kill -HUP $pid && echo $pid
  16512
  <134>Apr 11 14:12:15 node-3 heat-heat.openstack.common.service INFO: Caught SIGHUP, exiting
  <132>Apr 11 14:12:15 node-3 heat-heat.openstack.common.service WARNING: SIGHUP is supported: True
  <134>Apr 11 14:12:15 node-3 heat-heat.openstack.common.rpc.common INFO: Connected to AMQP server on ...
  # service openstack-heat-engine status
  openstack-heat-engine (pid 16512) is running...

  2.d) But a HUPed heat-api is dead now ;(
  # kill -HUP $(cat /var/run/heat/openstack-heat-api.pid)
  (no new logs)
  # service openstack-heat-api status
  openstack-heat-api dead but pid file exists

  3) nova-cert, nova-novncproxy, nova-objectstore, nova-consoleauth,
  nova-scheduler: unlike case 2, after the kill -HUP command is issued
  there is a "Caught SIGHUP" message in the logs, BUT the associated
  service dies anyway. Instead, the service should remain started and
  its main thread PID should not change (similar to case 2.c).

  So it looks like there is still a lot to be done to ensure POSIX
  signal-handling compliance in OpenStack :-)
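The logrotate use case from 1) can be sketched with a plain Python signal handler that reopens file-based log handlers on SIGHUP. This is a simplified stand-in for what oslo.service provides, not its actual implementation; names here are illustrative:

```python
import logging
import signal

def reopen_log_files(signum, frame):
    """On SIGHUP, close every FileHandler; the logging module reopens
    the file (at its original path) on the next emit(), so a descriptor
    pointing at a rotated-away file is released without a restart."""
    for name in [None] + list(logging.Logger.manager.loggerDict):
        for handler in logging.getLogger(name).handlers:
            if isinstance(handler, logging.FileHandler):
                handler.close()

# SIGHUP does not exist on Windows, hence the hasattr() guard that the
# bug report's _sighup_supported() helper performs.
if hasattr(signal, 'SIGHUP'):
    signal.signal(signal.SIGHUP, reopen_log_files)
```

After `mv svc.log svc.log.1`, sending the process HUP makes the next log record land in a fresh svc.log while the service keeps running with the same PID.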

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1276694/+subscriptions




[Yahoo-eng-team] [Bug 1829062] Re: nova placement api non-responsive due to eventlet error

2019-08-01 Thread Bogdan Dobrelya
** Changed in: tripleo
   Status: Fix Released => Triaged

** Changed in: tripleo
 Assignee: Bogdan Dobrelya (bogdando) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1829062

Title:
  nova placement api non-responsive due to eventlet error

Status in OpenStack Compute (nova):
  New
Status in StarlingX:
  Fix Released
Status in tripleo:
  Triaged

Bug description:
  In a StarlingX setup, we're running a nova docker image based on nova stable/stein as of May 6.
  We're seeing nova-compute processes stalling and not creating resource providers with placement.

  openstack hypervisor list
  +----+---------------------+-----------------+-----------------+-------+
  | ID | Hypervisor Hostname | Hypervisor Type | Host IP         | State |
  +----+---------------------+-----------------+-----------------+-------+
  | 5  | worker-1            | QEMU            | 192.168.206.247 | down  |
  | 8  | worker-2            | QEMU            | 192.168.206.211 | down  |
  +----+---------------------+-----------------+-----------------+-------+

  Observe this error, related to eventlet, in the nova-placement-api logs at the same time:

  2019-05-14 00:44:03.636229 Traceback (most recent call last):
  2019-05-14 00:44:03.636276   File "/var/lib/openstack/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 460, in fire_timers
  2019-05-14 00:44:03.636536     timer()
  2019-05-14 00:44:03.636560   File "/var/lib/openstack/lib/python2.7/site-packages/eventlet/hubs/timer.py", line 59, in __call__
  2019-05-14 00:44:03.636647     cb(*args, **kw)
  2019-05-14 00:44:03.636661   File "/var/lib/openstack/lib/python2.7/site-packages/eventlet/semaphore.py", line 147, in _do_acquire
  2019-05-14 00:44:03.636774     waiter.switch()
  2019-05-14 00:44:03.636792 error: cannot switch to a different thread

  This is new behaviour for us in stable/stein; we suspect it is due to the merge of this eventlet-related change on May 4:
  https://github.com/openstack/nova/commit/6755034e109079fb5e8bbafcd611a919f0884d14

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1829062/+subscriptions



[Yahoo-eng-team] [Bug 1829062] Re: nova placement api non-responsive due to eventlet error

2019-08-01 Thread Bogdan Dobrelya
** Changed in: tripleo
   Status: In Progress => Fix Released

** Tags added: queens-backport-potential rocky-backport-potential stein-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1829062

Title:
  nova placement api non-responsive due to eventlet error

Status in OpenStack Compute (nova):
  New
Status in StarlingX:
  Fix Released
Status in tripleo:
  Fix Released




[Yahoo-eng-team] [Bug 1829062] Re: nova placement api non-responsive due to eventlet error

2019-07-03 Thread Bogdan Dobrelya
So for TripleO, we're about to implement
https://bugs.launchpad.net/tripleo/+bug/1829062/comments/7
based on this MPM/event tuning example from the past:
https://review.opendev.org/#/c/72666/

** Also affects: tripleo
   Importance: Undecided
   Status: New

** Changed in: tripleo
   Status: New => In Progress

** Changed in: tripleo
   Importance: Undecided => High

** Changed in: tripleo
 Assignee: (unassigned) => Bogdan Dobrelya (bogdando)

** Changed in: tripleo
Milestone: None => train-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1829062

Title:
  nova placement api non-responsive due to eventlet error

Status in OpenStack Compute (nova):
  New
Status in StarlingX:
  Fix Released
Status in tripleo:
  In Progress




[Yahoo-eng-team] [Bug 1831315] Re: nova-manage cell_v2 discover_hosts fails for IPv6 - ValueError: invalid literal for int() with base 10 - db connection URI gets its brackets eaten

2019-06-24 Thread Bogdan Dobrelya
** No longer affects: oslo.config

** Changed in: nova
   Status: In Progress => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1831315

Title:
  nova-manage cell_v2 discover_hosts fails for IPv6 - ValueError:
  invalid literal for int() with base 10 - db connection URI gets its
  brackets eaten

Status in OpenStack Compute (nova):
  New
Status in tripleo:
  In Progress

Bug description:
  file: undercloud.conf

  [DEFAULT]

  enable_routed_networks = true
  overcloud_domain_name = localdomain
  undercloud_ntp_servers = pool.ntp.org
  undercloud_hostname = undercloud.rdocloud
  local_interface = eth1
  local_mtu = 1450
  local_ip = fd12:3456:789a:1::1/64 
  undercloud_public_host = fd12:3456:789a:1::2
  undercloud_admin_host = fd12:3456:789a:1::3
  undercloud_nameservers = 2001:4860:4860::
  local_subnet = ctlplane-subnet
  subnets = ctlplane-subnet

  [ctlplane-subnet]
  cidr = fd12:3456:789a:1::/64
  dhcp_start = fd12:3456:789a:1::10
  dhcp_end = fd12:3456:789a:1::20
  gateway = fd12:3456:789a:1::1
  inspection_iprange = fd12:3456:789a:1::30,fd12:3456:789a:1::40
  masquerade = false

  Deploying the undercloud fails. Nova API log errors:

  ERROR nova.context Traceback (most recent call last):
    File "/usr/lib/python2.7/site-packages/nova/context.py", line 442, in gather_result
      result = fn(*args, **kwargs)
    File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper
      result = fn(cls, context, *args, **kwargs)
    File "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 603, in get_all
      db_services = db.service_get_all(context, disabled=disabled)
    File "/usr/lib/python2.7/site-packages/nova/db/api.py", line 131, in service_get_all
      return IMPL.service_get_all(context, disabled)
    File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 228, in wrapped
      with ctxt_mgr.reader.using(context):
    File "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__
      return self.gen.next()
    File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 1064, in _transaction_scope
      context=context) as resource:
    File "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__
      return self.gen.next()
    File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 659, in _session
      bind=self.connection, mode=self.mode)
    File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 418, in _create_session
      self._start()
    File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 510, in _start
      engine_args, maker_args)
    File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 534, in _setup_for_connection
      sql_connection=sql_connection, **engine_kwargs)
    File "/usr/lib/python2.7/site-packages/debtcollector/renames.py", line 43, in decorator
      return wrapped(*args, **kwargs)
    File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py", line 153, in create_engine
      url = sqlalchemy.engine.url.make_url(sql_connection)
    File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/url.py", line 225, in make_url
      return _parse_rfc1738_args(name_or_url)
    File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/url.py", line 284, in _parse_rfc1738_args
      return URL(name, **components)
    File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/url.py", line 71, in __init__
      self.port = int(port)
  ERROR nova.context ValueError: invalid literal for int() with base 10: '3456:789a:1::3'
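The failure is a plain URL-parsing problem: an unbracketed IPv6 host makes the first colon look like the host:port separator, so the rest of the address is handed to int(). The stdlib URL parser behaves analogously to sqlalchemy's make_url() here, so both the failure and the bracketed fix can be shown without sqlalchemy (credentials below are made up):

```python
from urllib.parse import urlsplit

# Unbracketed IPv6 host: everything after the first colon is taken as
# the port, giving the same class of ValueError as the traceback above.
bad = urlsplit('mysql+pymysql://nova:pw@fd12:3456:789a:1::3/nova')
try:
    bad.port  # tries int('3456:789a:1::3')
except ValueError:
    pass

# RFC 3986 bracket notation keeps the address in one piece:
good = urlsplit('mysql+pymysql://nova:pw@[fd12:3456:789a:1::3]/nova')
assert good.hostname == 'fd12:3456:789a:1::3'
assert good.port is None
```

So the fix on the deployment side is to render the DB connection URI with the IPv6 address wrapped in square brackets.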

  
  nova.conf:

  [DEFAULT]
  rootwrap_config=/etc/nova/rootwrap.conf
  compute_driver=ironic.IronicDriver
  allow_resize_to_same_host=False
  vif_plugging_is_fatal=True
  vif_plugging_timeout=300
  force_raw_images=True
  reserved_host_memory_mb=0
  ram_allocation_ratio=1.0
  sync_power_state_interval=-1
  heal_instance_info_cache_interval=60
  instance_name_template=instance-%08x
  force_config_drive=True
  my_ip=fd12:3456:789a:1::1
  host=undercloud.localdomain
  ssl_only=False
  state_path=/var/lib/nova
  report_interval=10
  service_down_time=60
  

[Yahoo-eng-team] [Bug 1831315] Re: Udercloud IPv6 - ValueError: invalid literal for int() with base 10: '3456:789a:1::3' - db connection URI gets its brackets eaten

2019-06-07 Thread Bogdan Dobrelya
** Changed in: tripleo
   Status: Triaged => Won't Fix

** Summary changed:

- Udercloud IPv6 - ValueError: invalid literal for int() with base 10: '3456:789a:1::3' - db connection URI gets its brackets eaten
+ nova-manage cell_v2 discover_hosts fails for IPv6 - ValueError: invalid literal for int() with base 10: '3456:789a:1::3' - db connection URI gets its brackets eaten

** Summary changed:

- nova-manage cell_v2 discover_hosts fails for IPv6 - ValueError: invalid literal for int() with base 10: '3456:789a:1::3' - db connection URI gets its brackets eaten
+ nova-manage cell_v2 discover_hosts fails for IPv6 - ValueError: invalid literal for int() with base 10 - db connection URI gets its brackets eaten

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1831315

Title:
  nova-manage cell_v2 discover_hosts fails for IPv6 - ValueError:
  invalid literal for int() with base 10 - db connection URI gets its
  brackets eaten

Status in OpenStack Compute (nova):
  Confirmed
Status in oslo.config:
  Confirmed
Status in tripleo:
  Won't Fix


[Yahoo-eng-team] [Bug 1831315] Re: Udercloud IPv6 - ValueError: invalid literal for int() with base 10: '3456:789a:1::3' - db connection URI gets its brackets eaten

2019-06-06 Thread Bogdan Dobrelya
** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1831315

Title:
  Udercloud IPv6 - ValueError: invalid literal for int() with base 10:
  '3456:789a:1::3' - db connection URI gets its brackets eaten

Status in OpenStack Compute (nova):
  Confirmed
Status in oslo.config:
  Confirmed
Status in tripleo:
  Triaged

Bug description:
  file: undercloud.conf

  [DEFAULT]

  enable_routed_networks = true
  overcloud_domain_name = localdomain
  undercloud_ntp_servers = pool.ntp.org
  undercloud_hostname = undercloud.rdocloud
  local_interface = eth1
  local_mtu = 1450
  local_ip = fd12:3456:789a:1::1/64 
  undercloud_public_host = fd12:3456:789a:1::2
  undercloud_admin_host = fd12:3456:789a:1::3
  undercloud_nameservers = 2001:4860:4860::
  local_subnet = ctlplane-subnet
  subnets = ctlplane-subnet

  [ctlplane-subnet]
  cidr = fd12:3456:789a:1::/64
  dhcp_start = fd12:3456:789a:1::10
  dhcp_end = fd12:3456:789a:1::20
  gateway = fd12:3456:789a:1::1
  inspection_iprange = fd12:3456:789a:1::30,fd12:3456:789a:1::40
  masquerade = false

  Deploying the undercloud fails. Nova API log errors:

  ERROR nova.context Traceback (most recent call last):
  ERROR nova.context   File "/usr/lib/python2.7/site-packages/nova/context.py", line 442, in gather_result
  ERROR nova.context     result = fn(*args, **kwargs)
  ERROR nova.context   File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper
  ERROR nova.context     result = fn(cls, context, *args, **kwargs)
  ERROR nova.context   File "/usr/lib/python2.7/site-packages/nova/objects/service.py", line 603, in get_all
  ERROR nova.context     db_services = db.service_get_all(context, disabled=disabled)
  ERROR nova.context   File "/usr/lib/python2.7/site-packages/nova/db/api.py", line 131, in service_get_all
  ERROR nova.context     return IMPL.service_get_all(context, disabled)
  ERROR nova.context   File "/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py", line 228, in wrapped
  ERROR nova.context     with ctxt_mgr.reader.using(context):
  ERROR nova.context   File "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__
  ERROR nova.context     return self.gen.next()
  ERROR nova.context   File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 1064, in _transaction_scope
  ERROR nova.context     context=context) as resource:
  ERROR nova.context   File "/usr/lib64/python2.7/contextlib.py", line 17, in __enter__
  ERROR nova.context     return self.gen.next()
  ERROR nova.context   File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 659, in _session
  ERROR nova.context     bind=self.connection, mode=self.mode)
  ERROR nova.context   File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 418, in _create_session
  ERROR nova.context     self._start()
  ERROR nova.context   File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 510, in _start
  ERROR nova.context     engine_args, maker_args)
  ERROR nova.context   File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 534, in _setup_for_connection
  ERROR nova.context     sql_connection=sql_connection, **engine_kwargs)
  ERROR nova.context   File "/usr/lib/python2.7/site-packages/debtcollector/renames.py", line 43, in decorator
  ERROR nova.context     return wrapped(*args, **kwargs)
  ERROR nova.context   File "/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py", line 153, in create_engine
  ERROR nova.context     url = sqlalchemy.engine.url.make_url(sql_connection)
  ERROR nova.context   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/url.py", line 225, in make_url
  ERROR nova.context     return _parse_rfc1738_args(name_or_url)
  ERROR nova.context   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/url.py", line 284, in _parse_rfc1738_args
  ERROR nova.context     return URL(name, **components)
  ERROR nova.context   File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/url.py", line 71, in __init__
  ERROR nova.context     self.port = int(port)
  ERROR nova.context ValueError: invalid literal for int() with base 10: '3456:789a:1::3'
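For illustration, the root cause can be reproduced with the standard library alone (connection strings here are invented; this is not the actual oslo.db code path): once the brackets around the IPv6 host are lost, a naive host:port split hands the rest of the address to int(), which fails exactly as in the log above.

```python
from urllib.parse import urlsplit

# Unbracketed IPv6 host: everything after the first ':' of the host part
# is mistaken for the port, as in the sqlalchemy `self.port = int(port)` frame.
authority = "fd12:3456:789a:1::3"          # brackets already eaten
host, _, port = authority.partition(":")   # naive host:port split -> host "fd12"
try:
    int(port)
except ValueError as exc:
    print(exc)  # invalid literal for int() with base 10: '3456:789a:1::3'

# With RFC 3986 brackets preserved, stdlib URL parsing gets it right,
# which is why the bug title says the URI "gets its brackets eaten".
good = urlsplit("mysql+pymysql://nova:secret@[fd12:3456:789a:1::3]:3306/nova")
print(good.hostname, good.port)  # fd12:3456:789a:1::3 3306
```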

  
  nova.conf:

  [DEFAULT]
  rootwrap_config=/etc/nova/rootwrap.conf
  compute_driver=ironic.IronicDriver
  allow_resize_to_same_host=False
  vif_plugging_is_fatal=True
  vif_plugging_timeout=300
  force_raw_images=True
  reserved_host_memory_mb=0
  ram_allocation_ratio=1.0
  sync_power_state_interval=-1
  heal_instance_info_cache_interval=60
  instance_name_template=instance-%08x
  force_config_drive=True
  my_ip=fd12:3456:789a:1::1
  host=undercloud.localdomain
  ssl_only=False
  state_path=/var/lib/nova
  

[Yahoo-eng-team] [Bug 1715374] Re: Reloading compute with SIGHUP prevents instances from booting

2018-09-10 Thread Bogdan Dobrelya
** Also affects: tripleo
   Importance: Undecided
   Status: New

** Changed in: tripleo
   Status: New => In Progress

** Changed in: tripleo
   Importance: Undecided => Critical

** Changed in: tripleo
Milestone: None => stein-1

** Changed in: tripleo
 Assignee: (unassigned) => Bogdan Dobrelya (bogdando)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1715374

Title:
  Reloading compute with SIGHUP prevents instances from booting

Status in OpenStack Compute (nova):
  Confirmed
Status in tripleo:
  In Progress

Bug description:
  Booting a new instance always fails on a compute node whose nova-
  compute service has received SIGHUP (SIGHUP is used as the trigger for
  reloading mutable options).

  == nova/compute/manager.py ==
  def cancel_all_events(self):
      if self._events is None:
          LOG.debug('Unexpected attempt to cancel events during shutdown.')
          return
      our_events = self._events
      # NOTE(danms): Block new events
      self._events = None    # <--- Set self._events to "None"
      ...
  =

This causes a NovaException whenever prepare_for_instance_event() is
called afterwards, which is why network allocation fails.

  == nova/compute/manager.py ==
  def prepare_for_instance_event(self, instance, event_name):
      ...
      if self._events is None:
          # NOTE(danms): We really should have a more specific error
          # here, but this is what we use for our default error case
          raise exception.NovaException('In shutdown, no new events '
                                        'can be scheduled')
  =
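The interaction quoted above can be condensed into a runnable sketch (method bodies abridged and the waiter bookkeeping invented; the real manager does much more) showing why every boot after SIGHUP fails:

```python
class NovaException(Exception):
    pass

# Condensed sketch of the nova/compute/manager.py behaviour quoted above:
# once cancel_all_events() runs, the tracker never accepts new waiters.
class InstanceEvents(object):
    def __init__(self):
        self._events = {}               # instance -> {event_name: waiter}

    def cancel_all_events(self):
        # NOTE(danms): Block new events
        self._events = None             # never reset afterwards -> the bug

    def prepare_for_instance_event(self, instance, event_name):
        if self._events is None:
            raise NovaException('In shutdown, no new events '
                                'can be scheduled')
        return self._events.setdefault(instance, {}).setdefault(
            event_name, object())

events = InstanceEvents()
events.cancel_all_events()              # runs as part of SIGHUP handling
try:
    events.prepare_for_instance_event('inst-1', 'network-vif-plugged')
except NovaException as exc:
    print(exc)  # In shutdown, no new events can be scheduled
```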

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1715374/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1785568] Re: Multiple migration requests for same vm might fail

2018-08-07 Thread Bogdan Dobrelya
I believe this should be fixed in Nova Placement; I'm not sure it
belongs to tripleo.

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Incomplete

** Changed in: tripleo
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1785568

Title:
  Multiple migration requests for same vm might fail

Status in OpenStack Compute (nova):
  Incomplete
Status in tripleo:
  Won't Fix

Bug description:
  If there are multiple migration requests for the same instance,
  the placement inventory might not be cleaned up in time,
  causing subsequent migration requests to this compute to fail:

  Allocation for VCPU on resource provider
  b6d1973f-bc39-4ea0-8d28-15f988f762e1 violates min_unit, max_unit, or
  step_size. Requested: 5, min_unit: 1, max_unit: 4, step_size: 1
  Placement API returning an error response: Unable to allocate
  inventory: Unable to create allocation for 'VCPU' on resource provider
  The requested amount would violate inventory constraints
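The failure is placement's inventory unit-constraint check; a rough sketch of that check (assumed semantics, not the actual placement code) shows why "Requested: 5" fails against "max_unit: 4":

```python
def violates_unit_constraints(requested, min_unit, max_unit, step_size):
    # Rough sketch of the min_unit/max_unit/step_size check named in the
    # error message above (assumed semantics, not the placement source).
    return (requested < min_unit
            or requested > max_unit
            or requested % step_size != 0)

# Values from the error: Requested: 5, min_unit: 1, max_unit: 4, step_size: 1
print(violates_unit_constraints(5, 1, 4, 1))   # True  -> allocation rejected
print(violates_unit_constraints(4, 1, 4, 1))   # False -> would be accepted
```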

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1785568/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1777475] Re: Undercloud vm in state error after update of the undercloud.

2018-06-25 Thread Bogdan Dobrelya
We need a similar fix for t-h-t/puppet in order to fix this for the
containerized undercloud, which is going to be the default installation
method in Rocky. The instack-only fix is not complete; reopening.

** Changed in: tripleo
   Status: Fix Released => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1777475

Title:
  Undercloud vm in state error after update of the undercloud.

Status in OpenStack Compute (nova):
  New
Status in tripleo:
  Triaged

Bug description:
  Hi,

  after an update of the undercloud, the undercloud vm is in error:

  [stack@undercloud-0 ~]$ openstack server list
  +--------------------------------------+--------------+--------+------------------------+----------------+------------+
  | ID                                   | Name         | Status | Networks               | Image          | Flavor     |
  +--------------------------------------+--------------+--------+------------------------+----------------+------------+
  | 9f80c38a-9f33-4a18-88e0-b89776e62150 | compute-0    | ERROR  | ctlplane=192.168.24.18 | overcloud-full | compute    |
  | e87efe17-b939-4df2-af0c-8e2effd58c95 | controller-1 | ERROR  | ctlplane=192.168.24.9  | overcloud-full | controller |
  | 5a3ea20c-75e8-49fe-90b6-edad01fc0a48 | controller-2 | ERROR  | ctlplane=192.168.24.13 | overcloud-full | controller |
  | ba0f26e7-ec2c-4e61-be8e-05edf00ce78a | controller-0 | ERROR  | ctlplane=192.168.24.8  | overcloud-full | controller |
  +--------------------------------------+--------------+--------+------------------------+----------------+------------+

  
  Originally found starting there 
https://bugzilla.redhat.com/show_bug.cgi?id=1590297#c14

  It boils down to an ordering issue between openstack-ironic-conductor
  and openstack-nova-compute; a simple reproducer is:

  sudo systemctl stop openstack-ironic-conductor
  sudo systemctl restart openstack-nova-compute

  on the undercloud.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1777475/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1748658] Re: Restarting Neutron containers which make use of network namespaces doesn't work

2018-02-12 Thread Bogdan Dobrelya
For containerized services deployed with tripleo, it's addressed in
https://review.openstack.org/#/c/542858/

** Also affects: tripleo
   Importance: Undecided
   Status: New

** Changed in: tripleo
   Status: New => In Progress

** Changed in: tripleo
Milestone: None => queens-rc1

** Changed in: tripleo
   Importance: Undecided => High

** Changed in: tripleo
 Assignee: (unassigned) => Brent Eagles (beagles)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1748658

Title:
  Restarting Neutron containers which make use of network namespaces
  doesn't work

Status in neutron:
  New
Status in tripleo:
  In Progress

Bug description:
  When DHCP, L3, Metadata or OVN-Metadata containers are restarted, they
  can't recreate their previous network namespaces:

  
  [heat-admin@overcloud-novacompute-0 neutron]$ sudo docker restart 8559f5a7fa45
  8559f5a7fa45


  [heat-admin@overcloud-novacompute-0 neutron]$ tail -f /var/log/containers/neutron/networking-ovn-metadata-agent.log
  2018-02-09 08:34:41.059 5 CRITICAL neutron [-] Unhandled error: ProcessExecutionError: Exit code: 2; Stdin: ; Stdout: ; Stderr: RTNETLINK answers: Invalid argument
  2018-02-09 08:34:41.059 5 ERROR neutron Traceback (most recent call last):
  2018-02-09 08:34:41.059 5 ERROR neutron   File "/usr/bin/networking-ovn-metadata-agent", line 10, in 
  2018-02-09 08:34:41.059 5 ERROR neutron     sys.exit(main())
  2018-02-09 08:34:41.059 5 ERROR neutron   File "/usr/lib/python2.7/site-packages/networking_ovn/cmd/eventlet/agents/metadata.py", line 17, in main
  2018-02-09 08:34:41.059 5 ERROR neutron     metadata_agent.main()
  2018-02-09 08:34:41.059 5 ERROR neutron   File "/usr/lib/python2.7/site-packages/networking_ovn/agent/metadata_agent.py", line 38, in main
  2018-02-09 08:34:41.059 5 ERROR neutron     agt.start()
  2018-02-09 08:34:41.059 5 ERROR neutron   File "/usr/lib/python2.7/site-packages/networking_ovn/agent/metadata/agent.py", line 147, in start
  2018-02-09 08:34:41.059 5 ERROR neutron     self.sync()
  2018-02-09 08:34:41.059 5 ERROR neutron   File "/usr/lib/python2.7/site-packages/networking_ovn/agent/metadata/agent.py", line 56, in wrapped
  2018-02-09 08:34:41.059 5 ERROR neutron     return f(*args, **kwargs)
  2018-02-09 08:34:41.059 5 ERROR neutron   File "/usr/lib/python2.7/site-packages/networking_ovn/agent/metadata/agent.py", line 169, in sync
  2018-02-09 08:34:41.059 5 ERROR neutron     metadata_namespaces = self.ensure_all_networks_provisioned()
  2018-02-09 08:34:41.059 5 ERROR neutron   File "/usr/lib/python2.7/site-packages/networking_ovn/agent/metadata/agent.py", line 350, in ensure_all_networks_provisioned
  2018-02-09 08:34:41.059 5 ERROR neutron     netns = self.provision_datapath(datapath)
  2018-02-09 08:34:41.059 5 ERROR neutron   File "/usr/lib/python2.7/site-packages/networking_ovn/agent/metadata/agent.py", line 294, in provision_datapath
  2018-02-09 08:34:41.059 5 ERROR neutron     veth_name[0], veth_name[1], namespace)
  2018-02-09 08:34:41.059 5 ERROR neutron   File "/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 182, in add_veth
  2018-02-09 08:34:41.059 5 ERROR neutron     self._as_root([], 'link', tuple(args))
  2018-02-09 08:34:41.059 5 ERROR neutron   File "/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 94, in _as_root
  2018-02-09 08:34:41.059 5 ERROR neutron     namespace=namespace)
  2018-02-09 08:34:41.059 5 ERROR neutron   File "/usr/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 102, in _execute
  2018-02-09 08:34:41.059 5 ERROR neutron     log_fail_as_error=self.log_fail_as_error)
  2018-02-09 08:34:41.059 5 ERROR neutron   File "/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 151, in execute
  2018-02-09 08:34:41.059 5 ERROR neutron     raise ProcessExecutionError(msg, returncode=returncode)
  2018-02-09 08:34:41.059 5 ERROR neutron ProcessExecutionError: Exit code: 2; Stdin: ; Stdout: ; Stderr: RTNETLINK answers: Invalid argument
  2018-02-09 08:34:41.059 5 ERROR neutron 
  2018-02-09 08:34:41.177 21 INFO oslo_service.service [-] Parent process has died unexpectedly, exiting
  2018-02-09 08:34:41.178 21 INFO eventlet.wsgi.server [-] (21) wsgi exited, is_accepting=True

  
  An easy way to reproduce the bug:

  [heat-admin@overcloud-novacompute-0 ~]$ sudo docker exec -u root -it
  5c5f254a9321bd74b5911f46acb9513574c2cd9a3c59805a85cffd960bcc864d
  /bin/bash

  [root@overcloud-novacompute-0 /]# ip netns a my_netns
  [root@overcloud-novacompute-0 /]# exit

  [heat-admin@overcloud-novacompute-0 ~]$ sudo ip netns
  [heat-admin@overcloud-novacompute-0 ~]$ sudo docker restart 
5c5f254a9321bd74b5911f46acb9513574c2cd9a3c59805a85cffd960bcc864d
  5c5f254a9321bd74b5911f46acb9513574c2cd9a3c59805a85cffd960bcc864d

  

[Yahoo-eng-team] [Bug 1276694] Re: Openstack services should support SIGHUP signal

2015-09-19 Thread Bogdan Dobrelya
Added Oslo.service according to this 
http://lists.openstack.org/pipermail/openstack-dev/2015-September/074745.html
"oslo.service is responsible for catching/handling signals"

** Also affects: oslo.service
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1276694

Title:
  Openstack services should support SIGHUP signal

Status in Glance:
  Fix Released
Status in heat:
  Fix Released
Status in Keystone:
  Confirmed
Status in murano:
  Confirmed
Status in neutron:
  In Progress
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo-incubator:
  Invalid
Status in oslo.config:
  New
Status in oslo.log:
  New
Status in oslo.service:
  New
Status in Sahara:
  In Progress

Bug description:
  1) In order to more effectively manage unlinked but still-open log
  file descriptors (lsof +L1) without restarting the services, the
  SIGHUP signal should be accepted by every OpenStack service.

  That would allow, e.g., logrotate jobs to gracefully HUP services after
  their log files have been rotated. The only option we have for now is
  to force a service restart, which is quite a poor option from the point
  of view of continuous service availability.
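The logrotate workflow asked for here can be illustrated with a short, self-contained sketch (POSIX-only; the file names and the in-process rename/HUP are illustrative, not OpenStack code): a daemon that reopens its log file on SIGHUP keeps logging into the fresh file after rotation.

```python
import os
import signal
import tempfile

# Illustrative daemon log setup (paths invented for the demo).
log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, "daemon.log")
log = open(log_path, "a")

def on_sighup(signum, frame):
    # Reopen the log file: after logrotate renames the old file, this
    # picks up a fresh inode at the original path instead of exiting.
    global log
    log.close()
    log = open(log_path, "a")

signal.signal(signal.SIGHUP, on_sighup)

log.write("before rotation\n"); log.flush()
os.rename(log_path, log_path + ".1")        # what logrotate does
os.kill(os.getpid(), signal.SIGHUP)         # logrotate's postrotate HUP
log.write("after rotation\n"); log.flush()  # lands in the fresh file
log.close()

print(open(log_path).read().strip())        # after rotation
print(open(log_path + ".1").read().strip()) # before rotation
```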

  Note: according to  http://en.wikipedia.org/wiki/Unix_signal
  SIGHUP
     ... Many daemons will reload their configuration files and reopen their 
logfiles instead of exiting when receiving this signal.

  Currently Murano and Glance are out of sync with Oslo SIGHUP support.

  There is also the following issue for some services of OpenStack
  projects with synced SIGHUP support:
  2)
  heat-api-cfn, heat-api, heat-api-cloudwatch, keystone: it looks like
  the synced code is never executed, thus SIGHUP is not supported for
  them. Here is a simple test scenario:
  2.1) modify 
/site-packages//openstack/common/service.py
  def _sighup_supported():
  +LOG.warning("SIGHUP is supported: {0}".format(hasattr(signal, 'SIGHUP')))
  return hasattr(signal, 'SIGHUP')
  2.2) restart service foo-service-name and check logs for "SIGHUP is 
supported", if service  really supports it, the appropriate messages would be 
present in the logs.
  2.3) issue kill -HUP  and check logs for "SIGHUP is 
supported" and "Caught SIGHUP", if service  really supports it, the appropriate 
messages would be present in the logs. Besides that, the service should remain 
started and its main thread PID should not be changed.

  e.g.
  2.a) heat-engine supports HUPing:
  #service openstack-heat-engine restart
  <132>Apr 11 14:03:48 node-3 heat-heat.openstack.common.service WARNING: 
SIGHUP is supported: True

  2.b) But heat-api doesn't know how to HUP:
  #service openstack-heat-api restart
  <134>Apr 11 14:06:22 node-3 heat-heat.api INFO: Starting Heat ReST API on 
0.0.0.0:8004
  <134>Apr 11 14:06:22 node-3 heat-eventlet.wsgi.server INFO: Starting single 
process server

  2.c) HUPing heat-engine is OK
  #pid=$(cat /var/run/heat/openstack-heat-engine.pid); kill -HUP $pid && echo 
$pid
  16512
  <134>Apr 11 14:12:15 node-3 heat-heat.openstack.common.service INFO: Caught 
SIGHUP, exiting
  <132>Apr 11 14:12:15 node-3 heat-heat.openstack.common.service WARNING: 
SIGHUP is supported: True
  <134>Apr 11 14:12:15 node-3 heat-heat.openstack.common.rpc.common INFO: 
Connected to AMQP server on ...
  service openstack-heat-engine status
  openstack-heat-engine (pid  16512) is running...

  2.d) HUPed heat-api is dead now ;(
  #kill -HUP $(cat /var/run/heat/openstack-heat-api.pid)
  (no new logs)
  # service openstack-heat-api status
  openstack-heat-api dead but pid file exists

  3)
  nova-cert, nova-novncproxy, nova-objectstore, nova-consoleauth,
  nova-scheduler - unlike case 2, after the kill -HUP  command was
  issued, there would be a "Caught SIGHUP" message in the logs, BUT the
  associated service would have died anyway. Instead, the service should
  remain started and its main thread PID should not change (similar to
  the 2.c case).

  So, it looks like there is still a lot to be done to ensure POSIX
  standards compliance in OpenStack :-)

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1276694/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1276694] Re: Openstack services should support SIGHUP signal

2015-09-19 Thread Bogdan Dobrelya
Added oslo.config, as there is at least one patch addressing this
issue https://review.openstack.org/#/c/213062/

** Also affects: oslo.config
   Importance: Undecided
   Status: New

** Also affects: oslo.log
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1276694

Title:
  Openstack services should support SIGHUP signal

Status in Glance:
  Fix Released
Status in heat:
  Fix Released
Status in Keystone:
  Confirmed
Status in murano:
  Confirmed
Status in neutron:
  In Progress
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo-incubator:
  Invalid
Status in oslo.config:
  New
Status in oslo.log:
  New
Status in Sahara:
  In Progress

Bug description:
  1) In order to more effectively manage unlinked but still-open log
  file descriptors (lsof +L1) without restarting the services, the
  SIGHUP signal should be accepted by every OpenStack service.

  That would allow, e.g., logrotate jobs to gracefully HUP services after
  their log files have been rotated. The only option we have for now is
  to force a service restart, which is quite a poor option from the point
  of view of continuous service availability.

  Note: according to  http://en.wikipedia.org/wiki/Unix_signal
  SIGHUP
     ... Many daemons will reload their configuration files and reopen their 
logfiles instead of exiting when receiving this signal.

  Currently Murano and Glance are out of sync with Oslo SIGHUP support.

  There is also the following issue for some services of OpenStack
  projects with synced SIGHUP support:
  2)
  heat-api-cfn, heat-api, heat-api-cloudwatch, keystone: it looks like
  the synced code is never executed, thus SIGHUP is not supported for
  them. Here is a simple test scenario:
  2.1) modify 
/site-packages//openstack/common/service.py
  def _sighup_supported():
  +LOG.warning("SIGHUP is supported: {0}".format(hasattr(signal, 'SIGHUP')))
  return hasattr(signal, 'SIGHUP')
  2.2) restart service foo-service-name and check logs for "SIGHUP is 
supported", if service  really supports it, the appropriate messages would be 
present in the logs.
  2.3) issue kill -HUP  and check logs for "SIGHUP is 
supported" and "Caught SIGHUP", if service  really supports it, the appropriate 
messages would be present in the logs. Besides that, the service should remain 
started and its main thread PID should not be changed.

  e.g.
  2.a) heat-engine supports HUPing:
  #service openstack-heat-engine restart
  <132>Apr 11 14:03:48 node-3 heat-heat.openstack.common.service WARNING: 
SIGHUP is supported: True

  2.b) But heat-api doesn't know how to HUP:
  #service openstack-heat-api restart
  <134>Apr 11 14:06:22 node-3 heat-heat.api INFO: Starting Heat ReST API on 
0.0.0.0:8004
  <134>Apr 11 14:06:22 node-3 heat-eventlet.wsgi.server INFO: Starting single 
process server

  2.c) HUPing heat-engine is OK
  #pid=$(cat /var/run/heat/openstack-heat-engine.pid); kill -HUP $pid && echo 
$pid
  16512
  <134>Apr 11 14:12:15 node-3 heat-heat.openstack.common.service INFO: Caught 
SIGHUP, exiting
  <132>Apr 11 14:12:15 node-3 heat-heat.openstack.common.service WARNING: 
SIGHUP is supported: True
  <134>Apr 11 14:12:15 node-3 heat-heat.openstack.common.rpc.common INFO: 
Connected to AMQP server on ...
  service openstack-heat-engine status
  openstack-heat-engine (pid  16512) is running...

  2.d) HUPed heat-api is dead now ;(
  #kill -HUP $(cat /var/run/heat/openstack-heat-api.pid)
  (no new logs)
  # service openstack-heat-api status
  openstack-heat-api dead but pid file exists

  3)
  nova-cert, nova-novncproxy, nova-objectstore, nova-consoleauth,
  nova-scheduler - unlike case 2, after the kill -HUP  command was
  issued, there would be a "Caught SIGHUP" message in the logs, BUT the
  associated service would have died anyway. Instead, the service should
  remain started and its main thread PID should not change (similar to
  the 2.c case).

  So, it looks like there is still a lot to be done to ensure POSIX
  standards compliance in OpenStack :-)

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1276694/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488809] Re: [Juno][UCA] Non default configuration sections ignored for nova.conf

2015-08-27 Thread Bogdan Dobrelya
** Changed in: cloud-archive
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1488809

Title:
  [Juno][UCA] Non default configuration sections ignored for nova.conf

Status in ubuntu-cloud-archive:
  Invalid
Status in OpenStack Compute (nova):
  Invalid
Status in oslo.config:
  Invalid

Bug description:
  Non-default configuration sections [glance], [neutron] are ignored in
  nova.conf when installed from UCA packages:

  How to reproduce:
  1) Install and configure OpenStack Juno Nova with Neutron at compute node 
using UCA (http://archive.ubuntu.com/ubuntu/ trusty-updates/main amd64 
Packages):
  python-oslo.config 1:1.2.1-0ubuntu2
  python-oslo.messaging 1.3.0-0ubuntu1.2
  python-oslo.rootwrap 1.2.0-0ubuntu1
  nova-common 1:2014.1.5-0ubuntu1.2
  python-nova 1:2014.1.5-0ubuntu1.2
  neutron-common 1:2014.1.5-0ubuntu1

  /etc/nova/nova.conf example:
  [DEFAULT]
  debug=True
  ...
  [glance]
  api_servers=10.0.0.3:9292

  [neutron]
  admin_auth_url=http://10.0.0.3:5000/v2.0
  admin_username=admin
  admin_tenant_name=services
  admin_password=admin
  url=http://10.0.0.3:9696
  ...

  2) From the nova log, check which values have been applied:
  # grep -E 'admin_auth_url\s+=|admin_username\s+=|api_servers\s+=' 
/var/log/nova/nova-compute.log
  2015-08-26 07:34:48.193 30535 DEBUG nova.openstack.common.service [-] 
glance_api_servers = ['192.168.121.14:9292'] log_opt_values 
/usr/lib/python2.7/dist-packages/oslo/config/cfg.py:1941
  2015-08-26 07:34:48.210 30535 DEBUG nova.openstack.common.service [-] 
neutron_admin_auth_url = http://localhost:5000/v2.0 log_opt_values 
/usr/lib/python2.7/dist-packages/oslo/config/cfg.py:1941
  2015-08-26 07:34:48.211 30535 DEBUG nova.openstack.common.service [-] 
neutron_admin_username = None log_opt_values 
/usr/lib/python2.7/dist-packages/oslo/config/cfg.py:1941

  Expected:
  configuration options to be applied from [glance], [neutron] sections 
according to the docs 
http://docs.openstack.org/juno/config-reference/content/list-of-compute-config-options.html

  Actual:
  Defaults for the deprecated options were applied from the [DEFAULT] section 
instead
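The mismatch can be illustrated with a hedged stdlib sketch (configparser stands in for oslo.config; option names are from the log above, values invented): the old code reads the deprecated [DEFAULT] options, so values set in the [glance]/[neutron] sections are silently ignored.

```python
import configparser

# Values in a section are only picked up if the consumer actually looks
# for them there. Old nova/oslo.config read deprecated DEFAULT options
# (glance_api_servers, ...), not the newer [glance]/[neutron] groups.
conf_text = """
[DEFAULT]
debug = True

[glance]
api_servers = 10.0.0.3:9292
"""
cp = configparser.ConfigParser()
cp.read_string(conf_text)

# What the old code effectively did: look up the deprecated DEFAULT
# option, falling back to its built-in default (value invented here).
old_value = cp.get("DEFAULT", "glance_api_servers",
                   fallback="192.168.121.14:9292")
# What the docs promised: the [glance] section option.
new_value = cp.get("glance", "api_servers", fallback=None)
print(old_value)  # 192.168.121.14:9292  (stale default wins)
print(new_value)  # 10.0.0.3:9292        (ignored by the old code)
```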

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1488809/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488809] [NEW] [Juno][UCA] Non default configuration sections ignored for nova.conf

2015-08-26 Thread Bogdan Dobrelya
Public bug reported:

Non-default configuration sections [glance], [neutron] are ignored in
nova.conf when installed from UCA packages:

How to reproduce:
1) Install and configure OpenStack Juno Nova with Neutron at compute node using 
UCA (http://archive.ubuntu.com/ubuntu/ trusty-updates/main amd64 Packages):
python-oslo.config 1:1.2.1-0ubuntu2
python-oslo.messaging 1.3.0-0ubuntu1.2
python-oslo.rootwrap 1.2.0-0ubuntu1
nova-common 1:2014.1.5-0ubuntu1.2
python-nova 1:2014.1.5-0ubuntu1.2
neutron-common 1:2014.1.5-0ubuntu1

/etc/nova/nova.conf example:
[DEFAULT]
debug=True
...
[glance]
api_servers=10.0.0.3:9292

[neutron]
admin_auth_url=http://10.0.0.3:5000/v2.0
admin_username=admin
admin_tenant_name=services
admin_password=admin
url=http://10.0.0.3:9696
...

2) From the nova log, check which values have been applied:
# grep -E 'admin_auth_url\s+=|admin_username\s+=|api_servers\s+=' 
/var/log/nova/nova-compute.log
2015-08-26 07:34:48.193 30535 DEBUG nova.openstack.common.service [-] 
glance_api_servers = ['192.168.121.14:9292'] log_opt_values 
/usr/lib/python2.7/dist-packages/oslo/config/cfg.py:1941
2015-08-26 07:34:48.210 30535 DEBUG nova.openstack.common.service [-] 
neutron_admin_auth_url = http://localhost:5000/v2.0 log_opt_values 
/usr/lib/python2.7/dist-packages/oslo/config/cfg.py:1941
2015-08-26 07:34:48.211 30535 DEBUG nova.openstack.common.service [-] 
neutron_admin_username = None log_opt_values 
/usr/lib/python2.7/dist-packages/oslo/config/cfg.py:1941

Expected:
configuration options to be applied from [glance], [neutron] sections according 
to the docs 
http://docs.openstack.org/juno/config-reference/content/list-of-compute-config-options.html

Actual:
Defaults for the deprecated options were applied from the [DEFAULT] section 
instead

** Affects: cloud-archive
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: oslo.config
 Importance: Undecided
 Status: New


** Tags: oslo

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: oslo.config
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1488809

Title:
  [Juno][UCA] Non default configuration sections ignored for nova.conf

Status in ubuntu-cloud-archive:
  New
Status in OpenStack Compute (nova):
  New
Status in oslo.config:
  New

Bug description:
  Non-default configuration sections [glance], [neutron] are ignored in
  nova.conf when installed from UCA packages:

  How to reproduce:
  1) Install and configure OpenStack Juno Nova with Neutron at compute node 
using UCA (http://archive.ubuntu.com/ubuntu/ trusty-updates/main amd64 
Packages):
  python-oslo.config 1:1.2.1-0ubuntu2
  python-oslo.messaging 1.3.0-0ubuntu1.2
  python-oslo.rootwrap 1.2.0-0ubuntu1
  nova-common 1:2014.1.5-0ubuntu1.2
  python-nova 1:2014.1.5-0ubuntu1.2
  neutron-common 1:2014.1.5-0ubuntu1

  /etc/nova/nova.conf example:
  [DEFAULT]
  debug=True
  ...
  [glance]
  api_servers=10.0.0.3:9292

  [neutron]
  admin_auth_url=http://10.0.0.3:5000/v2.0
  admin_username=admin
  admin_tenant_name=services
  admin_password=admin
  url=http://10.0.0.3:9696
  ...

  2) From the nova log, check which values have been applied:
  # grep -E 'admin_auth_url\s+=|admin_username\s+=|api_servers\s+=' 
/var/log/nova/nova-compute.log
  2015-08-26 07:34:48.193 30535 DEBUG nova.openstack.common.service [-] 
glance_api_servers = ['192.168.121.14:9292'] log_opt_values 
/usr/lib/python2.7/dist-packages/oslo/config/cfg.py:1941
  2015-08-26 07:34:48.210 30535 DEBUG nova.openstack.common.service [-] 
neutron_admin_auth_url = http://localhost:5000/v2.0 log_opt_values 
/usr/lib/python2.7/dist-packages/oslo/config/cfg.py:1941
  2015-08-26 07:34:48.211 30535 DEBUG nova.openstack.common.service [-] 
neutron_admin_username = None log_opt_values 
/usr/lib/python2.7/dist-packages/oslo/config/cfg.py:1941

  Expected:
  configuration options to be applied from [glance], [neutron] sections 
according to the docs 
http://docs.openstack.org/juno/config-reference/content/list-of-compute-config-options.html

  Actual:
  Defaults for the deprecated options were applied from the [DEFAULT] section 
instead

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1488809/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1426332] Re: Nova doesn't honor enable_new_services=False

2015-02-28 Thread Bogdan Dobrelya
Hi, @Eric, no I didn't restart nova-conductor, should I? I will recheck
and update the bug, thank you. Perhaps this bug has a doc impact; are
there any details in the documentation? If not, there should be an
update for enable_new_services and nova-conductor.

** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

** Changed in: openstack-manuals
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1426332

Title:
  Nova doesn't honor enable_new_services=False

Status in OpenStack Compute (Nova):
  Incomplete
Status in OpenStack Manuals:
  Incomplete

Bug description:
  When enable_new_services=False is specified in nova.conf on a new node
  and the nova-compute service is started, the nova-compute service is
  shown as 'enabled' and 'up' in nova service-list output.

  But it is expected instead that the compute service would not register
  itself and would not be shown in the nova service-list output.

  How to reproduce with existing compute node (Destructive! Use it only on test 
environments):
  1) stop nova-compute service
  2) remove nova-compute service from DB, for example node-foo:
  use nova;
  delete from compute_nodes where hypervisor_hostname='node-foo';
  delete from services where host='node-foo';
  3) Set enable_new_services=False in nova.conf
  4) start nova-compute service
  5) check nova service-list and launch instance, as an option
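For context, the documented intent of enable_new_services=False is that a newly registered service is created disabled (it still appears in nova service-list, but as 'disabled'). A simplified sketch of that intent (assumed field names, not nova code):

```python
def register_service(existing_hosts, host, enable_new_services):
    # Sketch of the documented semantics: only *new* services (hosts not
    # already in the DB) are affected; with enable_new_services=False
    # they should come up disabled. The bug observes 'enabled' instead.
    disabled = (not enable_new_services) and host not in existing_hosts
    return {"host": host, "disabled": disabled}

svc = register_service(existing_hosts=set(), host="node-foo",
                       enable_new_services=False)
print(svc)  # {'host': 'node-foo', 'disabled': True}
```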

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1426332/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1426332] [NEW] Nova doesn't honor enable_new_services=False

2015-02-27 Thread Bogdan Dobrelya
Public bug reported:

When enable_new_services=False is specified in nova.conf on a new node
and the nova-compute service is started, the nova-compute service is
shown as 'enabled' and 'up' in nova service-list output.

But it is expected instead that the compute service will not register
itself and will not be shown in the nova service-list output.

How to reproduce with existing compute node (Destructive! Use it only on test 
environments):
1) stop nova-compute service
2) remove nova-compute service from DB, for example node-foo:
use nova;
delete from compute_nodes where hypervisor_hostname='node-foo';
delete from services where host='node-foo';
3) Set enable_new_services=False in nova.conf
4) start nova-compute service
5) check nova service-list and launch instance, as an option

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

  When specified enable_new_services=False for nova.conf at new node and
  started nova-compute service, the nova-compute service is shown
  'enabled' and 'up' in nova service-list output.
  
  But it is expected instead that the compute service will not register
  itself and will not be shown in nova-service list
+ 
+ How to reproduce with existing compute node (Destructive! Use it only on test 
environments):
+ 1) stop nova-compute service
+ 2) remove nova-compute service from DB, for example node-foo:
+ use nova;
+ delete from compute_nodes where hypervisor_hostname='node-foo';
+ delete from services where host='node-foo';
+ 3) Set enable_new_services=False in nova.conf
+ 4) start nova-compute service
+ 5) check nova service-list and launch instance, as an option

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1426332

Title:
  Nova doesn't honor enable_new_services=False

Status in OpenStack Compute (Nova):
  New

Bug description:
  When enable_new_services=False is specified in nova.conf on a new node
  and the nova-compute service is started, the nova-compute service is
  shown as 'enabled' and 'up' in nova service-list output.

  But it is expected instead that the compute service will not register
  itself and will not be shown in the nova service-list output.

  How to reproduce with existing compute node (Destructive! Use it only on test 
environments):
  1) stop nova-compute service
  2) remove nova-compute service from DB, for example node-foo:
  use nova;
  delete from compute_nodes where hypervisor_hostname='node-foo';
  delete from services where host='node-foo';
  3) Set enable_new_services=False in nova.conf
  4) start nova-compute service
  5) check nova service-list and launch instance, as an option

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1426332/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1412855] Re: Horizon logs in with unencrypted credentials

2015-02-12 Thread Bogdan Dobrelya
https://github.com/stackforge/puppet-horizon supports SSL starting from
3.0, hence triaged

** No longer affects: fuel/7.0.x

** Also affects: fuel/5.0.x
   Importance: Undecided
   Status: New

** Also affects: fuel/6.0.x
   Importance: Undecided
   Status: New

** Also affects: fuel/5.1.x
   Importance: Undecided
   Status: New

** Changed in: fuel/6.0.x
   Status: New => Triaged

** Changed in: fuel/5.1.x
   Status: New => Triaged

** Changed in: fuel/5.0.x
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1412855

Title:
  Horizon logs in with unencrypted credentials

Status in Fuel: OpenStack installer that works:
  Triaged
Status in Fuel for OpenStack 5.0.x series:
  Won't Fix
Status in Fuel for OpenStack 5.1.x series:
  Won't Fix
Status in Fuel for OpenStack 6.0.x series:
  Won't Fix
Status in Fuel for OpenStack 6.1.x series:
  Won't Fix
Status in OpenStack Dashboard (Horizon):
  Invalid
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  Horizon logs in with unencrypted credentials over HTTP.

  Steps:
  1) Open browser development tools.
  2) Log-in to Horizon
  3) Find POST request with /horizon/auth/login path.

  Request details:

  Remote Address:172.16.0.2:80
  Request URL:http://172.16.0.2/horizon/auth/login/
  Request Method:POST
  Status Code:302 FOUND
  Form Data:
  
csrfmiddlewaretoken=ulASpgYAsaikVCWsBxH6kFN2MECpaT9Y&region=http%3A%2F%2F192.168.0.1%3A5000%2Fv2.0&username=admin&password=admin

  Actual: security settings are applied at the stage of product deployment

  Expected: use HTTPS by default to improve infrastructure security at
  the stage of installation and deployment.

  Environment:
  Fuel build_id: 2014-12-26_14-25-46,release: 6.0
  Dashboard Version: 2014.2

To manage notifications about this bug go to:
https://bugs.launchpad.net/fuel/+bug/1412855/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1412855] Re: Horizon logs in with unencrypted credentials

2015-02-12 Thread Bogdan Dobrelya
This bug should be superseded by
https://blueprints.launchpad.net/fuel/+spec/ssl-endpoints

** Changed in: fuel/6.0.x
Milestone: None => 6.0.1

** Changed in: fuel/5.1.x
Milestone: None => 5.1.2

** Changed in: fuel/5.0.x
Milestone: None => 5.0.3

** Changed in: fuel/6.0.x
 Assignee: (unassigned) => Fuel Library Team (fuel-library)

** Changed in: fuel/5.1.x
 Assignee: (unassigned) => Fuel Library Team (fuel-library)

** Changed in: fuel/5.0.x
 Assignee: (unassigned) => Fuel Library Team (fuel-library)

** Changed in: fuel/6.0.x
   Status: Triaged => Won't Fix

** Changed in: fuel/5.1.x
   Status: Triaged => Won't Fix

** Changed in: fuel/5.0.x
   Status: Triaged => Won't Fix

** Changed in: fuel/6.1.x
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1412855

Title:
  Horizon logs in with unencrypted credentials

Status in Fuel: OpenStack installer that works:
  Triaged
Status in Fuel for OpenStack 5.0.x series:
  Won't Fix
Status in Fuel for OpenStack 5.1.x series:
  Won't Fix
Status in Fuel for OpenStack 6.0.x series:
  Won't Fix
Status in Fuel for OpenStack 6.1.x series:
  Won't Fix
Status in OpenStack Dashboard (Horizon):
  Invalid
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  Horizon logs in with unencrypted credentials over HTTP.

  Steps:
  1) Open browser development tools.
  2) Log-in to Horizon
  3) Find POST request with /horizon/auth/login path.

  Request details:

  Remote Address:172.16.0.2:80
  Request URL:http://172.16.0.2/horizon/auth/login/
  Request Method:POST
  Status Code:302 FOUND
  Form Data:
  
csrfmiddlewaretoken=ulASpgYAsaikVCWsBxH6kFN2MECpaT9Y&region=http%3A%2F%2F192.168.0.1%3A5000%2Fv2.0&username=admin&password=admin

  Actual: security settings are applied at the stage of product deployment

  Expected: use HTTPS by default to improve infrastructure security at
  the stage of installation and deployment.

  Environment:
  Fuel build_id: 2014-12-26_14-25-46,release: 6.0
  Dashboard Version: 2014.2

To manage notifications about this bug go to:
https://bugs.launchpad.net/fuel/+bug/1412855/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1332058] Re: keystone behavior when one memcache backend is down

2014-08-21 Thread Bogdan Dobrelya
@Dolph, when I try to use

backend=dogpile.cache.pylibmc
and
backend_argument=behaviors:tcp_nodelay:False

I receive an error from keystone:
ERROR: __init__() got an unexpected keyword argument 'behaviors' (HTTP 400)

** Changed in: keystone
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1332058

Title:
  keystone behavior when one memcache backend is down

Status in OpenStack Identity (Keystone):
  New
Status in Mirantis OpenStack:
  Confirmed

Bug description:
  Hi,

  Our implementation uses dogpile.cache.memcached as a backend for
  tokens. Recently, I have found interesting behavior when one of
  memcache regions went down. There is a 3-6 second delay when I try to
  get a token. If I have 2 backends then I have 6-12 seconds delay. It's
  very easy to test

  Test connection using

  for i in {1..20}; do (time keystone token-get > log2) 2>&1 | grep
  real | awk '{print $2}'; done

  Block one memcache backend using

  iptables -I INPUT -p tcp --dport 11211 -j DROP  (simulates a power
  outage of the node)

  Test the speed using

  for i in {1..20}; do (time keystone token-get > log2) 2>&1 | grep
  real | awk '{print $2}'; done

  Also I straced keystone process with

  strace -tt -s 512 -o /root/log1 -f -p PID

  and got

  26872 connect(9, {sa_family=AF_INET, sin_port=htons(11211),
  sin_addr=inet_addr("10.108.2.3")}, 16) = -1 EINPROGRESS (Operation now
  in progress)

  though this IP is down

  Also I checked the code

  
https://github.com/openstack/keystone/blob/master/keystone/common/kvs/core.py#L210-L237
  
https://github.com/openstack/keystone/blob/master/keystone/common/kvs/core.py#L285-L289
   
https://github.com/openstack/keystone/blob/master/keystone/common/kvs/backends/memcached.py#L96

  and was not able to find any details on how keystone treats a
  backend when it's down

  There should be logic which temporarily blocks a backend when it's not
  accessible. After a timeout period, the backend should be probed
  (without blocking get/set operations of the remaining backends) and,
  if the connection is successful, it should be added back into
  operation. Here is a sample of how it could be implemented

  http://dogpilecache.readthedocs.org/en/latest/usage.html#changing-
  backend-behavior
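The blocking/probing logic described above can be sketched as follows. This is a hedged, self-contained model, not keystone or dogpile code: `BackendPool`, `retry_after` and the injected clock are illustrative names. After a failure, a backend is skipped for a retry window instead of being reconnected on every get/set:

```python
import time

# Minimal sketch (not keystone code) of dead-backend handling: a failed
# backend is skipped for `retry_after` seconds, then becomes eligible
# for probing again.

class BackendPool:
    def __init__(self, backends, retry_after=30.0, clock=time.monotonic):
        self.backends = list(backends)
        self.retry_after = retry_after
        self.clock = clock
        self.blocked_until = {}  # backend -> monotonic deadline

    def mark_dead(self, backend):
        # Called when a connect/get/set against this backend fails.
        self.blocked_until[backend] = self.clock() + self.retry_after

    def live_backends(self):
        # Only backends outside their block window are used, so a dead
        # node does not add a connect timeout to every token request.
        now = self.clock()
        return [b for b in self.backends
                if self.blocked_until.get(b, 0) <= now]

# Fake clock so the behavior is deterministic in this example.
fake_now = [0.0]
pool = BackendPool(["10.108.2.3:11211", "10.108.2.4:11211"],
                   retry_after=30.0, clock=lambda: fake_now[0])
pool.mark_dead("10.108.2.3:11211")
print(pool.live_backends())       # -> ['10.108.2.4:11211']
fake_now[0] = 31.0                # retry window elapsed
print(len(pool.live_backends()))  # -> 2
```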

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1332058/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1293794] Re: memcached_servers timeout causes poor API response time

2014-07-31 Thread Bogdan Dobrelya
Action items: add new option which configures socket_timeout for python-
memcached

** Also affects: oslo
   Importance: Undecided
   Status: New

** Changed in: oslo
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1293794

Title:
  memcached_servers timeout causes poor API response time

Status in OpenStack Compute (Nova):
  Invalid
Status in Oslo - a Library of Common OpenStack Code:
  Confirmed

Bug description:
  In nova.conf, when configured for HA by setting the memcached_servers
  parameter to several memcached servers in the nova API cluster, e.g.:

  memcached_servers=192.168.50.11:11211,192.168.50.12:11211,192.168.50.13:11211

  If there are memcached servers on this list that are down, the time it
  takes to complete Nova API requests increases from < 1 second to 3-6
  seconds.

  It seems to me that Nova should protect itself from such performance
  degradation in an HA scenario.
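The action item amounts to bounding how long a memcached call may block. A minimal sketch of the underlying mechanism with plain sockets (python-memcached exposes a similar knob via its client socket timeout; the server/client pair below is only a stand-in for a hung memcached node):

```python
import socket
import time

# Sketch: a short socket timeout makes a call against an unresponsive
# server fail fast instead of hanging for seconds per dead backend.

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)

client = socket.socket()
client.connect(server.getsockname())
client.settimeout(0.2)  # the bounded timeout the bug asks for

start = time.monotonic()
try:
    client.recv(1)  # the server never replies, like a hung memcached node
    timed_out = False
except socket.timeout:
    timed_out = True
elapsed = time.monotonic() - start

print(timed_out)  # -> True; the call returns in ~0.2s, not 3-6s
client.close()
server.close()
```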

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1293794/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1276694] Re: Openstack services should support SIGHUP signal

2014-07-24 Thread Bogdan Dobrelya
** Summary changed:

- [mos] Openstack services should support SIGHUP signal
+ Openstack services should support SIGHUP signal

** No longer affects: fuel

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1276694

Title:
  Openstack services should support SIGHUP signal

Status in OpenStack Image Registry and Delivery Service (Glance):
  Confirmed
Status in Orchestration API (Heat):
  Confirmed
Status in OpenStack Identity (Keystone):
  Confirmed
Status in OpenStack Compute (Nova):
  Confirmed
Status in Oslo - a Library of Common OpenStack Code:
  Invalid

Bug description:
  1) In order to more effectively manage the unlinked and open (lsof +L1)
  log file descriptors w/o restarting the services, the SIGHUP signal
  should be accepted by every OpenStack service.

  That would allow, e.g. logrotate jobs to gracefully HUP services after
  their log files were rotated. The only option we have for now is to
  force the services restart, quite a poor option from the services
  continuous accessibility PoV.

  Note: according to  http://en.wikipedia.org/wiki/Unix_signal
  SIGHUP
     ... Many daemons will reload their configuration files and reopen their 
logfiles instead of exiting when receiving this signal.

  Currently Murano and Glance are out of sync with Oslo SIGHUP support.

  The following issue also exists for some of the services of OS 
projects with synced SIGHUP support:
  2)
  heat-api-cfn, heat-api, heat-api-cloudwatch, keystone: it looks like the synced 
code is never executed, thus SIGHUP is not supported for them. Here is a 
simple test scenario:
  2.1) modify 
<python-path>/site-packages/<foo-service-name>/openstack/common/service.py
  def _sighup_supported():
  +LOG.warning("SIGHUP is supported: {0}".format(hasattr(signal, 'SIGHUP')))
  return hasattr(signal, 'SIGHUP')
  2.2) restart service <foo-service-name> and check logs for "SIGHUP is 
supported"; if the service really supports it, the appropriate messages will be 
present in the logs.
  2.3) issue kill -HUP <foo-service-pid> and check logs for "SIGHUP is 
supported" and "Caught SIGHUP"; if the service really supports it, the 
appropriate messages will be present in the logs. Besides that, the service 
should remain started and its main thread PID should not change.

  e.g.
  2.a) heat-engine supports HUPing:
  # service openstack-heat-engine restart
  <132>Apr 11 14:03:48 node-3 heat-heat.openstack.common.service WARNING: 
SIGHUP is supported: True

  2.b) But heat-api doesn't know how to HUP:
  # service openstack-heat-api restart
  <134>Apr 11 14:06:22 node-3 heat-heat.api INFO: Starting Heat ReST API on 
0.0.0.0:8004
  <134>Apr 11 14:06:22 node-3 heat-eventlet.wsgi.server INFO: Starting single 
process server

  2.c) HUPing heat-engine is OK:
  # pid=$(cat /var/run/heat/openstack-heat-engine.pid); kill -HUP $pid && echo 
$pid
  16512
  <134>Apr 11 14:12:15 node-3 heat-heat.openstack.common.service INFO: Caught 
SIGHUP, exiting
  <132>Apr 11 14:12:15 node-3 heat-heat.openstack.common.service WARNING: 
SIGHUP is supported: True
  <134>Apr 11 14:12:15 node-3 heat-heat.openstack.common.rpc.common INFO: 
Connected to AMQP server on ...
  # service openstack-heat-engine status
  openstack-heat-engine (pid 16512) is running...

  2.d) HUPed heat-api is dead now ;(
  # kill -HUP $(cat /var/run/heat/openstack-heat-api.pid)
  (no new logs)
  # service openstack-heat-api status
  openstack-heat-api dead but pid file exists

  3)
  nova-cert, nova-novncproxy, nova-objectstore, nova-consoleauth, 
nova-scheduler - unlike case 2, after the kill -HUP <foo-service-pid> command 
is issued there is a "Caught SIGHUP" message in the logs, BUT the associated 
service dies anyway. Instead, the service should remain started and its main 
thread PID should not change (similar to the 2.c case).

  So it looks like a lot of things still need to be done to ensure
  POSIX standards abidance in OpenStack :-)
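The reopen-on-SIGHUP behavior that item 1 asks for can be sketched generically. This is plain Python on a POSIX system, not the Oslo service code; the logger name and paths are invented for the example:

```python
import logging
import os
import signal
import tempfile

# Sketch: on SIGHUP the daemon reopens its log file instead of exiting,
# so logrotate can rename the old file and send HUP from postrotate.

log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, "svc.log")

log = logging.getLogger("sighup-demo")
log.setLevel(logging.INFO)
handler = logging.FileHandler(log_path)
log.addHandler(handler)

def reopen_logs(signum, frame):
    # Drop the handler whose descriptor points at the rotated-away
    # inode (the "unlinked and open" lsof +L1 case) and reopen the path.
    global handler
    log.removeHandler(handler)
    handler.close()
    handler = logging.FileHandler(log_path)
    log.addHandler(handler)

signal.signal(signal.SIGHUP, reopen_logs)

log.info("before rotation")
os.rename(log_path, log_path + ".1")   # what logrotate does
os.kill(os.getpid(), signal.SIGHUP)    # logrotate's postrotate HUP

log.info("after rotation")
with open(log_path) as f:
    print(f.read().strip())  # -> after rotation
```

The process stays alive with the same PID; only the log stream is reopened, which is exactly the contrast between the 2.c and 2.d cases above.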

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1276694/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 856764] Re: RabbitMQ connections lack heartbeat or TCP keepalives

2014-07-01 Thread Bogdan Dobrelya
** Also affects: fuel
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/856764

Title:
  RabbitMQ connections lack heartbeat or TCP keepalives

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Ceilometer icehouse series:
  Fix Released
Status in Cinder:
  New
Status in Fuel: OpenStack installer that works:
  New
Status in Orchestration API (Heat):
  Confirmed
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  Invalid
Status in Oslo - a Library of Common OpenStack Code:
  In Progress
Status in Messaging API for OpenStack:
  In Progress

Bug description:
  There is currently no method built into Nova to keep connections from
  various components into RabbitMQ alive.  As a result, placing a
  stateful firewall (such as a Cisco ASA) between the connection
  can/does result in idle connections being terminated without either
  endpoint being aware.

  This issue can be mitigated a few different ways:

  1. Connections to RabbitMQ set socket options to enable TCP
  keepalives.

  2. Rabbit has heartbeat functionality.  If the client requests
  heartbeats on connection, rabbit server will regularly send messages
  to each connections with the expectation of a response.

  3. Other?
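Mitigation 1 can be sketched at the socket level. The `TCP_KEEPIDLE`/`TCP_KEEPINTVL`/`TCP_KEEPCNT` options are Linux-specific (hence the guard), and the numeric values are illustrative; kombu and oslo.messaging expose higher-level knobs for this:

```python
import socket

# Sketch: enable TCP keepalives on an AMQP client socket so a stateful
# firewall sees periodic traffic on otherwise-idle connections.

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# Linux-specific tuning: start probing after 60s idle, probe every 10s,
# give up after 5 unanswered probes. Guarded because the constants are
# not available on every platform.
if hasattr(socket, "TCP_KEEPIDLE"):
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)

keepalive_on = sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE) != 0
print(keepalive_on)  # -> True
sock.close()
```

AMQP heartbeats (mitigation 2) work at the protocol layer instead, which also detects a peer that is reachable at the TCP level but no longer processing frames.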

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/856764/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1101404] Re: nova syslog logging to /dev/log race condition in python 2.6

2014-06-23 Thread Bogdan Dobrelya
** Also affects: mos
   Importance: Undecided
   Status: New

** Changed in: mos
   Status: New => Confirmed

** Changed in: mos
   Importance: Undecided => High

** Changed in: mos
Milestone: None => 5.1

** Changed in: mos
 Assignee: (unassigned) => MOS Nova (mos-nova)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1101404

Title:
  nova syslog logging to /dev/log race condition in python 2.6

Status in OpenStack Identity (Keystone):
  Confirmed
Status in Mirantis OpenStack:
  Confirmed
Status in OpenStack Compute (Nova):
  Confirmed
Status in Oslo - a Library of Common OpenStack Code:
  Confirmed

Bug description:
  
  running nova-api-ec2
  running rsyslog

  service rsyslog restart ; service nova-api-ec2 restart

  nova-api-ec2 consumes up to 100% of the available CPU (or at least a
  full core) and is not responsive.  /var/log/nova/nova-api-ec2.log
  states the socket is already in use.

  strace the process

  sendto(3, "<142>2013-01-18 20:00:22 24882 INFO nova.service [-] Caught
  SIGTERM, exiting\0", 77, 0, NULL, 0) = -1 ENOTCONN (Transport endpoint
  is not connected)

  service nova-api-ec2 restart fails as upstart already thinks the
  process has been terminated.

  The only way to recover is to pkill -9 nova-api-ec2 and then restart
  it with 'service nova-api-ec2 restart'.

  The same behavior has been seen in all nova-api services.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1101404/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 856764] Re: RabbitMQ connections lack heartbeat or TCP keepalives

2014-05-26 Thread Bogdan Dobrelya
Please sync kombu_reconnect_delay for all affected projects as well.

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: New => Confirmed

** Also affects: heat
   Importance: Undecided
   Status: New

** Changed in: heat
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/856764

Title:
  RabbitMQ connections lack heartbeat or TCP keepalives

Status in Orchestration API (Heat):
  Confirmed
Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in OpenStack Compute (Nova):
  Invalid
Status in Oslo - a Library of Common OpenStack Code:
  Triaged
Status in Messaging API for OpenStack:
  In Progress

Bug description:
  There is currently no method built into Nova to keep connections from
  various components into RabbitMQ alive.  As a result, placing a
  stateful firewall (such as a Cisco ASA) between the connection
  can/does result in idle connections being terminated without either
  endpoint being aware.

  This issue can be mitigated a few different ways:

  1. Connections to RabbitMQ set socket options to enable TCP
  keepalives.

  2. Rabbit has heartbeat functionality.  If the client requests
  heartbeats on connection, rabbit server will regularly send messages
  to each connections with the expectation of a response.

  3. Other?

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/856764/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 856764] Re: RabbitMQ connections lack heartbeat or TCP keepalives

2014-05-26 Thread Bogdan Dobrelya
** Also affects: ceilometer
   Importance: Undecided
   Status: New

** Changed in: ceilometer
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/856764

Title:
  RabbitMQ connections lack heartbeat or TCP keepalives

Status in OpenStack Telemetry (Ceilometer):
  Confirmed
Status in Orchestration API (Heat):
  Confirmed
Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in OpenStack Compute (Nova):
  Invalid
Status in Oslo - a Library of Common OpenStack Code:
  Triaged
Status in Messaging API for OpenStack:
  In Progress

Bug description:
  There is currently no method built into Nova to keep connections from
  various components into RabbitMQ alive.  As a result, placing a
  stateful firewall (such as a Cisco ASA) between the connection
  can/does result in idle connections being terminated without either
  endpoint being aware.

  This issue can be mitigated a few different ways:

  1. Connections to RabbitMQ set socket options to enable TCP
  keepalives.

  2. Rabbit has heartbeat functionality.  If the client requests
  heartbeats on connection, rabbit server will regularly send messages
  to each connections with the expectation of a response.

  3. Other?

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/856764/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1306559] [NEW] Fix python26 compatibility for RFCSysLogHandler

2014-04-11 Thread Bogdan Dobrelya
Public bug reported:

The currently used pattern 
https://review.openstack.org/#/c/63094/15/openstack/common/log.py (lines 
471-479) will fail for Python 2.6.x.
In order to fix the broken Python 2.6.x compatibility, old-style explicit 
superclass method calls should be used instead.
(An example script to test this behavior with Python 2.7 vs 2.6: 
http://pastebin.com/e1wANw47 )
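The fix direction can be illustrated with plain classes. This models the issue rather than reproducing the oslo code: `BaseHandler` stands in for logging.handlers.SysLogHandler (an old-style class on Python 2.6), and the `<RFC5424>` prefix is invented for the example. On 2.6, `super(RFCSysLogHandler, self)` raises TypeError because the parent is not a new-style class, while the explicit superclass call works on both 2.6 and 2.7:

```python
# Illustrative model only: "BaseHandler" stands in for
# logging.handlers.SysLogHandler, and the "<RFC5424> " prefix is
# invented for the example.

class BaseHandler:
    def format(self, record):
        return "base:" + record

class RFCSysLogHandler(BaseHandler):
    def format(self, record):
        # Old-style explicit superclass call: works on Python 2.6,
        # where super(RFCSysLogHandler, self).format(record) would
        # raise TypeError because the parent is an old-style class.
        msg = BaseHandler.format(self, record)
        return "<RFC5424> " + msg

print(RFCSysLogHandler().format("hello"))  # -> <RFC5424> base:hello
```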

** Affects: nova
 Importance: Undecided
 Status: Confirmed

** Affects: oslo
 Importance: Undecided
 Status: New


** Tags: log

** Summary changed:

-  Fix python26 compatibility forRFCSysLogHandler
+ Fix python26 compatibility for RFCSysLogHandler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1306559

Title:
  Fix python26 compatibility for RFCSysLogHandler

Status in OpenStack Compute (Nova):
  Confirmed
Status in Oslo - a Library of Common OpenStack Code:
  New

Bug description:
  Currently used pattern 
https://review.openstack.org/#/c/63094/15/openstack/common/log.py (lines 
471-479)  will fail for Python 2.6.x.
  In order to fix the broken Python 2.6.x compatibility, old style explicit 
superclass method calls should be used instead.
  (an example of script to test this behavior with Python 2.7 vs 2.6: 
http://pastebin.com/e1wANw47 )

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1306559/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1306559] Re: Fix python26 compatibility for RFCSysLogHandler

2014-04-11 Thread Bogdan Dobrelya
Addressed by: https://review.openstack.org/#/c/86875/

** Also affects: oslo
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: Bogdan Dobrelya (bogdando) => (unassigned)

** Changed in: nova
   Status: In Progress => Confirmed

** Also affects: ceilometer
   Importance: Undecided
   Status: New

** Also affects: cinder
   Importance: Undecided
   Status: New

** Also affects: glance
   Importance: Undecided
   Status: New

** Also affects: keystone
   Importance: Undecided
   Status: New

** Also affects: sahara
   Importance: Undecided
   Status: New

** Also affects: heat
   Importance: Undecided
   Status: New

** Also affects: murano
   Importance: Undecided
   Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: ceilometer
   Status: New => Confirmed

** Changed in: cinder
   Status: New => Confirmed

** Changed in: glance
   Status: New => Confirmed

** Changed in: keystone
   Status: New => Confirmed

** Changed in: sahara
   Status: New => Confirmed

** Changed in: heat
   Status: New => Confirmed

** Changed in: murano
   Status: New => Confirmed

** Changed in: neutron
   Status: New => Confirmed

** Changed in: oslo
   Status: New => In Progress

** Changed in: oslo
 Assignee: (unassigned) => Bogdan Dobrelya (bogdando)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1306559

Title:
  Fix python26 compatibility for RFCSysLogHandler

Status in OpenStack Telemetry (Ceilometer):
  Confirmed
Status in Cinder:
  Confirmed
Status in OpenStack Image Registry and Delivery Service (Glance):
  Confirmed
Status in Orchestration API (Heat):
  Confirmed
Status in OpenStack Identity (Keystone):
  Confirmed
Status in Murano:
  Confirmed
Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in OpenStack Compute (Nova):
  Confirmed
Status in Oslo - a Library of Common OpenStack Code:
  In Progress
Status in OpenStack Data Processing (Sahara, ex. Savanna):
  Confirmed

Bug description:
  Currently used pattern 
https://review.openstack.org/#/c/63094/15/openstack/common/log.py (lines 
471-479)  will fail for Python 2.6.x.
  In order to fix the broken Python 2.6.x compatibility, old style explicit 
superclass method calls should be used instead.
  (an example of script to test this behavior with Python 2.7 vs 2.6: 
http://pastebin.com/e1wANw47 )

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1306559/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1276694] Re: Openstack services should support SIGHUP signal

2014-04-11 Thread Bogdan Dobrelya
** Description changed:

- In order to more effectively manage the unlinked and open (lsof +L1) log
- files descriptors w/o restarting the services, SIGHUP signal should be
- accepted by every Openstack service.
+ 1)In order to more effectively manage the unlinked and open (lsof +L1)
+ log files descriptors w/o restarting the services, SIGHUP signal should
+ be accepted by every Openstack service.
  
  That would allow, e.g. logrotate jobs to gracefully HUP services after
  their log files were rotated. The only option we have for now is to
  force the services restart, quite a poor option from the services
  continuous accessibility PoV.
  
  Note: according to  http://en.wikipedia.org/wiki/Unix_signal
  SIGHUP
-... Many daemons will reload their configuration files and reopen their 
logfiles instead of exiting when receiving this signal.
+    ... Many daemons will reload their configuration files and reopen their 
logfiles instead of exiting when receiving this signal.
+ 
+ Currently Murano and Glance are out of sync with Oslo SIGHUP support.
+ 
+ The following issue also exists for some of the services of OS 
projects with synced SIGHUP support:
+ 2)
+ heat-api-cfn, heat-api, heat-api-cloudwatch, keystone: it looks like the synced 
code is never executed, thus SIGHUP is not supported for them. Here is a 
simple test scenario:
+ 2.1) modify 
<python-path>/site-packages/<foo-service-name>/openstack/common/service.py
+ def _sighup_supported():
+ +LOG.warning("SIGHUP is supported: {0}".format(hasattr(signal, 'SIGHUP')))
+ return hasattr(signal, 'SIGHUP')
+ 2.2) restart service <foo-service-name> and check logs for "SIGHUP is 
supported"; if the service really supports it, the appropriate messages would be 
present in the logs.
+ 2.3) issue kill -HUP <foo-service-pid> and check logs for "SIGHUP is 
supported"; if the service really supports it, the appropriate messages would be 
present in the logs. Besides that, the service should remain started and its 
main thread PID should not be changed.
+ 
+ 3)
+ nova-cert, nova-novncproxy, nova-objectstore, nova-consoleauth, 
nova-scheduler - in case the kill -HUP <foo-service-pid> command was issued, 
the associated service would have died, though there are SIGHUP related 
messages in the logs. But the service should remain started and its main thread 
PID should not be changed.
+ 
+ So, it looks like there are a lot of things still to be done to ensure
+ POSIX standards abidance in OpenStack :-)

** Also affects: nova
   Importance: Undecided
   Status: New

** Also affects: heat
   Importance: Undecided
   Status: New

** Also affects: keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1276694

Title:
  Openstack services should support SIGHUP signal

Status in OpenStack Image Registry and Delivery Service (Glance):
  Confirmed
Status in Orchestration API (Heat):
  New
Status in OpenStack Identity (Keystone):
  New
Status in Murano:
  Confirmed
Status in OpenStack Compute (Nova):
  New
Status in Oslo - a Library of Common OpenStack Code:
  Invalid

Bug description:
  1) In order to more effectively manage the unlinked but still open (lsof
  +L1) log file descriptors without restarting the services, the SIGHUP
  signal should be accepted by every OpenStack service.

  That would allow, e.g., logrotate jobs to gracefully HUP services after
  their log files have been rotated. The only option we have for now is to
  force the services to restart, quite a poor option from the services'
  continuous availability PoV.
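  For illustration, the logrotate side of this could look like the fragment
  below. The log path, rotation policy, and pkill pattern are hypothetical;
  the point is the postrotate hook sending HUP instead of restarting:

```
# Hypothetical /etc/logrotate.d/nova fragment: rotate the logs, then ask
# the running services to reopen their files via SIGHUP rather than
# forcing a restart.
/var/log/nova/*.log {
    weekly
    rotate 4
    compress
    missingok
    sharedscripts
    postrotate
        # the -f pattern is illustrative; adjust to the actual daemon names
        pkill -HUP -f nova- || true
    endscript
}
```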

  Note: according to  http://en.wikipedia.org/wiki/Unix_signal
  SIGHUP
     ... Many daemons will reload their configuration files and reopen their 
logfiles instead of exiting when receiving this signal.

  Currently Murano and Glance are out of sync with Oslo SIGHUP support.

  The following issue also exists for some of the OS project services with 
synced SIGHUP support:
  2)
  heat-api-cfn, heat-api, heat-api-cloudwatch, keystone: it looks like the 
synced code is never executed, thus SIGHUP is not supported for them. Here 
is a simple test scenario:
  2.1) modify 
python-path/site-packages/foo-service-name/openstack/common/service.py
  def _sighup_supported():
  +    LOG.warning("SIGHUP is supported: {0}".format(hasattr(signal, 'SIGHUP')))
  return hasattr(signal, 'SIGHUP')
  2.2) restart the foo-service-name service and check the logs for "SIGHUP is 
supported"; if the service really supports it, the appropriate messages will 
be present in the logs.
  2.3) issue kill -HUP foo-service-pid and check the logs for "SIGHUP is 
supported"; if the service really supports it, the appropriate messages will 
be present in the logs. Besides that, the service should remain running and 
its main thread PID should not change.

  3)
  nova-cert, nova-novncproxy, nova-objectstore, nova-consoleauth, 
nova-scheduler - when the kill -HUP foo-service-pid command is issued, the 
associated service dies, even though there are SIGHUP related messages in 
the logs. The service should instead remain running and its main thread PID 
should not change.

[Yahoo-eng-team] [Bug 904307] Re: Application/server name not available in service logs

2014-02-12 Thread Bogdan Dobrelya
** Also affects: murano
   Importance: Undecided
   Status: New

** Also affects: heat
   Importance: Undecided
   Status: New

** Also affects: ceilometer
   Importance: Undecided
   Status: New

** Also affects: savanna
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/904307

Title:
  Application/server name not available in service logs

Status in OpenStack Telemetry (Ceilometer):
  New
Status in Cinder:
  New
Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in Orchestration API (Heat):
  New
Status in OpenStack Identity (Keystone):
  Triaged
Status in Murano Project:
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  Confirmed
Status in Oslo - a Library of Common OpenStack Code:
  Fix Committed
Status in OpenStack Data Processing (Savanna):
  New

Bug description:
  If Nova is configured to use syslog based logging, and there are
  multiple services running on a system, it becomes difficult to
  identify the service that emitted the log. This can be resolved if the
  log record also contains the name of the service/binary that generated
  the log. This will also be useful with an OpenStack system using a
  centralized syslog based logging mechanism.
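A minimal sketch of the requested behaviour, assuming plain stdlib logging rather than the actual Oslo code: tag every record handed to syslog with the emitting binary's name, so co-located services can be told apart in a shared or centralized syslog stream. The helper names here are hypothetical.

```python
import logging
import logging.handlers
import os
import sys


def service_log_formatter(binary):
    # Prefix each record with the service/binary name ("nova-api: ..."),
    # following the syslog "tag: message" convention that lets syslog
    # daemons recover the program name per record.
    return logging.Formatter(binary + ': %(levelname)s %(message)s')


def setup_syslog_logging(binary=None):
    # Hypothetical helper: route the root logger to the local syslog
    # socket, tagging records with this process's binary name.
    binary = binary or os.path.basename(sys.argv[0])
    handler = logging.handlers.SysLogHandler(address='/dev/log')
    handler.setFormatter(service_log_formatter(binary))
    logging.getLogger().addHandler(handler)
    return handler
```

With this in place, two services on the same host emit e.g. "nova-api: INFO ..." and "nova-compute: INFO ..." rather than indistinguishable lines.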

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/904307/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1276694] Re: Openstack services should support SIGHUP signal

2014-02-06 Thread Bogdan Dobrelya
Glance is affected, see
https://github.com/openstack/glance/blob/master/glance/openstack/common/service.py

** Also affects: glance
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1276694

Title:
  Openstack services should support SIGHUP signal

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in Oslo - a Library of Common OpenStack Code:
  Invalid

Bug description:
  In order to more effectively manage the unlinked but still open (lsof +L1)
  log file descriptors without restarting the services, the SIGHUP signal
  should be accepted by every OpenStack service.

  That would allow, e.g., logrotate jobs to gracefully HUP services after
  their log files have been rotated. The only option we have for now is to
  force the services to restart, quite a poor option from the services'
  continuous availability PoV.

  Note: according to  http://en.wikipedia.org/wiki/Unix_signal
  SIGHUP
 ... Many daemons will reload their configuration files and reopen their 
logfiles instead of exiting when receiving this signal.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1276694/+subscriptions



[Yahoo-eng-team] [Bug 1276694] Re: Openstack services should support SIGHUP signal

2014-02-06 Thread Bogdan Dobrelya
murano-api, murano-conductor are affected, see

https://github.com/stackforge/murano-api/blob/master/muranoapi/openstack/common/service.py

https://github.com/stackforge/murano-conductor/blob/master/muranoconductor/openstack/common/service.py


** Also affects: murano
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1276694

Title:
  Openstack services should support SIGHUP signal

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in Murano Project:
  New
Status in Oslo - a Library of Common OpenStack Code:
  Invalid

Bug description:
  In order to more effectively manage the unlinked but still open (lsof +L1)
  log file descriptors without restarting the services, the SIGHUP signal
  should be accepted by every OpenStack service.

  That would allow, e.g., logrotate jobs to gracefully HUP services after
  their log files have been rotated. The only option we have for now is to
  force the services to restart, quite a poor option from the services'
  continuous availability PoV.

  Note: according to  http://en.wikipedia.org/wiki/Unix_signal
  SIGHUP
 ... Many daemons will reload their configuration files and reopen their 
logfiles instead of exiting when receiving this signal.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1276694/+subscriptions



[Yahoo-eng-team] [Bug 904307] Re: Application/server name not available in service logs

2014-01-21 Thread Bogdan Dobrelya
Added all affected core projects so we don't forget to update them in I-3 as
well

** Also affects: glance
   Importance: Undecided
   Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

** Also affects: cinder
   Importance: Undecided
   Status: New

** Also affects: keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/904307

Title:
  Application/server name not available in service logs

Status in Cinder:
  New
Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in OpenStack Identity (Keystone):
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  Confirmed
Status in Oslo - a Library of Common OpenStack Code:
  In Progress

Bug description:
  If Nova is configured to use syslog based logging, and there are
  multiple services running on a system, it becomes difficult to
  identify the service that emitted the log. This can be resolved if the
  log record also contains the name of the service/binary that generated
  the log. This will also be useful with an OpenStack system using a
  centralized syslog based logging mechanism.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/904307/+subscriptions
