[Yahoo-eng-team] [Bug 1493126] Re: openstack group create fails while using admin token

2015-09-19 Thread Henry Nash
I do not consider this a bug.  We state that you must either explicitly
supply the domain_id of a group in the entity passed to the create call
OR use a domain-scoped token.  Since the ADMIN token is not a
domain-scoped token, you must provide the domain_id in the entity itself
(which, to be honest, should be the recommended way of doing it anyway).
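
A minimal sketch of that recommended approach (the token value and
endpoint URL are placeholders, not taken from the bug): pass the domain
explicitly via python-keystoneclient, so no domain-scoped token is needed.

    # Sketch only: 'ADMIN_TOKEN' and the endpoint URL are assumptions.
    from keystoneclient.v3 import client

    ks = client.Client(token='ADMIN_TOKEN',
                       endpoint='http://controller:35357/v3')
    # Supplying the domain in the entity itself satisfies the create call.
    group = ks.groups.create(name='qwerty', domain='default')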

** Changed in: keystone
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1493126

Title:
  openstack group create fails while using admin token

Status in Keystone:
  Invalid

Bug description:
  While using --os-token=ADMIN_TOKEN rather than admin user credentials, group creation fails with this error message:

  $ openstack --os-token= group create "qwerty"
  ERROR: openstack The request you have made requires authentication. (Disable debug mode to suppress these details.) (HTTP 401) (Request-ID: req-8b45e<...>)

  OS_USERNAME and OS_PASSWORD are set to ""

  Keystone log contains:

  2015-09-07 19:30:50.514850 14499 DEBUG keystone.middleware.core [-] RBAC: auth_context: {} process_request /opt/stack/keystone/keystone/middleware/core.py:209
  2015-09-07 19:30:50.533697 14499 INFO keystone.common.wsgi [-] POST http://172.16.51.28:5000/v3/groups
  2015-09-07 19:30:50.536504 14499 WARNING keystone.common.controller [-] RBAC: Bypassing authorization
  2015-09-07 19:30:50.539266 14499 WARNING keystone.common.utils [-] Couldn't find the auth context.
  2015-09-07 19:30:50.547398 14499 WARNING keystone.common.wsgi [-] Authorization failed. The request you have made requires authentication. (Disable debug mode to suppress these details.) (Disable debug mode to suppress these details.) from 

  Using admin credentials works fine.

  ---
  Investigation showed that the root cause is that during group creation [0], the token information is extracted from the context [1], which is empty for a request authenticated using ADMIN_TOKEN [2].

  [0] https://github.com/openstack/keystone/blob/master/keystone/identity/controllers.py#L300
  [1] https://github.com/openstack/keystone/blob/master/keystone/common/utils.py#L523-L525
  [2] https://github.com/openstack/keystone/blob/master/keystone/middleware/core.py#L72
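
  To illustrate, a hedged sketch (not the actual keystone code) of what
  [0]-[2] amount to: the ADMIN_TOKEN path leaves the auth context empty,
  so reading the token's scope later fails, producing the 401 above.

      # Sketch only; function and key names are illustrative.
      def get_token_domain(auth_context):
          token = auth_context.get('token')
          if token is None:  # ADMIN_TOKEN bypasses validation -> empty context
              raise ValueError('The request you have made requires authentication.')
          return token['domain_id']

      get_token_domain({})  # raises, matching the HTTP 401 above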

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1493126/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474198] Re: task_state not NONE after instance boot failed

2015-09-19 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1474198

Title:
  task_state not NONE after instance boot failed

Status in OpenStack Compute (nova):
  Expired

Bug description:
  1. Exact version of Nova:
  python-novaclient-2.23.0
  openstack-nova-common-2015.1.0
  python-nova-2015.1.0
  openstack-nova-api-2015.1.0
  openstack-nova-scheduler-2015.1.0
  openstack-nova-conductor-2015.1.0
  openstack-nova-compute-2015.1.0
  openstack-nova-2015.1.0

  2. Relevant log files:
  2015-07-14 11:15:07.559 19984 ERROR nova.compute.manager [req-8b567c49-850a-4f00-a73b-c2879528ef39 - - - - -] [instance: f0a16736-078a-4476-a56a-abee46fdc5f5] Instance failed to spawn
  2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5] Traceback (most recent call last):
  2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2565, in _build_resources
  2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5]     yield resources
  2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2437, in _build_and_run_instance
  2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5]     block_device_info=block_device_info)
  2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2385, in spawn
  2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5]     write_to_disk=True)
  2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4232, in _get_guest_xml
  2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5]     context)
  2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4103, in _get_guest_config
  2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5]     flavor, virt_type)
  2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/vif.py", line 374, in get_config
  2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5]     _("Unexpected vif_type=%s") % vif_type)
  2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5] NovaException: Unexpected vif_type=binding_failed
  2015-07-14 11:15:07.559 19984 TRACE nova.compute.manager [instance: f0a16736-078a-4476-a56a-abee46fdc5f5] 
  2015-07-14 11:15:07.565 19984 INFO nova.compute.manager [req-a32fae7b-2a26-4d44-ab89-e16db804a9f0 58e88aff70dd4959ba5293dab8f6ceac c45dae15962c4797b70f6c278a232f3c - - -] [instance: f0a16736-078a-4476-a56a-abee46fdc5f5] Terminating instance
  2015-07-14 11:15:07.572 19984 INFO nova.virt.libvirt.driver [-] [instance: f0a16736-078a-4476-a56a-abee46fdc5f5] During wait destroy, instance disappeared.

  3. Reproduce steps:
  * Stop neutron-openvswitch-agent on compute node;
  * Boot one instance

  Expected result:
  Task state of instance should be None

  Actual result:
  Task state of instance was always spawning
  # nova list
  +--------------------------------------+--------------------------------+--------+------------+-------------+----------+
  | ID                                   | Name                           | Status | Task State | Power State | Networks |
  +--------------------------------------+--------------------------------+--------+------------+-------------+----------+
  | f0a16736-078a-4476-a56a-abee46fdc5f5 | instance_test_vif_binding_fail | ERROR  | spawning   | NOSTATE     |          |
  +--------------------------------------+--------------------------------+--------+------------+-------------+----------+
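
  A minimal sketch (names assumed; not nova's actual error-handling path)
  of the settling behaviour the report expects after a failed boot:

      # Sketch only: vm_state should go to ERROR while task_state is
      # cleared, instead of being left as 'spawning'.
      def settle_failed_build(instance):
          instance['vm_state'] = 'error'
          instance['task_state'] = None
          return instance

      print(settle_failed_build({'vm_state': 'building',
                                 'task_state': 'spawning'}))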

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1474198/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497508] [NEW] functional tests fail due to ping timeout on server start

2015-09-19 Thread Venkatesh Sampath
Public bug reported:

Whenever I run ‘tox -epy27’, the functional tests consistently fail due to
a ping timeout while trying to start the servers (glance-api,
glance-registry, etc.) needed to run the tests.

I am running the tests from a VM with 8GB of RAM and a 2.7GHz Intel Core
i7 processor.

I could never make the functional tests pass until I bumped the timeout up
from its current value of 10 seconds.
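
For illustration, a hedged sketch of the kind of launch wait involved
(names and structure assumed, not the exact glance test code); raising the
deadline beyond 10 seconds is what made the tests pass for me:

    import time

    def wait_for_server(ping, timeout=10.0, interval=0.2):
        # Poll until the server answers or the (configurable) deadline passes.
        deadline = time.time() + timeout
        while time.time() < deadline:
            if ping():
                return True
            time.sleep(interval)
        return False  # -> "Unexpected server launch status" on slow machines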

Below is a snippet of the exception stack trace captured from the console
output.

CONSOLE OUTPUT WITH ERROR STACKTRACE:

venkatesh@vsbox:~/workspace/sf_oswork/repos/glance$ tox -epy27
py27 develop-inst-noop: /media/sf_oswork/repos/glance
py27 installed: 
aioeventlet==0.4,alembic==0.8.2,amqp==1.4.6,anyjson==0.3.3,appdirs==1.4.0,automaton==0.7.0,Babel==2.0,cachetools==1.1.1,castellan==0.2.1,cffi==1.2.1,contextlib2==0.4.0,coverage==3.7.1,cryptography==1.0.1,debtcollector==0.8.0,decorator==4.0.2,docutils==0.12,enum34==1.0.4,eventlet==0.17.4,extras==0.0.3,fasteners==0.13.0,fixtures==1.3.1,flake8==2.2.4,funcsigs==0.4,functools32==3.2.3.post2,futures==3.0.3,futurist==0.5.0,-e
 
git+...@github.com:openstack/glance.git@cef71f71ded895817eb245cd6aa5519293443d71#egg=glance-gerrit_master,glance-store==0.9.1,greenlet==0.4.9,hacking==0.10.2,httplib2==0.9.1,idna==2.0,ipaddress==1.0.14,iso8601==0.1.10,Jinja2==2.8,jsonschema==2.5.1,keystonemiddleware==2.2.0,kombu==3.0.26,linecache2==1.0.0,Mako==1.0.2,MarkupSafe==0.23,mccabe==0.2.1,mock==1.3.0,monotonic==0.3,mox3==0.10.0,msgpack-python==0.4.6,netaddr==0.7.18,netifaces==0.10.4,networkx==1.10,os-client-config==1.6.3,oslo.concurrency==2.6.0,oslo.config==2.4.0,oslo.context==0.6.0,oslo.db==2.5.0,oslo.i18n==2.6.0,oslo.log==1.11.0,oslo.messaging==2.5.0,oslo.middleware==2.8.0,oslo.policy==0.11.0,oslo.serialization==1.9.0,oslo.service==0.9.0,oslo.utils==2.5.0,oslosphinx==3.2.0,oslotest==1.11.0,osprofiler==0.3.0,Paste==2.0.2,PasteDeploy==1.5.2,pbr==1.7.0,pep8==1.5.7,prettytable==0.7.2,psutil==1.2.1,psycopg2==2.6.1,pyasn1==0.1.8,pycadf==1.1.0,pycparser==2.14,pycrypto==2.6.1,pyflakes==0.8.1,Pygments==2.0.2,PyMySQL==0.6.6,pyOpenSSL==0.15.1,pysendfile==2.0.1,python-editor==0.4,python-keystoneclient==1.7.0,python-mimeparse==0.1.4,python-subunit==1.1.0,pytz==2015.4,PyYAML==3.11,qpid-python==0.26,repoze.lru==0.6,requests==2.7.0,retrying==1.3.3,Routes==2.2,semantic-version==2.4.2,simplegeneric==0.8.1,six==1.9.0,Sphinx==1.2.3,SQLAlchemy==1.0.8,sqlalchemy-migrate==0.10.0,sqlparse==0.1.16,stevedore==1.8.0,taskflow==1.20.0,Tempita==0.5.2,testrepository==0.0.20,testresources==0.2.7,testscenarios==0.5.0,testtools==1.8.0,traceback2==1.4.0,trollius==2.0,unittest2==1.1.0,WebOb==1.4.1,wheel==0.24.0,wrapt==1.10.5,WSME==0.8.0,xattr==0.7.8
py27 runtests: PYTHONHASHSEED='3954332983'
py27 runtests: commands[0] | lockutils-wrapper python setup.py testr --slowest 
--testr-args=
running testr
running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
${PYTHON:-python} -m subunit.run discover -t ./ ./glance/tests  
==
FAIL: glance.tests.functional.artifacts.test_artifacts.TestArtifacts.test_bad_update_property
tags: worker-0
--
registry.log: {{{
2015-09-19 11:00:31.913 2303 DEBUG glance.common.config [-] Loading glance-registry from /tmp/tmpWrfnCY/etc/registry-paste.ini load_paste_app glance/common/config.py:266
2015-09-19 11:00:42.558 2317 DEBUG glance.common.config [-] Loading glance-registry from /tmp/tmpWrfnCY/etc/registry-paste.ini load_paste_app glance/common/config.py:266
2015-09-19 11:00:53.144 2331 DEBUG glance.common.config [-] Loading glance-registry from /tmp/tmpWrfnCY/etc/registry-paste.ini load_paste_app glance/common/config.py:266
}}}

Traceback (most recent call last):
  File "glance/tests/functional/artifacts/test_artifacts.py", line 92, in setUp
    self.start_servers(**self.__dict__.copy())
  File "glance/tests/functional/artifacts/test_artifacts.py", line 181, in start_servers
    super(TestArtifacts, self).start_servers(**kwargs)
  File "glance/tests/functional/__init__.py", line 789, in start_servers
    **kwargs)
  File "glance/tests/functional/__init__.py", line 770, in start_with_retry
    self.assertTrue(launch_msg is None, launch_msg)
  File "/media/sf_oswork/repos/glance/.tox/py27/local/lib/python2.7/site-packages/unittest2/case.py", line 702, in assertTrue
    raise self.failureException(msg)
AssertionError: False is not true : Unexpected server launch status for: registry, 
strace:
==
FAIL: glance.tests.functional.artifacts.test_artifacts.TestArtifacts.test_create_artifact_bad_dependency_format
tags: worker-0
--
registry.log: {{{
2015-09-19 11:01:04.257 2345 DEBUG glance.common.config [-] Loading glance-registry 

[Yahoo-eng-team] [Bug 1489555] Re: nova rbd volume attach to running instance in Kilo is failing

2015-09-19 Thread venkat bokka
Hi Josh Durgin,

Thanks for your response; the issue is now resolved. It was caused by
AppArmor, which was blocking rbd support for KVM instances. It was fixed
by adding the lines below to /etc/apparmor.d/abstractions/libvirt-qemu and
restarting apparmor, libvirt and nova-compute:

# for rbd
/etc/ceph/ceph.conf r,
/usr/lib/x86_64-linux-gnu/qemu/* rmix,


** Changed in: nova
 Assignee: (unassigned) => venkat bokka (venkat-bnagu)

** Changed in: nova
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1489555

Title:
  nova rbd volume attach to running instance in Kilo is failing

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  I am using OpenStack Kilo on Debian, with Ceph Firefly (0.80.7), libvirt
  1.2.9 and qemu-kvm 2.3.

  I am able to create a cinder volume with ceph as the backend, but when I
  try to attach the volume to a running instance, it fails with the error:
  libvirtError: internal error: unable to execute QEMU command 'device_add': Property 'virtio-blk-device.drive' can't find value 'drive-virtio-disk2'

  Please find the attached nova compute log.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1489555/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497522] [NEW] DHCP agent fails when a class D subnet is created

2015-09-19 Thread shihanzhang
Public bug reported:

When we create a subnet, neutron-server only validates the subnet and
permits creating a class D (multicast) subnet, for example: neutron
subnet-create dhcp-test 224.0.0.0/8. The dhcp-agent will then fail, with
the error log below:
[-] Unable to enable dhcp for c07785a5-aa25-4939-b74f-481c1158ebcd.
Traceback (most recent call last):
  File "/opt/stack/neutron/neutron/agent/dhcp/agent.py", line 115, in call_driver
    getattr(driver, action)(**action_kwargs)
  File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 203, in enable
    interface_name = self.device_manager.setup(self.network)
  File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 1212, in setup
    self._set_default_route(network, interface_name)
  File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 1015, in _set_default_route
    device.route.add_gateway(subnet.gateway_ip)
  File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 584, in add_gateway
    self._as_root([ip_version], tuple(args))
  File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 280, in _as_root
    use_root_namespace=use_root_namespace)
  File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 80, in _as_root
    log_fail_as_error=self.log_fail_as_error)
  File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 89, in _execute
    log_fail_as_error=log_fail_as_error)
  File "/opt/stack/neutron/neutron/agent/linux/utils.py", line 160, in execute
    raise RuntimeError(m)
RuntimeError: 
Command: ['ip', 'netns', 'exec', u'qdhcp-c07785a5-aa25-4939-b74f-481c1158ebcd', 'ip', '-4', 'route', 'replace', 'default', 'via', u'224.0.0.1',

Exit code: 2
Stdin: 
Stdout: 
Stderr: RTNETLINK answers: Network is unreachable
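
A sketch of the missing check (using the stdlib ipaddress module for
illustration; neutron's actual validators differ): rejecting multicast
CIDRs at subnet-create time would avoid the agent failure above.

    import ipaddress

    def validate_subnet_cidr(cidr):
        net = ipaddress.ip_network(cidr)
        if net.is_multicast:  # class D, i.e. 224.0.0.0/4
            raise ValueError('%s is a multicast CIDR and is not usable' % cidr)
        return net

    validate_subnet_cidr(u'224.0.0.0/8')  # raises ValueError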

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1497522

Title:
  DHCP agent fails when a class D subnet is created

Status in neutron:
  New

Bug description:
  When we create a subnet, neutron-server only validates the subnet and
  permits creating a class D (multicast) subnet, for example: neutron
  subnet-create dhcp-test 224.0.0.0/8. The dhcp-agent will then fail, with
  the error log below:
  [-] Unable to enable dhcp for c07785a5-aa25-4939-b74f-481c1158ebcd.
  Traceback (most recent call last):
    File "/opt/stack/neutron/neutron/agent/dhcp/agent.py", line 115, in call_driver
      getattr(driver, action)(**action_kwargs)
    File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 203, in enable
      interface_name = self.device_manager.setup(self.network)
    File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 1212, in setup
      self._set_default_route(network, interface_name)
    File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 1015, in _set_default_route
      device.route.add_gateway(subnet.gateway_ip)
    File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 584, in add_gateway
      self._as_root([ip_version], tuple(args))
    File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 280, in _as_root
      use_root_namespace=use_root_namespace)
    File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 80, in _as_root
      log_fail_as_error=self.log_fail_as_error)
    File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 89, in _execute
      log_fail_as_error=log_fail_as_error)
    File "/opt/stack/neutron/neutron/agent/linux/utils.py", line 160, in execute
      raise RuntimeError(m)
  RuntimeError: 
  Command: ['ip', 'netns', 'exec', u'qdhcp-c07785a5-aa25-4939-b74f-481c1158ebcd', 'ip', '-4', 'route', 'replace', 'default', 'via', u'224.0.0.1',

  Exit code: 2
  Stdin: 
  Stdout: 
  Stderr: RTNETLINK answers: Network is unreachable

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1497522/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497570] [NEW] the GreenThread should be created when admin_state is up

2015-09-19 Thread huangpengtaohw
Public bug reported:

The greenthread that configures DHCP should be created only when
network.admin_state_up is True.

The problem is that the sync_state function creates the greenthread to
configure DHCP regardless, and the thread ends immediately when
network.admin_state_up is False; see the sketch below.
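
A sketch of the suggested change (names assumed; the real agent code
differs): filter on admin_state_up before spawning, instead of letting
each greenthread exit immediately.

    import eventlet

    def sync_state(networks, configure_dhcp):
        for network in networks:
            if not network.admin_state_up:
                continue  # skip: no greenthread is spawned just to exit
            eventlet.spawn(configure_dhcp, network)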

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1497570

Title:
  the GreenThread should be created when admin_state is up

Status in neutron:
  New

Bug description:
  The greenthread that configures DHCP should be created only when
  network.admin_state_up is True.

  The problem is that the sync_state function creates the greenthread to
  configure DHCP regardless, and the thread ends immediately when
  network.admin_state_up is False.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1497570/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1276694] Re: Openstack services should support SIGHUP signal

2015-09-19 Thread Bogdan Dobrelya
Added oslo.service according to this thread:
http://lists.openstack.org/pipermail/openstack-dev/2015-September/074745.html
("oslo.service is responsible for catching/handling signals").

** Also affects: oslo.service
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1276694

Title:
  Openstack services should support SIGHUP signal

Status in Glance:
  Fix Released
Status in heat:
  Fix Released
Status in Keystone:
  Confirmed
Status in murano:
  Confirmed
Status in neutron:
  In Progress
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo-incubator:
  Invalid
Status in oslo.config:
  New
Status in oslo.log:
  New
Status in oslo.service:
  New
Status in Sahara:
  In Progress

Bug description:
  1) In order to manage the unlinked-but-open (lsof +L1) log file
  descriptors more effectively without restarting the services, the SIGHUP
  signal should be accepted by every OpenStack service.

  That would allow, e.g., logrotate jobs to gracefully HUP services after
  their log files are rotated. The only option we have for now is to force
  a service restart, which is a poor option from the point of view of
  continuous service availability.

  Note: according to http://en.wikipedia.org/wiki/Unix_signal, on SIGHUP
  "... Many daemons will reload their configuration files and reopen their
  logfiles instead of exiting when receiving this signal."
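
  As a sketch of that desired behaviour (generic Python, not oslo code), a
  daemon can reopen its log files on SIGHUP instead of exiting:

      import signal

      def install_sighup_handler(reopen_logs):
          def handler(signum, frame):
              reopen_logs()  # reopen (possibly rotated) log files; don't exit
          if hasattr(signal, 'SIGHUP'):  # SIGHUP does not exist on Windows
              signal.signal(signal.SIGHUP, handler)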

  Currently Murano and Glance are out of sync with Oslo SIGHUP support.

  The following issue also exists for some services of OpenStack projects
  with synced SIGHUP support:
  2)
  heat-api-cfn, heat-api, heat-api-cloudwatch, keystone: it looks like the
  synced code is never executed, so SIGHUP is not supported for them. Here
  is a simple test scenario:
  2.1) modify /site-packages//openstack/common/service.py:
  def _sighup_supported():
  +    LOG.warning("SIGHUP is supported: {0}".format(hasattr(signal, 'SIGHUP')))
       return hasattr(signal, 'SIGHUP')
  2.2) restart service foo-service-name and check the logs for "SIGHUP is
  supported"; if the service really supports it, the appropriate messages
  will be present in the logs.
  2.3) issue kill -HUP  and check the logs for "SIGHUP is supported" and
  "Caught SIGHUP"; if the service really supports it, the appropriate
  messages will be present in the logs. Besides that, the service should
  remain started and its main thread PID should not change.

  e.g.
  2.a) heat-engine supports HUPing:
  #service openstack-heat-engine restart
  <132>Apr 11 14:03:48 node-3 heat-heat.openstack.common.service WARNING: 
SIGHUP is supported: True

  2.b) But heat-api doesn't know how to HUP:
  #service openstack-heat-api restart
  <134>Apr 11 14:06:22 node-3 heat-heat.api INFO: Starting Heat ReST API on 
0.0.0.0:8004
  <134>Apr 11 14:06:22 node-3 heat-eventlet.wsgi.server INFO: Starting single 
process server

  2.c) HUPing heat-engine is OK
  #pid=$(cat /var/run/heat/openstack-heat-engine.pid); kill -HUP $pid && echo 
$pid
  16512
  <134>Apr 11 14:12:15 node-3 heat-heat.openstack.common.service INFO: Caught 
SIGHUP, exiting
  <132>Apr 11 14:12:15 node-3 heat-heat.openstack.common.service WARNING: 
SIGHUP is supported: True
  <134>Apr 11 14:12:15 node-3 heat-heat.openstack.common.rpc.common INFO: 
Connected to AMQP server on ...
  service openstack-heat-engine status
  openstack-heat-engine (pid  16512) is running...

  2.d) HUPed heat-api is dead now ;(
  #kill -HUP $(cat /var/run/heat/openstack-heat-api.pid)
  (no new logs)
  # service openstack-heat-api status
  openstack-heat-api dead but pid file exists

  3)
  nova-cert, nova-novncproxy, nova-objectstore, nova-consoleauth,
  nova-scheduler: unlike case 2, after the kill -HUP  command is issued
  there is a "Caught SIGHUP" message in the logs, BUT the associated
  service dies anyway. Instead, the service should remain started and its
  main thread PID should not change (similar to the 2.c case).

  So it looks like there is still a lot to be done to ensure POSIX
  signal-handling compliance in OpenStack :-)

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1276694/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1276694] Re: Openstack services should support SIGHUP signal

2015-09-19 Thread Bogdan Dobrelya
Added oslo.config as there is at least one patch addressing this issue:
https://review.openstack.org/#/c/213062/

** Also affects: oslo.config
   Importance: Undecided
   Status: New

** Also affects: oslo.log
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1276694

Title:
  Openstack services should support SIGHUP signal

Status in Glance:
  Fix Released
Status in heat:
  Fix Released
Status in Keystone:
  Confirmed
Status in murano:
  Confirmed
Status in neutron:
  In Progress
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo-incubator:
  Invalid
Status in oslo.config:
  New
Status in oslo.log:
  New
Status in Sahara:
  In Progress

Bug description:
  1) In order to manage the unlinked-but-open (lsof +L1) log file
  descriptors more effectively without restarting the services, the SIGHUP
  signal should be accepted by every OpenStack service.

  That would allow, e.g., logrotate jobs to gracefully HUP services after
  their log files are rotated. The only option we have for now is to force
  a service restart, which is a poor option from the point of view of
  continuous service availability.

  Note: according to http://en.wikipedia.org/wiki/Unix_signal, on SIGHUP
  "... Many daemons will reload their configuration files and reopen their
  logfiles instead of exiting when receiving this signal."

  Currently Murano and Glance are out of sync with Oslo SIGHUP support.

  The following issue also exists for some services of OpenStack projects
  with synced SIGHUP support:
  2)
  heat-api-cfn, heat-api, heat-api-cloudwatch, keystone: it looks like the
  synced code is never executed, so SIGHUP is not supported for them. Here
  is a simple test scenario:
  2.1) modify /site-packages//openstack/common/service.py:
  def _sighup_supported():
  +    LOG.warning("SIGHUP is supported: {0}".format(hasattr(signal, 'SIGHUP')))
       return hasattr(signal, 'SIGHUP')
  2.2) restart service foo-service-name and check the logs for "SIGHUP is
  supported"; if the service really supports it, the appropriate messages
  will be present in the logs.
  2.3) issue kill -HUP  and check the logs for "SIGHUP is supported" and
  "Caught SIGHUP"; if the service really supports it, the appropriate
  messages will be present in the logs. Besides that, the service should
  remain started and its main thread PID should not change.

  e.g.
  2.a) heat-engine supports HUPing:
  #service openstack-heat-engine restart
  <132>Apr 11 14:03:48 node-3 heat-heat.openstack.common.service WARNING: 
SIGHUP is supported: True

  2.b) But heat-api doesn't know how to HUP:
  #service openstack-heat-api restart
  <134>Apr 11 14:06:22 node-3 heat-heat.api INFO: Starting Heat ReST API on 
0.0.0.0:8004
  <134>Apr 11 14:06:22 node-3 heat-eventlet.wsgi.server INFO: Starting single 
process server

  2.c) HUPing heat-engine is OK
  #pid=$(cat /var/run/heat/openstack-heat-engine.pid); kill -HUP $pid && echo 
$pid
  16512
  <134>Apr 11 14:12:15 node-3 heat-heat.openstack.common.service INFO: Caught 
SIGHUP, exiting
  <132>Apr 11 14:12:15 node-3 heat-heat.openstack.common.service WARNING: 
SIGHUP is supported: True
  <134>Apr 11 14:12:15 node-3 heat-heat.openstack.common.rpc.common INFO: 
Connected to AMQP server on ...
  service openstack-heat-engine status
  openstack-heat-engine (pid  16512) is running...

  2.d) HUPed heat-api is dead now ;(
  #kill -HUP $(cat /var/run/heat/openstack-heat-api.pid)
  (no new logs)
  # service openstack-heat-api status
  openstack-heat-api dead but pid file exists

  3)
  nova-cert, nova-novncproxy, nova-objectstore, nova-consoleauth,
  nova-scheduler: unlike case 2, after the kill -HUP  command is issued
  there is a "Caught SIGHUP" message in the logs, BUT the associated
  service dies anyway. Instead, the service should remain started and its
  main thread PID should not change (similar to the 2.c case).

  So it looks like there is still a lot to be done to ensure POSIX
  signal-handling compliance in OpenStack :-)

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1276694/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp