[Yahoo-eng-team] [Bug 1497508] [NEW] functional tests fail due to ping timeout on servers start

2015-09-18 Thread Venkatesh Sampath
Public bug reported:

Whenever I try running ‘tox -epy27’, the functional tests consistently
fail due to a ping timeout while trying to start the servers (glance-api,
glance-registry, etc.) needed for running the tests.

I am running the tests from a VM with 8 GB of RAM and a 2.7 GHz Intel Core
i7 processor.

I could never get the functional tests to pass until I bumped up the
timeout from its current value of 10 seconds.
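For illustration, here is a minimal sketch (not glance's actual test harness)
of the kind of wait-for-server loop whose timeout had to be raised; the
SERVER_LAUNCH_TIMEOUT environment override and the 10-second default are
assumptions for this example only:

    import os
    import socket
    import time


    def wait_for_server(host, port, timeout=None):
        """Return True once host:port accepts a TCP connection, False on timeout."""
        if timeout is None:
            # hypothetical override; the real harness hard-codes its own default
            timeout = float(os.environ.get('SERVER_LAUNCH_TIMEOUT', 10))
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                sock = socket.create_connection((host, port), timeout=1)
                sock.close()
                return True
            except socket.error:
                time.sleep(0.5)
        return False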

Below is a snippet of the exception stack trace captured from the console
output.

CONSOLE OUTPUT WITH ERROR STACKTRACE:

venkatesh@vsbox:~/workspace/sf_oswork/repos/glance$ tox -epy27
py27 develop-inst-noop: /media/sf_oswork/repos/glance
py27 installed: 
aioeventlet==0.4,alembic==0.8.2,amqp==1.4.6,anyjson==0.3.3,appdirs==1.4.0,automaton==0.7.0,Babel==2.0,cachetools==1.1.1,castellan==0.2.1,cffi==1.2.1,contextlib2==0.4.0,coverage==3.7.1,cryptography==1.0.1,debtcollector==0.8.0,decorator==4.0.2,docutils==0.12,enum34==1.0.4,eventlet==0.17.4,extras==0.0.3,fasteners==0.13.0,fixtures==1.3.1,flake8==2.2.4,funcsigs==0.4,functools32==3.2.3.post2,futures==3.0.3,futurist==0.5.0,-e
 
git+...@github.com:openstack/glance.git@cef71f71ded895817eb245cd6aa5519293443d71#egg=glance-gerrit_master,glance-store==0.9.1,greenlet==0.4.9,hacking==0.10.2,httplib2==0.9.1,idna==2.0,ipaddress==1.0.14,iso8601==0.1.10,Jinja2==2.8,jsonschema==2.5.1,keystonemiddleware==2.2.0,kombu==3.0.26,linecache2==1.0.0,Mako==1.0.2,MarkupSafe==0.23,mccabe==0.2.1,mock==1.3.0,monotonic==0.3,mox3==0.10.0,msgpack-python==0.4.6,netaddr==0.7.18,netifaces==0.10.4,networkx==1.10,os-client-config==1.6.3,oslo.concurrency==2.6.0,oslo.config==2.4.0,oslo.context==0.6.0,oslo.db==2.5.0,oslo.i18n==2.6.0,oslo.log==1.11.0,oslo.messaging==2.5.0,oslo.middleware==2.8.0,oslo.policy==0.11.0,oslo.serialization==1.9.0,oslo.service==0.9.0,oslo.utils==2.5.0,oslosphinx==3.2.0,oslotest==1.11.0,osprofiler==0.3.0,Paste==2.0.2,PasteDeploy==1.5.2,pbr==1.7.0,pep8==1.5.7,prettytable==0.7.2,psutil==1.2.1,psycopg2==2.6.1,pyasn1==0.1.8,pycadf==1.1.0,pycparser==2.14,pycrypto==2.6.1,pyflakes==0.8.1,Pygments==2.0.2,PyMySQL==0.6.6,pyOpenSSL==0.15.1,pysendfile==2.0.1,python-editor==0.4,python-keystoneclient==1.7.0,python-mimeparse==0.1.4,python-subunit==1.1.0,pytz==2015.4,PyYAML==3.11,qpid-python==0.26,repoze.lru==0.6,requests==2.7.0,retrying==1.3.3,Routes==2.2,semantic-version==2.4.2,simplegeneric==0.8.1,six==1.9.0,Sphinx==1.2.3,SQLAlchemy==1.0.8,sqlalchemy-migrate==0.10.0,sqlparse==0.1.16,stevedore==1.8.0,taskflow==1.20.0,Tempita==0.5.2,testrepository==0.0.20,testresources==0.2.7,testscenarios==0.5.0,testtools==1.8.0,traceback2==1.4.0,trollius==2.0,unittest2==1.1.0,WebOb==1.4.1,wheel==0.24.0,wrapt==1.10.5,WSME==0.8.0,xattr==0.7.8
py27 runtests: PYTHONHASHSEED='3954332983'
py27 runtests: commands[0] | lockutils-wrapper python setup.py testr --slowest 
--testr-args=
running testr
running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
${PYTHON:-python} -m subunit.run discover -t ./ ./glance/tests  
==
FAIL: 
glance.tests.functional.artifacts.test_artifacts.TestArtifacts.test_bad_update_property
tags: worker-0
--
registry.log: {{{
2015-09-19 11:00:31.913 2303 DEBUG glance.common.config [-] Loading 
glance-registry from /tmp/tmpWrfnCY/etc/registry-paste.ini load_paste_app 
glance/common/config.py:266
2015-09-19 11:00:42.558 2317 DEBUG glance.common.config [-] Loading 
glance-registry from /tmp/tmpWrfnCY/etc/registry-paste.ini load_paste_app 
glance/common/config.py:266
2015-09-19 11:00:53.144 2331 DEBUG glance.common.config [-] Loading 
glance-registry from /tmp/tmpWrfnCY/etc/registry-paste.ini load_paste_app 
glance/common/config.py:266
}}}

Traceback (most recent call last):
  File "glance/tests/functional/artifacts/test_artifacts.py", line 92, in setUp
self.start_servers(**self.__dict__.copy())
  File "glance/tests/functional/artifacts/test_artifacts.py", line 181, in 
start_servers
super(TestArtifacts, self).start_servers(**kwargs)
  File "glance/tests/functional/__init__.py", line 789, in start_servers
**kwargs)
  File "glance/tests/functional/__init__.py", line 770, in start_with_retry
self.assertTrue(launch_msg is None, launch_msg)
  File 
"/media/sf_oswork/repos/glance/.tox/py27/local/lib/python2.7/site-packages/unittest2/case.py",
 line 702, in assertTrue
raise self.failureException(msg)
AssertionError: False is not true : Unexpected server launch status for: 
registry, 
strace:
==
FAIL: 
glance.tests.functional.artifacts.test_artifacts.TestArtifacts.test_create_artifact_bad_dependency_format
tags: worker-0
--
registry.log: {{{
2015-09-19 11:01:04.257 2345 DEBUG glance.common.config [-] Loading 
glance-registry 

[Yahoo-eng-team] [Bug 1472936] Re: External flat network show out of the owned tenant.

2015-09-18 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1472936

Title:
  External flat network show out of the owned tenant.

Status in neutron:
  Expired

Bug description:
  Hi guys!

  I found a problem in Neutron. When I create a new network, set it as
  external, and the tenant is e.g. demo, then I see the network from other
  (non-admin) tenants as well. I can get a floating IP address, router, etc.

  It is a problem because I can't assign an external network to a specific
  tenant.

  Here is an example:

  ext-pub1 is an external network owned by the demo tenant. If I list the
  networks in the demo2 tenant, I see the ext-pub1 network as well :(

  root@controller01:~# source demo-openrc.sh
  root@controller01:~# neutron net-list
  
  +--------------------------------------+-----------+----------------------------------------------------------+
  | id                                   | name      | subnets                                                  |
  +--------------------------------------+-----------+----------------------------------------------------------+
  | 0c034b21-4e79-45c0-8f63-48b58dbd29f9 | demo-net  | 24ad19d9-7967-42da-8457-bde999558bca 10.0.2.0/24         |
  | 3a647d2a-3386-4104-8fbc-0deacac5f0f2 | demo-net2 | ffe02388-45d5-431d-967d-625344410081 10.0.3.0/24         |
  | 807c842e-2b99-40cc-bbcc-3d74990de142 | ext-net   | d6d52683-7dba-432d-adf6-7c582ca0f527                     |
  | 938c2abb-dfcb-4627-8626-3618c189d4de | ext-pub1  | 791b8764-a5ab-4b16-a7b0-2b883c4f0e1e 193.225.212.128/27  |
  +--------------------------------------+-----------+----------------------------------------------------------+
  root@controller01:~# source demo2-openrc.sh
  root@controller01:~# neutron net-list
  
  +--------------------------------------+-----------+--------------------------------------------------+
  | id                                   | name      | subnets                                          |
  +--------------------------------------+-----------+--------------------------------------------------+
  | 7b362a28-4bbd-42a3-b99f-81861905b136 | demo2-net | abd5d600-2f54-4a27-8d6b-23d690c18e3a 10.0.4.0/24 |
  | 807c842e-2b99-40cc-bbcc-3d74990de142 | ext-net   | d6d52683-7dba-432d-adf6-7c582ca0f527             |
  | 938c2abb-dfcb-4627-8626-3618c189d4de | ext-pub1  | 791b8764-a5ab-4b16-a7b0-2b883c4f0e1e             |
  +--------------------------------------+-----------+--------------------------------------------------+

  The database seems to be okay:

  root@controller01:~# openstack project list
  +----------------------------------+---------+
  | ID                               | Name    |
  +----------------------------------+---------+
  | 0d693b62fb2941a381faf68f348f68f8 | service |
  | 1437d72da6e64b2785e04c3e2e73d6a7 | demo2   |
  | 15371d3015b24005a33536a95a750a62 | admin   |
  | 60800c67e5a94d9c90c462283ea3ad0a | demo    |
  +----------------------------------+---------+

  Database:

  MariaDB [neutron]> select * from networks;
  
  +----------------------------------+--------------------------------------+----------------------------------------------------+--------+----------------+--------+------+------------------+
  | tenant_id                        | id                                   | name                                               | status | admin_state_up | shared | mtu  | vlan_transparent |
  +----------------------------------+--------------------------------------+----------------------------------------------------+--------+----------------+--------+------+------------------+
  | 60800c67e5a94d9c90c462283ea3ad0a | 0c034b21-4e79-45c0-8f63-48b58dbd29f9 | demo-net                                           | ACTIVE |              1 |      0 |    0 | NULL             |
  | 60800c67e5a94d9c90c462283ea3ad0a | 3a647d2a-3386-4104-8fbc-0deacac5f0f2 | demo-net2                                          | ACTIVE |              1 |      0 |    0 | NULL             |
  |                                  | 57646215-ad08-465e-9b14-bc00023bd685 | HA network tenant 60800c67e5a94d9c90c462283ea3ad0a | ACTIVE |              1 |      0 |    0 | NULL             |
  | 15371d3015b24005a33536a95a750a62 | 5a0db313-a44e-41ea-8d45-e6742f0ab608 | admin-net                                          | ACTIVE |              1 |      0 |    0 | NULL             |
  | 1437d72da6e64b2785e04c3e2e73d6a7 | 7b362a28-4bbd-42a3-b99f-81861905b136 | demo2-net                                          | ACTIVE |              1 |      0 |    0 | NULL             |
  | 15371d3015b24005a33536a95a750a62 | 807c842e-2b99-40cc-bbcc-3d74990de142 | ext-net                                            | ACTIVE |              1 |      0 |    0 | NULL             |
  | 60800c67e5a94d9c90

[Yahoo-eng-team] [Bug 1469604] Re: LbbasV2-Session persitence HTTP_COOKIE-no cookie sent to client

2015-09-18 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1469604

Title:
  LbbasV2-Session persitence HTTP_COOKIE-no cookie sent to client

Status in neutron:
  Expired

Bug description:
  We configured LBaaS v2. The LB and listener are created. The next step is
  the pool and members:

  neutron lbaas-pool-create --lb-algorithm ROUND_ROBIN --listener
  5dd91024-148e-4d80-842f-3122725d0164 --protocol HTTP

  neutron lbaas-member-create 6b7b5daa-f773-46ca-8045-bb1f8ae8fcec
  --protocol-port 80 --subnet 67897b9a-e5dd-405a-80db-7e36ead62c27
  --address 192.168.1.3

  neutron lbaas-member-create 6b7b5daa-f773-46ca-8045-bb1f8ae8fcec
  --protocol-port 80 --subnet 67897b9a-e5dd-405a-80db-7e36ead62c27
  --address 192.168.1.4

  The LB works properly.
  After updating the pool to use session persistence with HTTP_COOKIE, we
  captured traffic on the LB interface and saw that no cookie is sent towards
  the clients.

  neutron lbaas-pool-update 6b7b5daa-f773-46ca-8045-bb1f8ae8fcec
  --session-persistence type=dict type=HTTP_COOKIE

  ___no session persistence___
  .}._.Z~(HTTP/1.1 200 OK
  Date: Mon, 29 Jun 2015 09:47:09 GMT
  Server: Apache/2.2.15 (Red Hat)
  Last-Modified: Mon, 29 Jun 2015 09:45:01 GMT
  ETag: "23196-b-519a4f18f88f8"
  Accept-Ranges: bytes
  Content-Length: 11
  Content-Type: text/html; charset=UTF-8

  ___With session persistence configured
  ~...\.cHTTP/1.1 200 OK
  Date: Mon, 29 Jun 2015 09:48:48 GMT
  Server: Apache/2.2.15 (Red Hat)
  Last-Modified: Mon, 29 Jun 2015 09:45:01 GMT
  ETag: "23196-b-519a4f18f88f8"
  Accept-Ranges: bytes
  Content-Length: 11
  Content-Type: text/html; charset=UTF-8

  NO ADDITIONAL HEADER ADDED

  LOGS:
  2015-06-29 08:48:02.842 18502 ERROR neutron_lbaas.agent.agent_manager 
[req-1a780c1f-9226-4515-a50d-dc8c0d63cb80 ] Create pool 
6b7b5daa-f773-46ca-8045-bb1f8ae8fcec failed on device driver haproxy_ns
  2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager 
Traceback (most recent call last):
  2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/agent/agent_manager.py", line 
339, in update_pool
  2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager 
driver.pool.update(old_pool, pool)
  2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/haproxy/namespace_driver.py",
 line 416, in update
  2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager 
self.driver.loadbalancer.refresh(new_pool.listener.loadbalancer)
  2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/haproxy/namespace_driver.py",
 line 364, in refresh
  2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager if 
(not self.driver.deploy_instance(loadbalancer) and
  2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 445, in 
inner
  2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager 
return f(*args, **kwargs)
  2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/haproxy/namespace_driver.py",
 line 172, in deploy_instance
  2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager 
self.update(loadbalancer)
  2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/haproxy/namespace_driver.py",
 line 181, in update
  2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager 
self._spawn(loadbalancer, extra_args)
  2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/drivers/haproxy/namespace_driver.py",
 line 347, in _spawn
  2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager 
haproxy_base_dir)
  2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/jinja_cfg.py",
 line 89, in save_config
  2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager 
haproxy_base_dir)
  2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager   File 
"/usr/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/drivers/haproxy/jinja_cfg.py",
 line 221, in render_loadbalancer_obj
  2015-06-29 08:48:02.842 18502 TRACE neutron_lbaas.agent.agent_manager 
loadbalancer = _transform_loadbalancer(loadbalancer, haproxy_base_dir)
  2015-06-29 08:48:02.842 185

[Yahoo-eng-team] [Bug 1497485] [NEW] Nova VM goes into an error State because VM xml is forced to have """" which is not supported with KVM libvirt 1.2.2

2015-09-18 Thread Prinika
Public bug reported:

Testing on openstack master:
Libvirt version 1.2.2
Hypervisor: KVM

On booting a VM, the VM goes into an error state with the following error log:

2015-09-18 01:30:29.723 ERROR nova.compute.manager 
[req-c5b00bd7-943b-44cd-847b-064286501d6a admin admin] [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] Instance failed to spaw
n
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] Traceback (most recent call last):
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] File 
"/opt/stack/nova/nova/compute/manager.py", line 2152, in _build_resourc
es
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] yield resources
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] File 
"/opt/stack/nova/nova/compute/manager.py", line 2006, in _build_and_run
_instance
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] block_device_info=block_device_info)
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 2451, in spawn
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] block_device_info=block_device_info)
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 4522, in _create_do
main_and_network
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] xml, pause=pause, power_on=power_on)
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 4452, in _create_do
main
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] guest.launch(pause=pause)
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] File 
"/opt/stack/nova/nova/virt/libvirt/guest.py", line 141, in launch
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] self._encoded_xml, errors='ignore')
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 1
95, in __exit__
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] six.reraise(self.type_, self.value, 
self.tb)
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] File 
"/opt/stack/nova/nova/virt/libvirt/guest.py", line 136, in launch
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] return self._domain.createWithFlags(flags)
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 183, i
n doit
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] result = proxy_call(self._autowrap, f, 
*args, **kwargs)
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 141, i
n proxy_call
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] rv = execute(f, *args, **kwargs)
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 122, i
n execute
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] six.reraise(c, e, tb)
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 80, in
 tworker
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] rv = meth(*args, **kwargs)
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] File 
"/usr/local/lib/python2.7/dist-packages/libvirt.py", line 900, in creat
eWithFlags
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] if ret == -1: raise libvirtError 
('virDomainCreateWithFlags() failed', dom=self)
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] libvirtError: unsupported configuration: 
scripts are not supported on interfaces of type bridge
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5

[Yahoo-eng-team] [Bug 1497484] [NEW] image-create does not respect the force_raw_images setting

2015-09-18 Thread Nicolas Simonds
Public bug reported:

Instance snapshots of instances sourced from, e.g., QCOW2 images will be
created in the image service as "qcow2" and then switched to "raw" in an
update step.
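
For illustration only (this is not nova's code), the behaviour the report
expects could be described as choosing the snapshot's registered format up
front rather than fixing it up afterwards:

    def snapshot_disk_format(source_disk_format, force_raw_images):
        """Pick the disk_format to register for an instance snapshot.

        Simplified sketch: with force_raw_images enabled, downloaded images
        are converted to raw, so the snapshot's backing store is raw
        regardless of the source image's format.
        """
        return 'raw' if force_raw_images else source_disk_format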

Use case:

We decided to drop QCOW2 support from certain product configurations, as
force_raw_images is enabled by default, and the conversion overhead made
for a sub-wonderful customer experience.

After dropping QCOW2 from the acceptable list of image formats from
Glance, clients could no longer make instance snapshots from instances
that were spawned from QCOW2 images, despite the fact that the backing
store was not QCOW2.

Steps to Reproduce:

1. Upload a QCOW2 image into Glance
2. Update Nova/Glance configs to disable QCOW2 images and enable 
force_raw_images
3. Boot an instance against the QCOW2 image
4. Create a snapshot of the instance

Expected behavior:

A snapshot of the instance

Actual results:
ERROR (BadRequest): 
 
  400 Bad Request
 
 
  400 Bad Request
  Invalid disk format 'qcow2' for image.

 
 (HTTP 400) (HTTP 400) (Request-ID: 
req-8e8d8d51-8e0c-4033-bb84-774d2ed1f90a)

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1497484

Title:
  image-create does not respect the force_raw_images setting

Status in OpenStack Compute (nova):
  New

Bug description:
  Instance snapshots of instances sourced from, e.g., QCOW2 images will
  be created in the image service as "qcow2" and then switched to "raw"
  in an update step.

  Use case:

  We decided to drop QCOW2 support from certain product configurations,
  as force_raw_images is enabled by default, and the conversion overhead
  made for a sub-wonderful customer experience.

  After dropping QCOW2 from the acceptable list of image formats from
  Glance, clients could no longer make instance snapshots from instances
  that were spawned from QCOW2 images, despite the fact that the backing
  store was not QCOW2.

  Steps to Reproduce:

  1. Upload a QCOW2 image into Glance
  2. Update Nova/Glance configs to disable QCOW2 images and enable 
force_raw_images
  3. Boot an instance against the QCOW2 image
  4. Create a snapshot of the instance

  Expected behavior:

  A snapshot of the instance

  Actual results:
  ERROR (BadRequest): 
   
400 Bad Request
   
   
400 Bad Request
Invalid disk format 'qcow2' for image.
  
   
   (HTTP 400) (HTTP 400) (Request-ID: 
req-8e8d8d51-8e0c-4033-bb84-774d2ed1f90a)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1497484/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497485] [NEW] Nova VM goes into an error State because VM xml is forced to have """" which is not supported with KVM libvirt 1.2.2

2015-09-18 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Testing on openstack master:
Libvirt version 1.2.2
Hypervisor: KVM

On booting a VM, the VM goes into an error state with the following error log:

2015-09-18 01:30:29.723 ERROR nova.compute.manager 
[req-c5b00bd7-943b-44cd-847b-064286501d6a admin admin] [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] Instance failed to spaw
n
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] Traceback (most recent call last):
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] File 
"/opt/stack/nova/nova/compute/manager.py", line 2152, in _build_resourc
es
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] yield resources
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] File 
"/opt/stack/nova/nova/compute/manager.py", line 2006, in _build_and_run
_instance
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] block_device_info=block_device_info)
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 2451, in spawn
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] block_device_info=block_device_info)
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 4522, in _create_do
main_and_network
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] xml, pause=pause, power_on=power_on)
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 4452, in _create_do
main
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] guest.launch(pause=pause)
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] File 
"/opt/stack/nova/nova/virt/libvirt/guest.py", line 141, in launch
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] self._encoded_xml, errors='ignore')
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 1
95, in __exit__
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] six.reraise(self.type_, self.value, 
self.tb)
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] File 
"/opt/stack/nova/nova/virt/libvirt/guest.py", line 136, in launch
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] return self._domain.createWithFlags(flags)
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 183, i
n doit
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] result = proxy_call(self._autowrap, f, 
*args, **kwargs)
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 141, i
n proxy_call
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] rv = execute(f, *args, **kwargs)
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 122, i
n execute
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] six.reraise(c, e, tb)
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 80, in
 tworker
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] rv = meth(*args, **kwargs)
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] File 
"/usr/local/lib/python2.7/dist-packages/libvirt.py", line 900, in creat
eWithFlags
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] if ret == -1: raise libvirtError 
('virDomainCreateWithFlags() failed', dom=self)
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-40f9-a504-a17b79d279a5] libvirtError: unsupported configuration: 
scripts are not supported on interfaces of type bridge
2015-09-18 01:30:29.723 TRACE nova.compute.manager [instance: 
6d9297cc-d97f-4

[Yahoo-eng-team] [Bug 1497471] [NEW] LBaaS V2 api tests too slow with octavia

2015-09-18 Thread Brandon Logan
Public bug reported:

The neutron lbaas v2 tests are too slow to run when using the octavia
driver.   This is because octavia uses a nova instance to host the
haproxy process.  However, with nova booting VMs and the hosts not
having vt-x enabled, it takes a long time for the VM to boot.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1497471

Title:
  LBaaS V2 api tests too slow with octavia

Status in neutron:
  New

Bug description:
  The neutron lbaas v2 tests are too slow to run when using the octavia
  driver.   This is because octavia uses a nova instance to host the
  haproxy process.  However, with nova booting VMs and the hosts not
  having vt-x enabled, it takes a long time for the VM to boot.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1497471/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497461] [NEW] Fernet tokens fail for some users with LDAP identity backend

2015-09-18 Thread Eric Brown
Public bug reported:

The following bug fix addressed most situations when using the Fernet + LDAP
identity backend:
https://bugs.launchpad.net/keystone/+bug/1459382

However, some users have trouble, resulting in a UserNotFound exception in the 
logs with a UUID.  Here's the error:
2015-09-18 20:04:47.313 12979 WARNING keystone.common.wsgi [-] Could not find 
user: 457269632042726f776e203732363230

So the issue is this.  The user DN query + filter will return my user as:
   CN=Eric Brown 
72620,OU=PAO_Users,OU=PaloAlto_California_USA,OU=NALA,OU=SITES,OU=Engineering,DC=vmware,DC=com

Therefore, I have to use CN as the user id attribute.  My user id would
therefore be "Eric Brown 72620".  The fernet token_formatters.py
attempts to convert this user id into a UUID.  And in my case that is
successful.  It results in UUID of 457269632042726f776e203732363230.  Of
course, a user id of 457269632042726f776e203732363230 doesn't exist in
LDAP, so as a result I get a UserNotFound.  So I don't understand why
the convert_uuid_bytes_to_hex is ever used in the case of LDAP backend.

For other users, the token_formatters.convert_uuid_bytes_to_hex() raises
a ValueError and everything works.  Here's an example that illustrates
the behavior

>>> import uuid
>>> uuid_obj = uuid.UUID(bytes='Eric Brown 72620')
>>> uuid_obj.hex
'457269632042726f776e203732363230'

>>> import uuid
>>> uuid_obj = uuid.UUID(bytes='Your Mama')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/uuid.py", line 144, in __init__
raise ValueError('bytes is not a 16-char string')
ValueError: bytes is not a 16-char string
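
Wrapping both branches in one helper makes the asymmetry explicit (a sketch
only, mirroring the examples above, not keystone's token_formatters code):

    import uuid


    def attempt_shrink(user_id):
        # mimics the formatter's attempt to pack a user id into a 16-byte UUID
        try:
            return uuid.UUID(bytes=user_id).hex   # any 16-byte id "succeeds"
        except ValueError:
            return user_id                        # other lengths pass through unchanged


    print(attempt_shrink(b'Eric Brown 72620'))  # '457269632042726f776e203732363230'
    print(attempt_shrink(b'Your Mama'))         # unchanged, so the lookup still works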


Here's the complete traceback (after adding some additional debug):

2015-09-18 20:04:47.312 12979 WARNING keystone.common.wsgi [-] EWB Traceback 
(most recent call last):
  File "/usr/lib/python2.7/dist-packages/keystone/common/wsgi.py", line 449, in 
__call__
response = self.process_request(request)
  File "/usr/lib/python2.7/dist-packages/keystone/middleware/core.py", line 
238, in process_request
auth_context = self._build_auth_context(request)
  File "/usr/lib/python2.7/dist-packages/keystone/middleware/core.py", line 
218, in _build_auth_context
token_data=self.token_provider_api.validate_token(token_id))
  File "/usr/lib/python2.7/dist-packages/keystone/token/provider.py", line 198, 
in validate_token
token = self._validate_token(unique_id)
  File "/usr/lib/python2.7/dist-packages/dogpile/cache/region.py", line 1013, 
in decorate
should_cache_fn)
  File "/usr/lib/python2.7/dist-packages/dogpile/cache/region.py", line 640, in 
get_or_create
async_creator) as value:
  File "/usr/lib/python2.7/dist-packages/dogpile/core/dogpile.py", line 158, in 
__enter__
return self._enter()
  File "/usr/lib/python2.7/dist-packages/dogpile/core/dogpile.py", line 98, in 
_enter
generated = self._enter_create(createdtime)
  File "/usr/lib/python2.7/dist-packages/dogpile/core/dogpile.py", line 149, in 
_enter_create
created = self.creator()
  File "/usr/lib/python2.7/dist-packages/dogpile/cache/region.py", line 612, in 
gen_value
created_value = creator()
  File "/usr/lib/python2.7/dist-packages/dogpile/cache/region.py", line 1009, 
in creator
return fn(*arg, **kw)
  File "/usr/lib/python2.7/dist-packages/keystone/token/provider.py", line 261, 
in _validate_token
return self.driver.validate_v3_token(token_id)
  File 
"/usr/lib/python2.7/dist-packages/keystone/token/providers/fernet/core.py", 
line 258, in validate_v3_token
audit_info=audit_ids)
  File "/usr/lib/python2.7/dist-packages/keystone/token/providers/common.py", 
line 441, in get_token_data
self._populate_user(token_data, user_id, trust)
  File "/usr/lib/python2.7/dist-packages/keystone/token/providers/common.py", 
line 275, in _populate_user
user_ref = self.identity_api.get_user(user_id)
  File "/usr/lib/python2.7/dist-packages/keystone/identity/core.py", line 342, 
in wrapper
return f(self, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/keystone/identity/core.py", line 353, 
in wrapper
return f(self, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/dogpile/cache/region.py", line 1013, 
in decorate
should_cache_fn)
  File "/usr/lib/python2.7/dist-packages/dogpile/cache/region.py", line 640, in 
get_or_create
async_creator) as value:
  File "/usr/lib/python2.7/dist-packages/dogpile/core/dogpile.py", line 158, in 
__enter__
return self._enter()
  File "/usr/lib/python2.7/dist-packages/dogpile/core/dogpile.py", line 98, in 
_enter
generated = self._enter_create(createdtime)
  File "/usr/lib/python2.7/dist-packages/dogpile/core/dogpile.py", line 149, in 
_enter_create
created = self.creator()
  File "/usr/lib/python2.7/dist-packages/dogpile/cache/region.py", line 612, in 
gen_value
created_value = creator()
  File "/usr/lib/python2.7/dist-packages/dogpile/cache/region.py", line 1009, 
in creator
return fn(*arg, **kw)
  File "/usr/lib/python2

[Yahoo-eng-team] [Bug 1497459] [NEW] port usage tracking not reliable anymore

2015-09-18 Thread Salvatore Orlando
Public bug reported:

Patch https://review.openstack.org/#/c/13/23 modified 
neutron.db.ipam_backend in order to ensure a sqlalchemy event is triggered when 
deleting a port. This caused an issue when transaction isolation level is below 
repeatable read as the sqlalchemy ORM mapper throws an exception if the record 
is deleted by another transaction.
Patch https://review.openstack.org/#/c/224289/ fixed this but reinstated 
query.delete which does not trigger the sqlalchemy event.

It might be worth considering just handling the sqlalchemy orm exception
in this case; alternatively usage tracking for ports might be disabled.

A related question is why the logic for deleting a port resides in the
ipam module, but probably it should not be answered here.
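
For reference, a self-contained, generic SQLAlchemy sketch (toy model, not
neutron's schema) of the distinction above: Query.delete() issues a bulk
DELETE that bypasses mapper-level events, while Session.delete() on a loaded
object fires them:

    from sqlalchemy import Column, Integer, String, create_engine, event
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import sessionmaker

    Base = declarative_base()


    class Port(Base):
        __tablename__ = 'ports'
        id = Column(Integer, primary_key=True)
        name = Column(String)


    @event.listens_for(Port, 'after_delete')
    def _track_usage(mapper, connection, target):
        # stand-in for the usage-tracking hook
        print('usage tracking saw delete of %s' % target.name)


    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)
    session = sessionmaker(bind=engine)()
    session.add_all([Port(id=1, name='p1'), Port(id=2, name='p2')])
    session.commit()

    session.query(Port).filter_by(id=1).delete()   # bulk DELETE: no event fires
    session.delete(session.query(Port).get(2))     # ORM delete: event fires on flush
    session.commit()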

** Affects: neutron
 Importance: Medium
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1497459

Title:
  port usage tracking not reliable anymore

Status in neutron:
  In Progress

Bug description:
  Patch https://review.openstack.org/#/c/13/23 modified 
neutron.db.ipam_backend in order to ensure a sqlalchemy event is triggered when 
deleting a port. This caused an issue when transaction isolation level is below 
repeatable read as the sqlalchemy ORM mapper throws an exception if the record 
is deleted by another transaction.
  Patch https://review.openstack.org/#/c/224289/ fixed this but reinstated 
query.delete which does not trigger the sqlalchemy event.

  It might be worth considering just handling the sqlalchemy orm
  exception in this case; alternatively usage tracking for ports might
  be disabled.

  A related question is why the logic for deleting a port resides in the
  ipam module, but probably it should not be answered here.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1497459/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1457329] Re: Error status of instance after suspend exception

2015-09-18 Thread Matt Riedemann
I agree with Eli that if libvirt fails we shouldn't just assume the instance
is running and reset it to ACTIVE status.

The suspend method in the compute manager will revert the task state to
None because it's using the @reverts_task_state decorator, so at least
you can delete the instance after it's gone into ERROR status:

https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L4018
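
For readers unfamiliar with that decorator, here is a generic sketch of the
pattern (not nova's actual implementation; the instance.save() call and the
argument order are assumptions for illustration):

    import functools


    def reverts_task_state(fn):
        """Clear instance.task_state if the wrapped compute operation fails."""
        @functools.wraps(fn)
        def wrapper(self, context, instance, *args, **kwargs):
            try:
                return fn(self, context, instance, *args, **kwargs)
            except Exception:
                # leave vm_state alone (it may become ERROR elsewhere) but drop
                # the transient task state so the instance can still be deleted
                # or reset
                instance.task_state = None
                instance.save()
                raise
        return wrapper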

I guess from the virsh output above the instance is still running in the
hypervisor, so maybe there could be a case made that if the call to
libvirt fails with a certain type of error we could handle it and check
if the guest is still running, but we'd still need to report an instance
fault since the operation failed.

Anyway, I agree the reset-state API is what should be used here.

** Tags removed: volumes

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1457329

Title:
  Error status of instance after suspend exception

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  devstack,version:
  ubuntu@dev1:/opt/stack/nova$ git log -1
  commit 2833f8c08fcfb7961b3c64b285ceff958bf5a05e
  Author: Zhengguang 
  Date:   Thu May 21 02:31:50 2015 +

  remove _rescan_iscsi from disconnect_volume_multipath_iscsi
  
  terminating instance that attached more than one volume, disconnect
  the first volume is ok, but the first volume is not removed, then
  disconnect the second volume, disconnect_volume_multipath_iscsi
  will call _rescan_iscsi so that rescan the first device, although
  the instance is destroyed, the first device is residual, therefor
  we don't need rescan when disconnect volume.
  
  Change-Id: I7f2c688aba9e69afaf370b2badc86a2bb3ee899d
  Closes-Bug:#1402535

  Suspend an instance, then the following exception is raised:

  Setting instance vm_state to ERROR^[[00m^[[01;31m2015-05-21 04:48:29.179 
TRACE nova.compute.manager
  Traceback (most recent call last):
  ^[[01;31m2015-05-21 04:48:29.179 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/compute/manager.py", line 6089, in 
_error_out_instance_on_exception^[[01;31m2015-05-21 04:48:29.179 TRACE 
nova.compute.manageryield
  ^[[01;31m2015-05-21 04:48:29.179 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/compute/manager.py", line 4014, in 
suspend_instance^[[01;31m2015-05-21 04:48:29.179 TRACE nova.compute.manager
self.driver.suspend(context, instance)
  ^[[01;31m2015-05-21 04:48:29.179 TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 2248, in 
suspend^[[01;31m2015-05-21 04:48:29.179 TRACE nova.compute.manager 
dom.managedSave(0)
  ^[[01;31m2015-05-21 04:48:29.179 TRACE nova.compute.manager   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 183, in doit 
^[[01;31m2015-05-21 04:48:29.179 TRACE nova.compute.manager   result = 
proxy_call(self._autowrap, f, *args, **kwargs)
  ^[[01;31m2015-05-21 04:48:29.179 TRACE nova.compute.manager  File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 141, in 
proxy_callrv = execute(f, *args, **kwargs)
  ^[[01;31m2015-05-21 04:48:29.179 TRACE nova.compute.manager  File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 122, in 
execute six.reraise(c, e, tb)
  ^[[01;31m2015-05-21 04:48:29.179 TRACE nova.compute.manager  File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 80, in tworker 
  rv = meth(*args, **kwargs)
  ^[[01;31m2015-05-21 04:48:29.179 TRACE nova.compute.manager  File 
"/usr/local/lib/python2.7/dist-packages/libvirt.py", line 1167, in managedSave
  ^[[01;31m2015-05-21 04:48:29.179 TRACE nova.compute.manager   if ret == -1: 
raise libvirtError ('virDomainManagedSave() failed', dom=self)
  libvirtError: operation failed: domain save job: unexpectedly 
failed

  ubuntu@dev1:~$ nova list
  
  +--------------------------------------+-------+--------+------------+-------------+-------------------------------------------------------+
  | ID                                   | Name  | Status | Task State | Power State | Networks                                              |
  +--------------------------------------+-------+--------+------------+-------------+-------------------------------------------------------+
  | 0096094f-b854-4a56-bb35-c112cdbe20fb | test5 | ERROR  | -          | Running     | private=10.0.0.5, fd3b:f9:a091:0:f816:3eff:fe8e:dc62  |
  +--------------------------------------+-------+--------+------------+-------------+-------------------------------------------------------+

  "virsh list" can see the instance is running
  ubuntu@dev1:~$ virsh list --all
   Id    Name                           State
  ----------------------------------------------------
   2     instance-0003                  running

  Exp

[Yahoo-eng-team] [Bug 1494310] Re: Arista ML2 driver doesn't synchronize HA networks with EOS correctly

2015-09-18 Thread Shashank Hegde
** Project changed: neutron => networking-arista

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1494310

Title:
  Arista ML2 driver doesn't synchronize HA networks with EOS correctly

Status in networking-arista:
  New

Bug description:
  Running Openstack Kilo and networking-arista from master (1aa15e8).

  With some HA networks already created in Neutron (created automatically
  when adding HA routers), restarting the CVX and neutron-server leads to
  those networks disappearing from EOS (they are no longer listed by
  "show openstack networks").

  The problem seems to come from _cleanup_db() in
  neutron/plugins/ml2/drivers/arista/mechanism_arista.py, which is
  called by initialize() in the same file:

  def _cleanup_db(self):
  """Clean up any uncessary entries in our DB."""
  db_tenants = db_lib.get_tenants()
  for tenant in db_tenants:
  neutron_nets = self.ndb.get_all_networks_for_tenant(tenant)
  neutron_nets_id = []
  for net in neutron_nets:
  neutron_nets_id.append(net['id'])
  db_nets = db_lib.get_networks(tenant)
  for net_id in db_nets.keys():
  if net_id not in neutron_nets_id:
  db_lib.forget_network(tenant, net_id)

  Since HA networks have no tenant_id, they won't be returned by
  self.ndb.get_all_networks_for_tenant(tenant), so they will be removed
  from the DB and won't be added to EOS during synchronization (checking
  the arista_provisioned_nets table in the neutron database shows that
  the HA networks are indeed missing, while they are listed by "neutron
  net-list").

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-arista/+bug/1494310/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497450] [NEW] DNS lookup code in get_ports needs to be optimized

2015-09-18 Thread Ryan Moats
Public bug reported:

Kilo's get_ports code can be found at http://pastebin.com/PjVG2KFt while
Liberty's get_ports code can be found at http://pastebin.com/wpmTx8H7

The difference in the two code paths (the DNS code) leads to an
execution time difference shown in http://ibin.co/2G72PkX2eshD

** Affects: neutron
 Importance: High
 Status: New


** Tags: performance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1497450

Title:
  DNS lookup code in get_ports needs to be optimized

Status in neutron:
  New

Bug description:
  Kilo's get_ports code can be found at http://pastebin.com/PjVG2KFt
  while Liberty's get_ports code can be found at
  http://pastebin.com/wpmTx8H7

  The difference in the two code paths (the DNS code) leads to an
  execution time difference shown in http://ibin.co/2G72PkX2eshD

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1497450/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497448] [NEW] admin system info page shows useless data, hides useful data

2015-09-18 Thread Eric Peterson
Public bug reported:

The system info page's display of keystone catalog and endpoint hides
important data like regions and various url types (public / private /
etc), and shows silly columns like Status (which is always enabled).

Need to show more information on this page, and hide the useless stuff.

** Affects: horizon
 Importance: Medium
 Assignee: Eric Peterson (ericpeterson-l)
 Status: Confirmed

** Changed in: horizon
 Assignee: (unassigned) => Eric Peterson (ericpeterson-l)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1497448

Title:
  admin system info page shows useless data, hides useful data

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  The system info page's display of keystone catalog and endpoint hides
  important data like regions and various url types (public / private /
  etc), and shows silly columns like Status (which is always enabled).

  Need to show more information on this page, and hide the useless
  stuff.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1497448/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497444] [NEW] Functional tests possibly leaving a network namespace around

2015-09-18 Thread Brian Haley
Public bug reported:

After running the functional tests (dsvm-functional) I noticed I was
unable to run 'ip netns'; it was giving me an error containing something
like:
@agent1

I was confused until I noticed a failure in a review:

| ==
| Failed 1 tests - output below:
| ==
| 
neutron.tests.functional.agent.test_l3_agent.L3HATestFramework.test_ha_router_failover
| 
--
| 
| Captured traceback:
| ~~~
| Traceback (most recent call last):
|   File "neutron/tests/functional/agent/test_l3_agent.py", line 795, in 
test_ha_router_failover
| router1 = self.manage_router(self.agent, router_info)
|   File "neutron/tests/functional/agent/test_l3_agent.py", line 133, in 
manage_router
| agent._process_added_router(router)
|   File "neutron/agent/l3/agent.py", line 446, in _process_added_router
| self._router_added(router['id'], router)
|   File "neutron/agent/l3/agent.py", line 335, in _router_added
| ri.initialize(self.process_monitor)
|   File "neutron/agent/l3/ha_router.py", line 87, in initialize
| self.ha_network_added()
|   File "neutron/agent/l3/ha_router.py", line 147, in ha_network_added
| prefix=HA_DEV_PREFIX)
|   File "neutron/agent/linux/interface.py", line 252, in plug
| bridge, namespace, prefix)
|   File "neutron/agent/linux/interface.py", line 346, in plug_new
| namespace_obj = ip.ensure_namespace(namespace)
|   File "neutron/agent/linux/ip_lib.py", line 164, in ensure_namespace
| ip = self.netns.add(name)
|   File "neutron/agent/linux/ip_lib.py", line 794, in add
| self._as_root([], ('add', name), use_root_namespace=True)
|   File "neutron/agent/linux/ip_lib.py", line 281, in _as_root
| use_root_namespace=use_root_namespace)
|   File "neutron/agent/linux/ip_lib.py", line 81, in _as_root
| log_fail_as_error=self.log_fail_as_error)
|   File "neutron/agent/linux/ip_lib.py", line 90, in _execute
| log_fail_as_error=log_fail_as_error)
|   File "neutron/agent/linux/utils.py", line 160, in execute
| raise RuntimeError(m)
| RuntimeError: 
| Command: ['ip', 'netns', 'add', "@agent1"]
| Exit code: 1
| Stdin: 
| Stdout: 
| Stderr: Cannot not create namespace file "/var/run/netns/@agent1": File exists

That is from http://logs.openstack.org/06/225206/2/check/gate-neutron-
dsvm-functional/9ca87f0/console.html

So it looks like a functional test is either creating a network
namespace and not cleaning it up, or doing something else horribly
wrong.

There is a test at cmd/test_netns_cleanup.py that uses mock and
namespaces, but it wasn't obvious to me that it was the culprit.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1497444

Title:
  Functional tests possibly leaving a network namespace around

Status in neutron:
  New

Bug description:
  After running the functional tests (dsvm-functional) I noticed I was
  unable to run 'ip netns'; it was giving me an error containing something
  like:

  @agent1

  I was confused until I noticed a failure in a review:

  | ==
  | Failed 1 tests - output below:
  | ==
  | 
neutron.tests.functional.agent.test_l3_agent.L3HATestFramework.test_ha_router_failover
  | 
--
  | 
  | Captured traceback:
  | ~~~
  | Traceback (most recent call last):
  |   File "neutron/tests/functional/agent/test_l3_agent.py", line 795, in 
test_ha_router_failover
  | router1 = self.manage_router(self.agent, router_info)
  |   File "neutron/tests/functional/agent/test_l3_agent.py", line 133, in 
manage_router
  | agent._process_added_router(router)
  |   File "neutron/agent/l3/agent.py", line 446, in _process_added_router
  | self._router_added(router['id'], router)
  |   File "neutron/agent/l3/agent.py", line 335, in _router_added
  | ri.initialize(self.process_monitor)
  |   File "neutron/agent/l3/ha_router.py", line 87, in initialize
  | self.ha_network_added()
  |   File "neutron/agent/l3/ha_router.py", line 147, in ha_network_added
  | prefix=HA_DEV_PREFIX)
  |   File "neutron/agent/linux/interface.py", line 252, in plug
  | bridge, namespace, prefix)
  |   File "neutron/agent/linux/interface.py", line 346, in plug_new
  | namespace_obj = ip.ensure_namespace(namespace)
  |   File "neutron/agent/linux/ip_lib.py", line 164, in ensure_namespace
  | ip = self.netns.add(name)
  |   File "neutron/agent/linux/ip_lib.py", line 794, in add

[Yahoo-eng-team] [Bug 1497410] [NEW] SSL Offload/Termination configuration not working for non-admin tenants

2015-09-18 Thread Vijay Kumar Venkatachalam
Public bug reported:


>> This honestly hasn’t even been *fully* tested yet, but it SHOULD work.
It did not work. Please read on.
>> User sets ACLs on Secrets and Container in Barbican, to allow the LBaaS user 
>> (right now using whatever user-id we publish in our docs) to read their data.
I did perform the above step to give read access for the container and secrets 
to “admin”, but it did not work.

Root Cause
==
The cert manager in LBaaS, which connects to Barbican, uses the keystone
session obtained from neutron_lbaas.common.keystone.get_session().
Since that keystone session is scoped to the “admin” tenant, LBaaS is not able
to get the tenant’s container/certificate.

Fix
==
The keystone session should have been generated with the tenant name set to the 
tenant name of the listener.
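
As an illustration of that suggested fix (not the actual neutron-lbaas patch;
the parameter values are placeholders and keystoneauth1 is just one way to
build such a session):

    from keystoneauth1 import identity, session


    def session_for_tenant(auth_url, username, password, tenant_name):
        """Build a keystone session scoped to the given tenant/project."""
        auth = identity.Password(auth_url=auth_url,
                                 username=username,
                                 password=password,
                                 project_name=tenant_name,   # the listener's tenant
                                 user_domain_name='Default',
                                 project_domain_name='Default')
        return session.Session(auth=auth)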


From: Adam Harwell [mailto:adam.harw...@rackspace.com] 
Sent: 16 September 2015 00:32
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [neutron][lbaas] Is SSL offload config possible 
using non "admin" tenant?

There is not really good documentation for this yet…
When I say Neutron-LBaaS tenant, I am maybe using the wrong word — I guess the 
user that is configured as the service-account in neutron.conf.
The user will hit the ACL API themselves to set up the ACLs on their own 
secrets/containers, we won’t do it for them. So, workflow is like:

•   User creates Secrets in Barbican.
•   User creates CertificateContainer in Barbican.
•   User sets ACLs on Secrets and Container in Barbican, to allow the LBaaS 
user (right now using whatever user-id we publish in our docs) to read their 
data.
•   User creates a LoadBalancer in Neutron-LBaaS.
•   LBaaS hits Barbican using its standard configured service-account to 
retrieve the Container/Secrets from the user’s Barbican account.
This honestly hasn’t even been *fully* tested yet, but it SHOULD work. The 
question is whether right now in devstack the admin user is allowed to read all 
user secrets just because it is the admin user (which I think might be the 
case), in which case we won’t actually know if ACLs are working as intended 
(but I think we assume that Barbican has tested that feature and we can just 
rely on it working).

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1497410

Title:
  SSL Offload/Termination configuration not working for non-admin
  tenants

Status in neutron:
  New

Bug description:
  
  >> This honestly hasn’t even been *fully* tested yet, but it SHOULD work.
  It did not work. Please read on.
  >> User sets ACLs on Secrets and Container in Barbican, to allow the LBaaS 
user (right now using whatever user-id we publish in our docs) to read their 
data.
  I did perform the above step to give read access for the container and 
secrets to “admin”, but it did not work.

  Root Cause
  ==
  The cert manager in LBaaS, which connects to Barbican, uses the keystone
  session obtained from neutron_lbaas.common.keystone.get_session().
  Since that keystone session is scoped to the “admin” tenant, LBaaS is not
  able to get the tenant’s container/certificate.

  Fix
  ==
  The keystone session should have been generated with the tenant name set to 
the tenant name of the listener.

  
  From: Adam Harwell [mailto:adam.harw...@rackspace.com] 
  Sent: 16 September 2015 00:32
  To: OpenStack Development Mailing List (not for usage questions) 

  Subject: Re: [openstack-dev] [neutron][lbaas] Is SSL offload config possible 
using non "admin" tenant?

  There is not really good documentation for this yet…
  When I say Neutron-LBaaS tenant, I am maybe using the wrong word — I guess 
the user that is configured as the service-account in neutron.conf.
  The user will hit the ACL API themselves to set up the ACLs on their own 
secrets/containers, we won’t do it for them. So, workflow is like:

  • User creates Secrets in Barbican.
  • User creates CertificateContainer in Barbican.
  • User sets ACLs on Secrets and Container in Barbican, to allow the LBaaS 
user (right now using whatever user-id we publish in our docs) to read their 
data.
  • User creates a LoadBalancer in Neutron-LBaaS.
  • LBaaS hits Barbican using its standard configured service-account to 
retrieve the Container/Secrets from the user’s Barbican account.
  This honestly hasn’t even been *fully* tested yet, but it SHOULD work. The 
question is whether right now in devstack the admin user is allowed to read all 
user secrets just because it is the admin user (which I think might be the 
case), in which case we won’t actually know if ACLs are working as intended 
(but I think we assume that Barbican has tested that feature and we can just 
rely on it working).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutr

[Yahoo-eng-team] [Bug 1496424] Re: gate-neutron-python27(34)-constraints jobs blowing up in gate due to misconfig

2015-09-18 Thread Kyle Mestery
** Changed in: neutron
   Status: In Progress => Fix Released

** Changed in: neutron
   Status: Fix Released => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1496424

Title:
  gate-neutron-python27(34)-constraints jobs blowing up in gate due to
  misconfig

Status in neutron:
  Fix Committed

Bug description:
  These look like new jobs but they are not working in the gate:

  http://logs.openstack.org/13/223713/2/gate/gate-neutron-
  python27-constraints/f3ea159/console.html#_2015-09-16_13_48_00_329

  http://logs.openstack.org/13/223713/2/gate/gate-neutron-
  python34-constraints/fd61c44/console.html#_2015-09-16_13_48_01_379

  
http://logstash.openstack.org/#eyJzZWFyY2giOiIobWVzc2FnZTpcIkVSUk9SOiB1bmtub3duIGVudmlyb25tZW50ICdweTI3LWNvbnN0cmFpbnRzJ1wiIE9SIG1lc3NhZ2U6XCJFUlJPUjogdW5rbm93biBlbnZpcm9ubWVudCAncHkzNC1jb25zdHJhaW50cydcIikgQU5EIHRhZ3M6XCJjb25zb2xlXCIgQU5EIHByb2plY3Q6XCJvcGVuc3RhY2svbmV1dHJvblwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDQyNDE0ODc4MDU5LCJtb2RlIjoiIiwiYW5hbHl6ZV9maWVsZCI6IiJ9

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1496424/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497408] [NEW] Facets disappear when fullTextSearch facet is removed

2015-09-18 Thread Rajat Vig
Public bug reported:

When searching using the hz-magic-search-bar, when the user selects a
full text search and faceted searches removing the full text search
removes all the facet searches.

** Affects: horizon
 Importance: Undecided
 Assignee: Rajat Vig (rajatv)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1497408

Title:
  Facets disappear when fullTextSearch facet is removed

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  When searching with the hz-magic-search-bar, if the user combines a full
  text search with faceted searches, removing the full text search also
  removes all of the facet searches.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1497408/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497401] [NEW] Add Solaris to vm_modes and hvtype

2015-09-18 Thread Drew Fisher
Public bug reported:

In order for Solaris to participate in Nova, a 'solariszones' hypervisor
and vm_mode need to be added.

** Affects: nova
 Importance: Undecided
 Assignee: Drew Fisher (drew-fisher)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Drew Fisher (drew-fisher)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1497401

Title:
  Add Solaris to vm_modes and hvtype

Status in OpenStack Compute (nova):
  New

Bug description:
  In order for Solaris to participate in Nova, a 'solariszones'
  hypervisor and vm_mode need to be added.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1497401/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497396] [NEW] Network creation and Router Creation times degrade with large number of instances

2015-09-18 Thread Uday
Public bug reported:

We are trying to analyze why the creation of routers and networks degrades
when there is a large number of instances of these. Running cProfile on
the L3 agent indicated that the ensure_namespace function seems to degrade
when a large number of namespaces are present.

Looking through the code, all the namespaces are listed with the "ip
netns list" command and then compared against the one that is of
interest. This scales badly, since with a large number of instances the
number of comparisons increases.

An alternate way to achieve the same result could be to check for the
desired namespace directly ("ls /var/run/netns/qrouter-") or to run a
command inside the namespace (maybe the date command?) and check the response.

Either method described above would have a constant time for execution,
rather than the linear time as seen presently.
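
As an illustration of the constant-time check suggested above, a minimal
sketch (not the actual neutron code; the namespace name below is just an
example):

import os

def namespace_exists(name, netns_dir='/var/run/netns'):
    # Checking for the namespace file directly avoids listing and
    # comparing every namespace on the host ("ip netns list").
    return os.path.exists(os.path.join(netns_dir, name))

# e.g. namespace_exists('qrouter-49c6d7b1-8399-4944-81ad-093b6e786db0')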

Thanks,
-Uday

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1497396

Title:
  Network creation and Router Creation times degrade with large number
  of instances

Status in neutron:
  New

Bug description:
  We are trying to analyze why the creation of routers and networks degrades
  when there is a large number of instances of these. Running cProfile
  on the L3 agent indicated that the ensure_namespace function seems to
  degrade when a large number of namespaces are present.

  Looking through the code, all the namespaces are listed with the "ip
  netns list" command and then compared against the one that is of
  interest. This scales badly, since with a large number of instances the
  number of comparisons increases.

  An alternate way to achieve the same result could be to check for the
  desired namespace directly ("ls /var/run/netns/qrouter-") or to run a
  command inside the namespace (maybe the date command?) and check the response.

  Either method described above would have a constant time for
  execution, rather than the linear time as seen presently.

  Thanks,
  -Uday

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1497396/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497151] Re: error messages in nova help list

2015-09-18 Thread Mark Doffman
Clearly an issue with novaclient help rather than nova.

** Project changed: nova => python-novaclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1497151

Title:
  error messages in nova help list

Status in python-novaclient:
  New

Bug description:
  1. version
  kilo 2015.1.0

  2. Relevant log files:
  no log

  3. Reproduce steps:

  3.1
  nova help list

  --all-tenants [<0|1>]  Display information from all tenants (Admin only).
  --tenant []            Display information from single tenant (Admin only).
                         The --all-tenants option must also be provided.

  3.2
  nova list --tenant f7a1114e87d9439986a73e9d419a71f7 (this is one of my tenant IDs)

  Expected result:

  Should prompt something like: “You need to add the --all-tenants parameter.”

  Actual result:

  [root@devcontrol ~(keystone_admin)]# nova list --tenant f7a1114e87d9439986a73e9d419a71f7
  +--------------------------------------+--------------+----------------------------------+--------+------------+-------------+--------------------+
  | ID                                   | Name         | Tenant ID                        | Status | Task State | Power State | Networks           |
  +--------------------------------------+--------------+----------------------------------+--------+------------+-------------+--------------------+
  | 4fc84ebe-fee7-4c4e-86d8-7cf5a191135e | testflavor02 | f7a1114e87d9439986a73e9d419a71f7 | ACTIVE | -          | Running     | net1=192.168.0.103 |
  | bb596070-fca2-4cb8-917b-1374c78d1175 | testflavor03 | f7a1114e87d9439986a73e9d419a71f7 | ERROR  | -          | NOSTATE     |                    |
  | 3e211f9b-e026-464b-aadc-1f00f5d1a69f | v3test       | f7a1114e87d9439986a73e9d419a71f7 | ACTIVE | -          | Running     | net1=192.168.0.93  |
  | 280cb34c-0548-4ed3-b0d0-f391e875101d | v5test       | f7a1114e87d9439986a73e9d419a71f7 | ACTIVE | -          | Running     | net1=192.168.0.221 |
  +--------------------------------------+--------------+----------------------------------+--------+------------+-------------+--------------------+

  4

  The actual result shows that the message “--all-tenants option must also be
  provided.” in “nova help list” is wrong, so it should be deleted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-novaclient/+bug/1497151/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497379] [NEW] iptables programming: Adding router instances takes increasing amount of time

2015-09-18 Thread Uday
Public bug reported:

We have been trying to analyze why, at scale, creating additional
routers and associating a floating IP address with a VM takes
increasingly long to complete, and have found that the programming of
iptables seems to be an issue. In particular, 4 functions (and their use)
seem to degrade with large numbers of router instances.

We gathered this data with cProfile on the L3 agent. Tests were run to
collect data on the first router instantiation and on the 40th router
instantiation, with data also being collected at points between 1 and 40
routers. All the following functions showed an increasing trend:

(_find_last_entry)
(_weed_out_removes)
(_weed_out_duplicate_chains)
(_weed_out_duplicate_rules)

ncalls  tottime  percall  cumtime  percall filename:lineno(function)

For first router instantiation and Floating IP associate:

ncalls  tottime  percall  cumtime  percall filename:lineno(function)
126  0.000  0.000  0.000  0.000  /usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_manager.py:504(_find_last_entry)
178  0.000  0.000  0.000  0.000  /usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_manager.py:633(_weed_out_removes)
178  0.000  0.000  0.000  0.000  /usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_manager.py:608(_weed_out_duplicate_chains)
178  0.000  0.000  0.000  0.000  /usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_manager.py:622(_weed_out_duplicate_rules)

40 run of creating router

ncalls  tottime  percall  cumtime  percall filename:lineno(function)
313  0.001  0.000  0.004  0.000  /usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_manager.py:504(_find_last_entry)
371  0.000  0.000  0.004  0.000  /usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_manager.py:633(_weed_out_removes)
371  0.000  0.000  0.004  0.000  /usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_manager.py:608(_weed_out_duplicate_chains)
371  0.000  0.000  0.004  0.000  /usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_manager.py:622(_weed_out_duplicate_rules)


In particular, the weed-out routines do multiple iterations of matching the
iptables rules against certain rules and then operate on them. The increasing
number of iterations in the weed-out routines seems to be degrading performance.

Does someone have information on whether this can be optimized?
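
Not an answer for the neutron code itself, but a sketch of the general
optimization being asked about: tracking already-seen rules in a set gives
amortized O(1) duplicate detection instead of re-scanning the rule list for
every rule (illustrative only, not the iptables_manager implementation):

def weed_out_duplicates(rules):
    # One pass over the rules; set membership tests are O(1) on average,
    # so the whole pass is O(n) instead of O(n^2).
    seen = set()
    unique = []
    for rule in rules:
        if rule not in seen:
            seen.add(rule)
            unique.append(rule)
    return unique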

Thanks,
-Uday

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1497379

Title:
  iptables programming: Adding router instances takes increasing amount
  of time

Status in neutron:
  New

Bug description:
  We have been trying to analyze why, at scale, creating additional
  routers and associating a floating IP address with a VM takes
  increasingly long to complete, and have found that the programming of
  iptables seems to be an issue. In particular, 4 functions (and their
  use) seem to degrade with large numbers of router instances.

  We gathered this data with cProfile on the L3 agent. Tests were run to
  collect data on the first router instantiation and on the 40th router
  instantiation, with data also being collected at points between 1 and
  40 routers. All the following functions showed an increasing trend:

  (_find_last_entry)
  (_weed_out_removes)
  (_weed_out_duplicate_chains)
  (_weed_out_duplicate_rules)

  ncalls  tottime  percall  cumtime  percall filename:lineno(function)

  For first router instantiation and Floating IP associate:

  ncalls  tottime  percall  cumtime  percall filename:lineno(function)
  126  0.000  0.000  0.000  0.000  /usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_manager.py:504(_find_last_entry)
  178  0.000  0.000  0.000  0.000  /usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_manager.py:633(_weed_out_removes)
  178  0.000  0.000  0.000  0.000  /usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_manager.py:608(_weed_out_duplicate_chains)
  178  0.000  0.000  0.000  0.000  /usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_manager.py:622(_weed_out_duplicate_rules)

  40 run of creating router

  ncalls  tottime  percall  cumtime  percall filename:lineno(function)
  313  0.001  0.000  0.004  0.000  /usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_manager.py:504(_find_last_entry)
  371  0.000  0.000  0.004  0.000  /usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_manager.py:633(_weed_out_removes)
  371  0.000  0.000  0.004  0.000  /usr/lib/python2.7/site-packages/neutron/agent/linux/iptables_manager.py:608(_weed_out_duplicate_chains)
  371  0.000  0.000  0.004  0.000  /usr/lib/python2.7/site-packages/neutron/agent/linux/ip

[Yahoo-eng-team] [Bug 1497358] [NEW] Creating instance with Boot from instance (creates new volume) timesout before image can be downloaded

2015-09-18 Thread Sean McGinnis
Public bug reported:

When creating a new instance with the option to create a new volume from
an image and that image is large, instance creation can time out before
the image has had a chance to download.

With a larger image size (like Windows, but could be done with others) this 
process takes a long time. The instance creation times out with the default 
settings and the instance is cleaned up. But in the background the download is 
still happening.

So far, our Cinder driver is not involved in any way in this process.   

Once the image download finally completes, it then calls into the driver to
create the volume. I would think this would be done first, to make sure there
is a volume to transfer the image to in the first place. It then attaches the
volume to the compute host, then gets the failure rollback and deletes the
volume.

If you watch the syslog (tail -f /var/log/syslog) while all this is happening,
you can see, well after horizon errors out, some messages from the kernel that
it discovered a new iSCSI device, then shortly afterwards a warning that it
received an indication that the LUN assignments have changed (from the device
removal).

Horizon displays the message:

Error: Build of instance 84bba509-1727-4c32-83c4-925f91f12c6f aborted:
Block Device Mapping is Invalid

In the n-cpu.log there is a failure with the message:

VolumeNotCreated: Volume f5818ef3-c21d-44d4-b2e6-9996d4ac7bec did not
finish being created even after we waited 214 seconds or 61 attempts.
And its status is creating.

Someone from the nova team thought this could be the case if everything
is being passed to the novaclient rather than performing the operations
directly (with the volume likely being created first) like the tempest
tests do for boot images. It could be argued that the novaclient should
be updated to perform this correctly via that path, but this surfaces,
and could also be fixed, at the horizon level.
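
As a possible workaround for the timeout itself (not a fix for the ordering
issue described above), the nova-compute options that control how long the
compute node waits for the volume to become available can be raised; the
option names below are the Kilo-era nova ones, and the values are only an
example:

[DEFAULT]
# defaults are 60 retries at 3 second intervals; raise for large images
block_device_allocate_retries = 300
block_device_allocate_retries_interval = 3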

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1497358

Title:
  Creating instance with Boot from instance (creates new volume)
  timesout before image can be downloaded

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When creating a new instance with the option to create a new volume
  from an image and that image is large, instance creation can time out
  before the image has had a chance to download.

  With a larger image size (like Windows, but could be done with others) this 
process takes a long time. The instance creation times out with the default 
settings and the instance is cleaned up. But in the background the download is 
still happening.

  
  So far, our Cinder driver is not involved in any way in this process. 
  

  
  Once the image download finally completes, it then calls into the driver to
  create the volume. I would think this would be done first, to make sure there
  is a volume to transfer the image to in the first place. It then attaches the
  volume to the compute host, then gets the failure rollback and deletes the
  volume.

  If you watch the syslog (tail -f /var/log/syslog) while all this is happening,
  you can see, well after horizon errors out, some messages from the kernel that
  it discovered a new iSCSI device, then shortly afterwards a warning that it
  received an indication that the LUN assignments have changed (from the device
  removal).

  Horizon displays the message:

  Error: Build of instance 84bba509-1727-4c32-83c4-925f91f12c6f aborted:
  Block Device Mapping is Invalid

  In the n-cpu.log there is a failure with the message:

  VolumeNotCreated: Volume f5818ef3-c21d-44d4-b2e6-9996d4ac7bec did not
  finish being created even after we waited 214 seconds or 61 attempts.
  And its status is creating.

  Someone from the nova team thought this could be the case if
  everything is being passed to the novaclient rather than performing
  the operations directly (with the volume likely being created first)
  like the tempest tests do for boot images. It could be argued that the
  novaclient should be updated to perform this correctly via that path,
  but this surfaces, and could also be fixed, at the horizon level.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1497358/+subscriptions

-- 
Mai

[Yahoo-eng-team] [Bug 1453047] Re: ValueError: Tables "artifact_blob_locations, artifact_blobs, artifact_dependencies, artifact_properties, artifact_tags, artifacts, image_locations, image_members, im

2015-09-18 Thread nikhil komawar
This has become non-relevant, so I am lowering the priority and removing
the milestone target.

The status has been moved to Opinion until we get back to it next time.
If no more updates exist, it will become Won't Fix then.

** Changed in: glance
   Importance: High => Wishlist

** Changed in: glance
   Status: New => Opinion

** Changed in: glance
Milestone: liberty-rc1 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1453047

Title:
  ValueError: Tables
  "artifact_blob_locations,artifact_blobs,artifact_dependencies,
  artifact_properties,artifact_tags,artifacts,
  image_locations,image_members,image_properties,image_tags,
  images,metadef_namespace_resource_types,
  metadef_namespaces,metadef_objects,metadef_properties,
  metadef_resource_types,metadef_tags,task_info,tasks" have non utf8
  collation, please make sure all tables are CHARSET=utf8

Status in Glance:
  Opinion

Bug description:
  A new sanity_check has been enabled in oslo.db, which verifies the
  table charset. We need to make the switch to utf8 explicit in our
  models definition. The current error in the gate is:

  
  Traceback (most recent call last):
File 
"/home/jenkins/workspace/gate-glance-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/test_migrations.py",
 line 572, in test_models_sync
  self.db_sync(self.get_engine())
File "glance/tests/unit/test_migrations.py", line 1686, in db_sync
  migration.db_sync(engine=engine)
File "glance/db/migration.py", line 65, in db_sync
  init_version=init_version)
File 
"/home/jenkins/workspace/gate-glance-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/migration.py",
 line 84, in db_sync
  _db_schema_sanity_check(engine)
File 
"/home/jenkins/workspace/gate-glance-python27/.tox/py27/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/migration.py",
 line 114, in _db_schema_sanity_check
  ) % ','.join(table_names))
  ValueError: Tables 
"artifact_blob_locations,artifact_blobs,artifact_dependencies,artifact_properties,artifact_tags,artifacts,image_locations,image_members,image_properties,image_tags,images,metadef_namespace_resource_types,metadef_namespaces,metadef_objects,metadef_properties,metadef_resource_types,metadef_tags,task_info,tasks"
 have non utf8 collation, please make sure all tables are CHARSET=utf8

  
  And the required fix should consist of adding `'charset': 'utf-8'` to our
  GlanceBase model.
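
  A minimal sketch of what that can look like with SQLAlchemy declarative
  models (illustrative only, not Glance's actual GlanceBase; note that MySQL
  spells the charset 'utf8' in table arguments):

  from sqlalchemy.ext.declarative import declarative_base

  class _TableArgsMixin(object):
      # Every model inheriting from Base gets these table arguments, so new
      # tables are created with utf8 charset and pass the oslo.db sanity check.
      __table_args__ = {'mysql_charset': 'utf8', 'mysql_engine': 'InnoDB'}

  Base = declarative_base(cls=_TableArgsMixin)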

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1453047/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497343] [NEW] Need to consolidate duplicated volume detach code between compute manager and block_device

2015-09-18 Thread Matt Riedemann
Public bug reported:

In this change:

https://review.openstack.org/#/c/186742/11/nova/virt/block_device.py

It was pointed out that the change is adding volume detach code that is
duplicated with what's also in the _shutdown_instance method in
nova.compute.manager.

We wanted to get that bug fix into liberty before rc1, but we should
consolidate this duplicate volume detach code into the
nova.virt.block_device module and then have the compute manager call
that.

This bug is just tracking the reminder to clean this up.

** Affects: nova
 Importance: Low
 Status: Triaged


** Tags: cleanup debt refactor volumes

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1497343

Title:
  Need to consolidate duplicated volume detach code between compute
  manager and block_device

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  In this change:

  https://review.openstack.org/#/c/186742/11/nova/virt/block_device.py

  It was pointed out that the change is adding volume detach code that
  is duplicated with what's also in the _shutdown_instance method in
  nova.compute.manager.

  We wanted to get that bug fix into liberty before rc1, but we should
  consolidate this duplicate volume detach code into the
  nova.virt.block_device module and then have the compute manager call
  that.

  This bug is just tracking the reminder to clean this up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1497343/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1480131] Re: Volume_Attachment_ID uses Volume_ID

2015-09-18 Thread Matt Riedemann
Given our policy on no more API proxies to other services, I'm inclined
to mark this as 'won't fix':

http://docs.openstack.org/developer/nova/project_scope.html#no-more-api-
proxies

You can get the attachment id from the volume API.

** Changed in: nova
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1480131

Title:
  Volume_Attachment_ID uses Volume_ID

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  Version: Kilo Stable

  Problem Description: querying nova for volume attachments returns the wrong 
volume_attachment_id.
  I receive the volume_id instead of the volume_attachment_id.

  Example:

  curl -g -H "X-Auth-Token: $ADMIN_TOKEN" -X GET
  
https://compute:8774/v2/(tenant_id)/servers/56293904-9384-48f8-9329-c961056583f1
  /os-volume_attachments

  {"volumeAttachments": [{"device": "/dev/vdb", "serverId":
  "56293904-9384-48f8-9329-c961056583f1", "id": "a75bec42-77b5-42ff-
  90e5-e505af14b84a", "volumeId": "a75bec42-77b5-42ff-
  90e5-e505af14b84a"}]}

  
  Having a look at the database directly, I see the real volume_attachment_id:

  select (id, volume_id, instance_uuid) from volume_attachment where
  volume_id='a75bec42-77b5-42ff-90e5-e505af14b84a';

  (9cb82021-e77e-495f-8ade-524bc5ccf68c,a75bec42-77b5-42ff-
  90e5-e505af14b84a,56293904-9384-48f8-9329-c961056583f1)

  
  Cinder API gets it right, though.

  
  Further Impact:
  Horizon uses the returned volume_attachment_id to query  for volume_details.
  That is wrong and only works now because of the broken nova behaviour.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1480131/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1477483] Re: Support delegation of bind_port to networking-odl backend driver.

2015-09-18 Thread Kyle Mestery
** Changed in: networking-odl
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1477483

Title:
  Support delegation of bind_port to networking-odl backend driver.

Status in networking-odl:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  The OpenDaylightMechanismDriver currently delegates all functions
  except initialize, bind_port and check_segment.

  This bug suggests removing check_segment from the front-end mech
  driver and delegating the implementation of bind_port to the
  networking-odl backend driver.

  This will enable extension of bind_port to support other port types,
  such as vhost-user, in a separate patch set to networking-odl without
  requiring further changes to the front-end component in neutron.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-odl/+bug/1477483/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444112] Re: ML2 security groups only work with agent drivers

2015-09-18 Thread Kyle Mestery
** Changed in: networking-odl
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1444112

Title:
  ML2 security groups only work with agent drivers

Status in networking-odl:
  Fix Released
Status in networking-ovn:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron kilo series:
  New

Bug description:
  The current ML2 integration with security groups makes a bunch of
  assumptions which don't work for controller based architectures like
  OpenDaylight and OVN. This bug will track the fixing of these issues.

  The main issues include the fact it assumes an agent-based approach
  and will send SG updates via RPC calls to the agents. This isn't true
  for ODL or OVN.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-odl/+bug/1444112/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497309] Re: l3-agent unable to parse output from ip netns list

2015-09-18 Thread James Page
** Description changed:

  When run through sudo, ip netns has some extra output on Ubuntu wily:
  
- $ sudo ip netns 
+ $ sudo ip netns
  qdhcp-35fc068a-750d-4add-b1d2-af392dbd8790 (id: 1)
  qrouter-49c6d7b1-8399-4944-81ad-093b6e786db0 (id: 0)
  
  and from l3-agent:
  
  2015-09-18 14:15:47.889 26554 DEBUG oslo_messaging._drivers.amqpdriver [-] 
MSG_ID is 59f744cafcab474baee68232c4cf70e9 _send 
/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:392
  2015-09-18 14:15:47.895 26554 DEBUG neutron.agent.l3.agent [-] Starting 
_process_routers_loop _process_routers_loop 
/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py:509
  2015-09-18 14:15:47.896 26554 DEBUG oslo_service.loopingcall [-] Fixed 
interval looping call 
'neutron.agent.l3.agent.L3NATAgentWithStateReport._report_state' sleeping for 
29.93 seconds _run_loop 
/usr/lib/python2.7/dist-packages/oslo_service/loopingcall.py:121
  2015-09-18 14:15:47.916 26554 DEBUG neutron.agent.l3.agent [-] Starting 
periodic_sync_routers_task - fullsync:True periodic_sync_routers_task 
/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py:521
  2015-09-18 14:15:47.953 26554 DEBUG neutron.agent.linux.utils [-]
  Command: ['ip', 'netns', 'list']
  Exit code: 0
-  execute /usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py:151
+  execute /usr/lib/python2.7/dist-packages/neutron/agent/linux/utils.py:151
  2015-09-18 14:15:47.954 26554 DEBUG oslo_messaging._drivers.amqpdriver [-] 
MSG_ID is d04be6386ef7495ebeb3cb656fb330a8 _send 
/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:392
  2015-09-18 14:15:48.268 26554 DEBUG neutron.agent.l3.agent [-] Processing 
:[{u'status': u'ACTIVE', u'_interfaces': [{u'status': u'DOWN', u'subnets': 
[{u'ipv6_ra_mode': None, u'cidr': u'192.168.21.0/24', u'gateway_ip': 
u'192.168.21.1', u'id': u'242acaef-22b7-4044-ab1f-788bd31ad1da', 
u'subnetpool_id': None}], u'binding:host_id': u'juju-devel3-machine-12', 
u'name': u'', u'allowed_address_pairs': [], u'admin_state_up': True, 
u'network_id': u'35fc068a-750d-4add-b1d2-af392dbd8790', u'dns_name': u'', 
u'extra_dhcp_opts': [], u'mac_address': u'fa:16:3e:3f:4a:90', 
u'binding:vif_details': {u'port_filter': True, u'ovs_hybrid_plug': True}, 
u'binding:vif_type': u'ovs', u'device_owner': u'network:router_interface', 
u'tenant_id': u'85d6051d040347e5bbd689348405faf0', u'extra_subnets': [], 
u'binding:profile': {}, u'binding:vnic_type': u'normal', u'fixed_ips': 
[{u'subnet_id': u'242acaef-22b7-4044-ab1f-788bd31ad1da', u'prefixlen': 24, 
u'ip_address': u'192.168.21.1'}], u'id': 
u'bfc7b6e2-29a7-428b-b7fd-0675e9bf5df8', u'security_groups': [], u'device_id': 
u'49c6d7b1-8399-4944-81ad-093b6e786db0'}], u'enable_snat': True, u'ha_vr_id': 
0, u'gw_port_host': None, u'gw_port_id': 
u'7cca3db9-5502-43be-b193-59d523e3c81b', u'admin_state_up': True, u'tenant_id': 
u'85d6051d040347e5bbd689348405faf0', u'gw_port': {u'status': u'DOWN', 
u'subnets': [{u'ipv6_ra_mode': None, u'cidr': u'10.5.0.0/16', u'gateway_ip': 
u'10.5.0.1', u'id': u'431e736d-04d1-4817-b3c1-c9579b4b51f0', u'subnetpool_id': 
None}], u'binding:host_id': u'juju-devel3-machine-12', u'name': u'', 
u'allowed_address_pairs': [], u'admin_state_up': True, u'network_id': 
u'00ef84fe-880a-45c6-ae3d-967089b161ef', u'dns_name': u'', u'extra_dhcp_opts': 
[], u'mac_address': u'fa:16:3e:6f:24:28', u'binding:vif_details': 
{u'port_filter': True, u'ovs_hybrid_plug': True}, u'binding:vif_type': u'ovs', 
u'device_owner': u'network:router_gateway', u'tenant_id': u'', 
u'extra_subnets': [], u'binding:profile': {}, u'binding:vnic_type': u'normal', 
u'fixed_ips': [{u'subnet_id': u'431e736d-04d1-4817-b3c1-c9579b4b51f0', 
u'prefixlen': 16, u'ip_address': u'10.5.150.0'}], u'id': 
u'7cca3db9-5502-43be-b193-59d523e3c81b', u'security_groups': [], u'device_id': 
u'49c6d7b1-8399-4944-81ad-093b6e786db0'}, u'distributed': False, 
u'_snat_router_interfaces': [], u'_floatingip_agent_interfaces': [], 
u'_floatingips': [{u'router_id': u'49c6d7b1-8399-4944-81ad-093b6e786db0', 
u'status': u'DOWN', u'tenant_id': u'85d6051d040347e5bbd689348405faf0', 
u'floating_network_id': u'00ef84fe-880a-45c6-ae3d-967089b161ef', 
u'fixed_ip_address': u'192.168.21.3', u'floating_ip_address': u'10.5.150.1', 
u'port_id': u'a5d45770-98ea-4a2b-b839-5274e783abca', u'id': 
u'7067f627-91ea-4917-ab31-48570d3c397b'}], u'routes': [], 
u'external_gateway_info': {u'network_id': 
u'00ef84fe-880a-45c6-ae3d-967089b161ef', u'enable_snat': True, 
u'external_fixed_ips': [{u'subnet_id': u'431e736d-04d1-4817-b3c1-c9579b4b51f0', 
u'ip_address': u'10.5.150.0'}]}, u'ha': False, u'id': 
u'49c6d7b1-8399-4944-81ad-093b6e786db0', u'name': u'provider-router'}] 
fetch_and_sync_all_routers 
/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py:555
  2015-09-18 14:15:48.272 26554 DEBUG neutron.agent.l3.agent [-] 
periodic_sync_routers_task successfully completed fetch_and_sync_all_routers 
/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py:570
  2015-09-18
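
For reference, a tolerant way to parse that output is to keep only the first
whitespace-separated token of each line, so the trailing "(id: N)" that newer
iproute2 prints is ignored (a sketch of the idea, not the actual neutron
patch):

def parse_netns_output(output):
    # "qrouter-49c6d7b1-... (id: 0)" -> "qrouter-49c6d7b1-..."
    return [line.split()[0] for line in output.splitlines() if line.strip()]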

[Yahoo-eng-team] [Bug 1497293] [NEW] Nova misuses qemu-img for kernel and initrd

2015-09-18 Thread Andreas Färber
Public bug reported:

QEMU loads -kernel and -initrd arguments as binary blobs, without
processing them through its block layer - that is only done for disk
images (i.e., -drive file=...). In other words, from QEMU's perspective, all
kernels and all initrds are raw format. QEMU's modular block drivers
make it possible to add format support, e.g., for gzip'ed images.

When a different format is reported in place of "raw", Nova will
unnecessarily try to convert the initramfs to raw format, which will
still report the same format as before, leading to an exception being
raised.

Therefore please do not run `qemu-img info` on files that are not disk
images.

The source of the problem seems to be that the same
libvirt_utils.fetch_image function is used for disk_images['kernel_id']
and disk_images['ramdisk_id'] as for disk_images['image_id']:

https://git.openstack.org/cgit/openstack/nova/tree/nova/virt/images.py?h=stable/kilo#n131

https://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/utils.py?h=stable/kilo#n504

https://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/driver.py?h=stable/kilo#n2741
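
A sketch of the kind of guard the report suggests - only running format
inspection and conversion on actual disk images - expressed with plain
qemu-img calls (illustrative only, not nova's real fetch_image code):

import subprocess

def image_format(path):
    # "qemu-img info" is only meaningful for disk images.
    out = subprocess.check_output(['qemu-img', 'info', path]).decode()
    for line in out.splitlines():
        if line.startswith('file format:'):
            return line.split(':', 1)[1].strip()
    return None

def maybe_convert_to_raw(path, is_disk_image):
    if not is_disk_image:
        return  # kernel/initrd: leave the blob alone, never inspect it
    fmt = image_format(path)
    if fmt not in (None, 'raw'):
        subprocess.check_call(
            ['qemu-img', 'convert', '-O', 'raw', path, path + '.raw'])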

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1497293

Title:
  Nova misuses qemu-img for kernel and initrd

Status in OpenStack Compute (nova):
  New

Bug description:
  QEMU loads -kernel and -initrd arguments as binary blobs, without
  processing them through its block layer - that is only done for disk
  images (i.e., -drive file=...). In other words, from QEMU's perspective,
  all kernels and all initrds are raw format. QEMU's modular block drivers
  make it possible to add format support, e.g., for gzip'ed images.

  When a different format is reported in place of "raw", Nova will
  unnecessarily try to convert the initramfs to raw format, which will
  still report the same format as before, leading to an exception being
  raised.

  Therefore please do not run `qemu-img info` on files that are not disk
  images.

  The source of the problem seems to be that the same
  libvirt_utils.fetch_image function is used for
  disk_images['kernel_id'] and disk_images['ramdisk_id'] as for
  disk_images['image_id']:

  
https://git.openstack.org/cgit/openstack/nova/tree/nova/virt/images.py?h=stable/kilo#n131

  
https://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/utils.py?h=stable/kilo#n504

  
https://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/driver.py?h=stable/kilo#n2741

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1497293/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497295] [NEW] Incorrect actions on Admin -> Subnet Details

2015-09-18 Thread Rob Cresswell
Public bug reported:

The table actions generated on the Subnet Details page under Admin/
Networks incorrectly use the project URLs and views, rather than the
admin ones.

** Affects: horizon
 Importance: Undecided
 Assignee: Rob Cresswell (robcresswell)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Rob Cresswell (robcresswell)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1497295

Title:
  Incorrect actions on Admin -> Subnet Details

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The table actions generated on the Subnet Details page under Admin/
  Networks incorrectly use the project URLs and views, rather than the
  admin ones.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1497295/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497272] [NEW] L3 HA: Unstable rescheduling time

2015-09-18 Thread Ann Kamyshnikova
Public bug reported:

I have tested L3 HA on an environment with 3 controllers and 1 compute
(Kilo) with this simple scenario:
1) ping a VM by floating IP
2) disable the master l3-agent (the one whose ha_state is active)
3) wait for pings to continue and another agent to become active
4) check the number of packets that were lost

My results are the following:
1) When max_l3_agents_per_router=2, 3 to 4 packets were lost.
2) When max_l3_agents_per_router=3 or 0 (meaning the router will be scheduled
on every agent), 10 to 70 packets were lost.

I should mention that in both cases there was only one HA router.

It is expected that fewer packets would be lost when
max_l3_agents_per_router=3(0).

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1497272

Title:
  L3 HA: Unstable rescheduling time

Status in neutron:
  New

Bug description:
  I have tested L3 HA on an environment with 3 controllers and 1 compute
  (Kilo) with this simple scenario:
  1) ping a VM by floating IP
  2) disable the master l3-agent (the one whose ha_state is active)
  3) wait for pings to continue and another agent to become active
  4) check the number of packets that were lost

  My results are the following:
  1) When max_l3_agents_per_router=2, 3 to 4 packets were lost.
  2) When max_l3_agents_per_router=3 or 0 (meaning the router will be scheduled
  on every agent), 10 to 70 packets were lost.

  I should mention that in both cases there was only one HA router.

  It is expected that fewer packets would be lost when
  max_l3_agents_per_router=3(0).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1497272/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497253] [NEW] different availability zone for nova and cinder

2015-09-18 Thread Jiri Suchomel
Public bug reported:

When booting an instance from an image in a way that creates a new volume,
and no AZ is specified, the instance could end up in a different AZ than
the volume.

That doesn't hurt with cross_az_attach=true, but if this is set to
False, creating the volume will fail with an

"Instance %(instance)s and volume %(vol)s are not in the same
availability_zone" error.

Nova actually decides at some point which AZ it should use (when none was
specified), so I think we just need to move this decision to before the point
where the volume is created, so nova can pass the correct AZ value to the
cinder API.

** Affects: nova
 Importance: Undecided
 Assignee: Jiri Suchomel (jsuchome)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1497253

Title:
  different availability zone for nova and cinder

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  When booting an instance from an image in a way that creates a new volume,
  and no AZ is specified, the instance could end up in a different AZ than
  the volume.

  That doesn't hurt with cross_az_attach=true, but if this is set to
  False, creating the volume will fail with an

  "Instance %(instance)s and volume %(vol)s are not in the same
  availability_zone" error.

  Nova actually decides at some point which AZ it should use (when none was
  specified), so I think we just need to move this decision to before the point
  where the volume is created, so nova can pass the correct AZ value to the
  cinder API.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1497253/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497218] [NEW] nova-api and nova-compute repeated examination “allow_resize_to_same_host”

2015-09-18 Thread jinquanni(ZTE)
Public bug reported:

1. version
kilo 2015.1.0

2. Relevant log files:

2.1 nova-scheduler.log

2015-09-18 12:57:03.981 21782 INFO nova.filters [req-87d25f64-fd40-49cc-
873c-6aad4da3ade9 9c67877ee37b47e989148a776862c7b8
40fc54dc632c4a02b44bf31d7ff15c82 - - -] Filter ComputeFilter returned 0
hosts for instance 02a31655-06da-47f7-a3e4-b3f654789cd2

2.2 nova-compute.log

2015-09-18 13:12:06.517 28500 ERROR nova.compute.manager 
[req-03981386-9782-49de-8134-aa5437adbbfd 9c67877ee37b47e989148a776862c7b8 
40fc54dc632c4a02b44bf31d7ff15c82 - - -] [instance: 
02a31655-06da-47f7-a3e4-b3f654789cd2] Setting instance vm_state to ERROR
2015-09-18 13:12:06.517 28500 TRACE nova.compute.manager [instance: 
02a31655-06da-47f7-a3e4-b3f654789cd2] Traceback (most recent call last):
2015-09-18 13:12:06.517 28500 TRACE nova.compute.manager [instance: 
02a31655-06da-47f7-a3e4-b3f654789cd2]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6748, in 
_error_out_instance_on_exception
2015-09-18 13:12:06.517 28500 TRACE nova.compute.manager [instance: 
02a31655-06da-47f7-a3e4-b3f654789cd2] yield
2015-09-18 13:12:06.517 28500 TRACE nova.compute.manager [instance: 
02a31655-06da-47f7-a3e4-b3f654789cd2]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 4096, in 
prep_resize
2015-09-18 13:12:06.517 28500 TRACE nova.compute.manager [instance: 
02a31655-06da-47f7-a3e4-b3f654789cd2] filter_properties)
2015-09-18 13:12:06.517 28500 TRACE nova.compute.manager [instance: 
02a31655-06da-47f7-a3e4-b3f654789cd2]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 4085, in 
prep_resize
2015-09-18 13:12:06.517 28500 TRACE nova.compute.manager [instance: 
02a31655-06da-47f7-a3e4-b3f654789cd2] node, clean_shutdown)
2015-09-18 13:12:06.517 28500 TRACE nova.compute.manager [instance: 
02a31655-06da-47f7-a3e4-b3f654789cd2]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 4033, in 
_prep_resize
2015-09-18 13:12:06.517 28500 TRACE nova.compute.manager [instance: 
02a31655-06da-47f7-a3e4-b3f654789cd2] raise 
exception.MigrationError(reason=msg)
2015-09-18 13:12:06.517 28500 TRACE nova.compute.manager [instance: 
02a31655-06da-47f7-a3e4-b3f654789cd2] MigrationError: Migration error: 
destination same as source!
3. Reproduce steps:

3.1
I have one controller and one compute node.
Create a VM.
3.2
Resize the VM in the following different situations:


ID | controller's nova.conf           | compute node's nova.conf         | resize result | log
---+----------------------------------+----------------------------------+---------------+-----
1  | allow_resize_to_same_host=True   | allow_resize_to_same_host=True   | success       |
2  | allow_resize_to_same_host=false  | allow_resize_to_same_host=True   | failed        | 2.1
3  | allow_resize_to_same_host=True   | allow_resize_to_same_host=false  | failed        | 2.2

3.3
In the code, "allow_resize_to_same_host" is checked in both
nova/compute/api.py and nova/compute/manager.py.

nova/compute/api.py has the following code:

    if not CONF.allow_resize_to_same_host:
        filter_properties['ignore_hosts'].append(instance.host)

nova/compute/manager.py has the following code:

    elif same_host and not CONF.allow_resize_to_same_host:
        self._set_instance_obj_error_state(context, instance)
        msg = _('destination same as source!')
        raise exception.MigrationError(reason=msg)

4. Conclusion
Modifying "allow_resize_to_same_host" in nova.conf on the controller requires
restarting the nova-api service.
Modifying "allow_resize_to_same_host" in nova.conf on the compute node requires
restarting the nova-scheduler service.
You need to modify multiple configs and restart multiple services on different
hosts if you want to resize a VM onto the same host.
I think this is very inconvenient and unnecessary.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1497218

Title:
  nova-api  and nova-compute  repeated examination
  “allow_resize_to_same_host”

Status in OpenStack Compute (nova):
  New

Bug description:
  1. version
  kilo 2015.1.0

  2. Releva

[Yahoo-eng-team] [Bug 1480021] Re: execute "glance image-create" is not return error

2015-09-18 Thread Kairat Kushaev
So I propose marking this as Invalid, as per wangxiyuan's comment.

** Changed in: glance
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1480021

Title:
  execute "glance image-create" is not return error

Status in Glance:
  Invalid

Bug description:
  glance version is 0.14.2

  If I create an image with the "glance image-create" command and no
  parameters, it succeeds.

  [root@xx ~(keystone_admin)]# glance image-create
  +--+--+
  | Property | Value|
  +--+--+
  | checksum | None |
  | container_format | None |
  | created_at   | 2015-07-31T01:55:34  |
  | deleted  | False|
  | deleted_at   | None |
  | disk_format  | None |
  | id   | 20767e02-5a45-4818-a976-3789634b5719 |
  | is_public| False|
  | min_disk | 0|
  | min_ram  | 0|
  | name | None |
  | owner| 1a999ee4874640ba902769311cf7727a |
  | protected| False|
  | size | 0|
  | status   | queued   |
  | updated_at   | 2015-07-31T01:55:34  |
  | virtual_size | None |
  +--+--+

  If the size is zero and no parameters are given, we should not allow the
  image to be created.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1480021/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1491131] Re: Ipset race condition

2015-09-18 Thread OpenStack Infra
** Changed in: neutron
   Status: Invalid => In Progress

** Changed in: neutron
 Assignee: Cedric Brandily (cbrandily) => shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1491131

Title:
  Ipset race condition

Status in neutron:
  In Progress
Status in neutron juno series:
  Confirmed
Status in neutron kilo series:
  Confirmed

Bug description:
  Hello,

  We have been using ipsets in neutron since Juno. We upgraded our
  install to Kilo a month or so ago and we have experienced 3 issues with
  ipsets.

  The issues are as follows:
  1.) iptables attempts to apply rules for an ipset that was not added
  2.) iptables attempts to apply rules for an ipset that was removed, but is
  still referenced in the iptables config
  3.) ipset churns trying to remove an ipset that has already been removed.

  For issue one and two I am unable to get the logs for these issues
  because neutron was dumping the full iptables-restore entries to log
  once every second for a few hours and eventually filled up the disk
  and we removed the file to get things working again.

  For issue 3.) I have the start of the logs here:
  2015-08-31 12:17:00.100 6840 INFO neutron.agent.common.ovs_lib 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] Port 
29355e52-bae1-44b2-ace6-5bc7ce497d32 not present in bridge br-int
  2015-08-31 12:17:00.101 6840 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] port_unbound(): net_uuid None not 
in local_vlan_map
  2015-08-31 12:17:00.101 6840 INFO neutron.agent.securitygroups_rpc 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] Remove device filter for 
[u'2aa0f79d-4983-4c7a-b489-e0612c482e36']
  2015-08-31 12:17:00.861 4581 INFO neutron.agent.common.ovs_lib 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] Port 
2aa0f79d-4983-4c7a-b489-e0612c482e36 not present in bridge br-int
  2015-08-31 12:17:00.862 4581 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] port_unbound(): net_uuid None not 
in local_vlan_map
  2015-08-31 12:17:00.862 4581 INFO neutron.agent.securitygroups_rpc 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] Remove device filter for 
[u'c616328a-e44c-4cf8-bc8e-83058c5635dd']
  2015-08-31 12:17:01.499 4581 INFO neutron.agent.securitygroups_rpc 
[req-b5b95389-52b6-4051-ab35-ae383df56a0b ] Security group member updated 
[u'b05f4fa6-f1ec-41c0-8ba6-80b859dc23b0']
  2015-08-31 12:17:01.500 6840 INFO neutron.agent.securitygroups_rpc 
[req-b5b95389-52b6-4051-ab35-ae383df56a0b ] Security group member updated 
[u'b05f4fa6-f1ec-41c0-8ba6-80b859dc23b0']
  2015-08-31 12:17:01.608 6840 INFO neutron.agent.common.ovs_lib 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] Port 
2aa0f79d-4983-4c7a-b489-e0612c482e36 not present in bridge br-int
  2015-08-31 12:17:01.609 6840 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] port_unbound(): net_uuid None not 
in local_vlan_map
  2015-08-31 12:17:01.609 6840 INFO neutron.agent.securitygroups_rpc 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] Remove device filter for 
[u'c616328a-e44c-4cf8-bc8e-83058c5635dd']
  2015-08-31 12:17:02.358 4581 INFO neutron.agent.common.ovs_lib 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] Port 
c616328a-e44c-4cf8-bc8e-83058c5635dd not present in bridge br-int
  2015-08-31 12:17:02.359 4581 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] port_unbound(): net_uuid None not 
in local_vlan_map
  2015-08-31 12:17:02.359 4581 INFO neutron.agent.securitygroups_rpc 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] Remove device filter for 
[u'fddff586-9903-47ad-92e1-b334e02e9d1c']
  2015-08-31 12:17:03.108 6840 INFO neutron.agent.common.ovs_lib 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] Port 
c616328a-e44c-4cf8-bc8e-83058c5635dd not present in bridge br-int
  2015-08-31 12:17:03.109 6840 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] port_unbound(): net_uuid None not 
in local_vlan_map
  2015-08-31 12:17:03.109 6840 INFO neutron.agent.securitygroups_rpc 
[req-2505ffdc-85a7-46f9-bb0f-3fb7fe3d3eed ] Remove device filter for 
[u'fddff586-9903-47ad-92e1-b334e02e9d1c']
  2015-08-31 12:17:03.855 4581 INFO neutron.agent.common.ovs_lib 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] Port 
fddff586-9903-47ad-92e1-b334e02e9d1c not present in bridge br-int
  2015-08-31 12:17:03.855 4581 INFO 
neutron.plugins.openvswitch.agent.ovs_neutron_agent 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] port_unbound(): net_uuid None not 
in local_vlan_map
  2015-08-31 12:17:03.856 4581 INFO neutron.agent.securitygroups_rpc 
[req-4144d47e-0044-47b3-a302-d019e0a67aa0 ] Remove device filter for 
[u'3f706749-f8bb-41ab-aa4c-a0925dc67bd4']
  2015-08-31 12:17:03.919 4581 INFO neutron.agent.securit

[Yahoo-eng-team] [Bug 1497188] [NEW] some of api tests ignore CONF.network_feature_enabled.ipv6

2015-09-18 Thread YAMAMOTO Takashi
Public bug reported:

some of api tests ignore CONF.network_feature_enabled.ipv6.

** Affects: neutron
 Importance: Undecided
 Assignee: YAMAMOTO Takashi (yamamoto)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1497188

Title:
  some of api tests ignore CONF.network_feature_enabled.ipv6

Status in neutron:
  In Progress

Bug description:
  some of api tests ignore CONF.network_feature_enabled.ipv6.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1497188/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497186] [NEW] some of api tests assume address-scope extension

2015-09-18 Thread YAMAMOTO Takashi
Public bug reported:

some of api tests assume address-scope extension.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1497186

Title:
  some of api tests assume address-scope extension

Status in neutron:
  New

Bug description:
  some of api tests assume address-scope extension.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1497186/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497182] [NEW] some of api tests assume net-mtu extension

2015-09-18 Thread YAMAMOTO Takashi
Public bug reported:

some of api tests assume net-mtu extension.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1497182

Title:
  some of api tests assume net-mtu extension

Status in neutron:
  New

Bug description:
  some of api tests assume net-mtu extension.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1497182/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497185] [NEW] some of api tests assume address-scope extension

2015-09-18 Thread YAMAMOTO Takashi
Public bug reported:

some of api tests assume address-scope extension.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1497185

Title:
  some of api tests assume address-scope extension

Status in neutron:
  New

Bug description:
  some of api tests assume address-scope extension.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1497185/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497125] Re: fail to live-migrate a booted from volume vm which have a local disk.

2015-09-18 Thread Markus Zoeller (markus_z)
@ Hiroyuki Eguchi:

I assume that this is something which needs a blueprint [1] and a spec file [2]
to be discussed.
I recommend reading [3] if you have not done so yet. To keep this tracker
focused on bugs which are failures/errors/faults,
I am closing this one as "Invalid". The effort to implement the requested
feature is then driven only by the blueprint (and spec).

If there are any questions left, feel free to contact me (markus_z)
in the IRC channel #openstack-nova

[1] https://blueprints.launchpad.net/nova/
[2] https://github.com/openstack/nova-specs
[3] https://wiki.openstack.org/wiki/Blueprints

** Tags added: libvirt live-migration

** Changed in: nova
   Status: New => Invalid

** Changed in: nova
 Assignee: Hiroyuki Eguchi (h-eguchi) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1497125

Title:
  fail to live-migrate a booted from volume vm which have a local disk.

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  fail to live-migrate a booted from volume vm which have a local disk.

  
  message: "hostA is not on shared storage: Live migration can not be used 
without shared storage except a booted from volume VM which does not have a 
local disk."

  Make it possible by copying the local files (swap, ephemeral disk, config-
  drive) from the source host to the destination host in
  pre_live_migration.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1497125/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496929] Re: instance launch failed: TooManyExternalNetworks: More than one external network exists

2015-09-18 Thread Andreas Scheuring
Looks like a configuration issue. Maybe one of the vmware neutron folks
can help out?

** Summary changed:

- instance luanch failed
+ instance launch failed: TooManyExternalNetworks: More than one external 
network exists

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1496929

Title:
  instance launch failed: TooManyExternalNetworks: More than one
  external network exists

Status in neutron:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  Hello, I followed the documentation at http://docs.openstack.org/kilo
  /config-reference/content/vmware.html to connect ESXi with OpenStack
  Juno. I put the following configuration on the compute node, in the
  nova.conf file:

  [DEFAULT]
  compute_driver=vmwareapi.VMwareVCDriver
   
  [vmware]
  host_ip=
  host_username=
  host_password=
  cluster_name=
  datastore_regex=

  And in the nova-compute.conf :

  [DEFAULT]
  compute_driver=vmwareapi.VMwareVCDriver

  
  But in vain: on the Juno OpenStack Dashboard, when I want to launch an
  instance, I get the error "Error: Failed to launch instance "Test": Please
  try again later [Error: No valid host was found.]". Please, does anyone have
  an idea how to launch an instance on my ESXi?

  attached the logs on the controller and compute node:

  ==> nova-conductor

  ERROR nova.scheduler.utils [req-618d4ee3-c936-4249-9f8c-7c266d5f9264 None] 
[instance: 0c1ee287-edfe-4258-bb43-db23338bbe90] Error from last host: 
ComputeNode (node domain-c65(Compute)): [u'Traceback (most recent call 
last):\n', u'  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", 
line 2054, in _do_build_and_run_instance\nfilter_properties)\n', u'  File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2185, in 
_build_and_run_instance\ninstance_uuid=instance.uuid, 
reason=six.text_type(e))\n', u'RescheduledException: Build of instance 
0c1ee287-edfe-4258-bb43-db23338bbe90 was re-scheduled: Network could not be 
found for bridge br-int\n']
  2015-09-17 15:31:34.921 2432 WARNING nova.scheduler.driver 
[req-618d4ee3-c936-4249-9f8c-7c266d5f9264 None] [instance: 
0c1ee287-edfe-4258-bb43-db23338bbe90] NoValidHost exception with message: 'No 
valid host was found.'

  
  => neutron 
  2015-09-17 12:36:09.398 1840 ERROR oslo.messaging._drivers.common 
[req-775407a3-d756-4677-bdb9-0ddfe2fac50c ] Returning exception More than one 
external network exists to caller
  2015-09-17 12:36:09.398 1840 ERROR oslo.messaging._drivers.common 
[req-775407a3-d756-4677-bdb9-0ddfe2fac50c ] ['Traceback (most recent call 
last):\n', '  File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 134, 
in _dispatch_and_reply\nincoming.message))\n', '  File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 177, 
in _dispatch\nreturn self._do_dispatch(endpoint, method, ctxt, args)\n', '  
File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch\nresult = getattr(endpoint, method)(ctxt, 
**new_args)\n', '  File 
"/usr/lib/python2.7/dist-packages/neutron/api/rpc/handlers/l3_rpc.py", line 
149, in get_external_network_id\nnet_id = 
self.plugin.get_external_network_id(context)\n', '  File 
"/usr/lib/python2.7/dist-packages/neutron/db/external_net_db.py", line 161, in 
get_external_network_id\nraise n_exc.TooManyExternalNetworks()\n', 
'TooManyExternalNetworks: More than one external network exists\n']

  
  =>  compute Node / nova-compute

  2015-09-17 15:28:22.323 5944 ERROR oslo.vmware.common.loopingcall [-] in 
fixed duration looping call
  2015-09-17 15:31:33.550 5944 ERROR nova.compute.manager [-] [instance: 
0c1ee287-edfe-4258-bb43-db23338bbe90] Instance failed to spawn

  
  => nova-network / nova-compute

  2015-09-17 11:21:10.840 1363 ERROR oslo.messaging._drivers.impl_rabbit [-] 
AMQP server on ControllerNode01:5672 is unreachable: [Errno 111] ECONNREFUSED. 
Trying again in 3 seconds.
  2015-09-17 11:23:02.874 1363 ERROR nova.openstack.common.periodic_task [-] 
Error during VlanManager._disassociate_stale_fixed_ips: Timed out waiting for a 
reply to message ID b6d62061352e4590a37cbc0438ea3ef0
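
  Regarding the TooManyExternalNetworks traceback above: the L3 agent asks
  neutron for "the" external network and fails as soon as more than one
  network is flagged router:external=True. A minimal sketch, with
  placeholder admin credentials and endpoint, to list those networks with
  python-neutronclient so the duplicate can be removed (or the agent pinned
  to a single network via gateway_external_network_id in l3_agent.ini):

      from neutronclient.v2_0 import client

      # Placeholders -- replace with real admin credentials and endpoint.
      neutron = client.Client(username='admin',
                              password='secret',
                              tenant_name='admin',
                              auth_url='http://controller:35357/v2.0')

      # Filter on the router:external attribute to see every external network.
      nets = neutron.list_networks(**{'router:external': True})['networks']
      for net in nets:
          print(net['id'], net['name'])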

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1496929/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497151] [NEW] error messages in nova help list

2015-09-18 Thread jinquanni(ZTE)
Public bug reported:

1. version
kilo 2015.1.0

2. Relevant log files:
no log

3. Reproduce steps:

3.1
nova help list

--all-tenants [<0|1>] Display information from all tenants (Admin only).
--tenant []   Display information from single tenant (Admin only).
              The --all-tenants option must also be provided.

3.2
nova list --tenant f7a1114e87d9439986a73e9d419a71f7 (this is one of my tenant IDs)

Expected result:

Something like “you need to also provide --all-tenants” should be prompted.

Actual result:

[root@devcontrol ~(keystone_admin)]# nova list --tenant f7a1114e87d9439986a73e9d419a71f7
+--------------------------------------+--------------+----------------------------------+--------+------------+-------------+--------------------+
| ID                                   | Name         | Tenant ID                        | Status | Task State | Power State | Networks           |
+--------------------------------------+--------------+----------------------------------+--------+------------+-------------+--------------------+
| 4fc84ebe-fee7-4c4e-86d8-7cf5a191135e | testflavor02 | f7a1114e87d9439986a73e9d419a71f7 | ACTIVE | -          | Running     | net1=192.168.0.103 |
| bb596070-fca2-4cb8-917b-1374c78d1175 | testflavor03 | f7a1114e87d9439986a73e9d419a71f7 | ERROR  | -          | NOSTATE     |                    |
| 3e211f9b-e026-464b-aadc-1f00f5d1a69f | v3test       | f7a1114e87d9439986a73e9d419a71f7 | ACTIVE | -          | Running     | net1=192.168.0.93  |
| 280cb34c-0548-4ed3-b0d0-f391e875101d | v5test       | f7a1114e87d9439986a73e9d419a71f7 | ACTIVE | -          | Running     | net1=192.168.0.221 |
+--------------------------------------+--------------+----------------------------------+--------+------------+-------------+--------------------+

4

The actual result shows that the message “The --all-tenants option must also
be provided.” in “nova help list” is wrong, so it should be deleted.
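
For reference, a minimal sketch of the API-level call that the quoted help
text describes, using python-novaclient with placeholder credentials; the
output above shows the CLI honours the tenant filter even when
--all-tenants is omitted:

    from novaclient import client

    # Placeholders -- replace with real admin credentials and endpoint.
    nova = client.Client('2', 'admin', 'secret', 'admin',
                         'http://controller:5000/v2.0')

    # The help text says the tenant filter is meant to be combined with
    # all_tenants; both are passed here as search options.
    servers = nova.servers.list(
        search_opts={'all_tenants': 1,
                     'tenant_id': 'f7a1114e87d9439986a73e9d419a71f7'})
    for s in servers:
        print(s.id, s.name, s.status)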

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: api

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1497151

Title:
  error messages in nova help list

Status in OpenStack Compute (nova):
  New

Bug description:
  1. version
  kilo 2015.1.0

  2. Relevant log files:
  no log

  3. Reproduce steps:

  3.1
  nova help list

  --all-tenants [<0|1>] Display information from all tenants (Admin only).
  --tenant []   Display information from single tenant (Admin only).
                The --all-tenants option must also be provided.

  3.2
  nova list --tenant f7a1114e87d9439986a73e9d419a71f7 (this is one of my tenant IDs)

  Expected result:

  Something like “you need to also provide --all-tenants” should be
  prompted.

  Actual result:

  [root@devcontrol ~(keystone_admin)]# nova list --tenant f7a1114e87d9439986a73e9d419a71f7
  +--------------------------------------+--------------+----------------------------------+--------+------------+-------------+--------------------+
  | ID                                   | Name         | Tenant ID                        | Status | Task State | Power State | Networks           |
  +--------------------------------------+--------------+----------------------------------+--------+------------+-------------+--------------------+
  | 4fc84ebe-fee7-4c4e-86d8-7cf5a191135e | testflavor02 | f7a1114e87d9439986a73e9d419a71f7 | ACTIVE | -          | Running     | net1=192.168.0.103 |
  | bb596070-fca2-4cb8-917b-1374c78d1175 | testflavor03 | f7a1114e87d9439986a73e9d419a71f7 | ERROR  | -          | NOSTATE     |                    |
  | 3e211f9b-e026-464b-aadc-1f00f5d1a69f | v3test       | f7a1114e87d9439986a73e9d419a71f7 | ACTIVE | -          | Running     | net1=192.168.0.93  |
  | 280cb34c-0548-4ed3-b0d0-f391e875101d | v5test       | f7a1114e87d9439986a73e9d419a71f7 | ACTIVE | -          | Running     | net1=192.168.0.221 |
  +--------------------------------------+--------------+----------------------------------+--------+------------+-------------+--------------------+

  4

  The actual result shows that the message “The --all-tenants option must
  also be provided.” in “nova help list” is wrong, so it should be deleted.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1497151/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497142] [NEW] cinderclient traces in tests output

2015-09-18 Thread Matthias Runge
Public bug reported:

In the current checkout (Sep 18th, 7 UTC) there are cinderclient traces
shown.


./run_tests.sh -N -P openstack_dashboard
nosetests openstack_dashboard --nocapture --nologcapture 
--cover-package=openstack_dashboard --cover-inclusive --all-modules 
--exclude-dir=openstack_dashboard/test/integration_tests --verbosity=1
Creating test database for alias 'default'...
..S..DEBUG:oslo_policy.openstack.common.fileutils:Reloading
 cached file 
/home/mrunge/work/horizon/openstack_dashboard/conf/keystone_policy.json
DEBUG:oslo_policy.policy:Reloaded policy file: 
/home/mrunge/work/horizon/openstack_dashboard/conf/keystone_policy.json
DEBUG:oslo_policy.openstack.common.fileutils:Reloading cached file 
/home/mrunge/work/horizon/openstack_dashboard/conf/nova_policy.json
DEBUG:oslo_policy.policy:Reloaded policy file: 
/home/mrunge/work/horizon/openstack_dashboard/conf/nova_policy.json
..DEBUG:cinderclient.client:Connection
 error: ('Connection aborted.', gaierror(-5, 'No address associated with 
hostname'))
.DEBUG:cinderclient.client:Connection error: ('Connection aborted.', 
gaierror(-5, 'No address associated with hostname'))
...DEBUG:cinderclient.client:Connection error: ('Connection aborted.', 
gaierror(-5, 'No address associated with hostname'))
.DEBUG:oslo_policy.policy:Reloaded
 policy file: 
/home/mrunge/work/horizon/openstack_dashboard/conf/keystone_policy.json
.DEBUG:oslo_policy.policy:Reloaded policy file: 
/home/mrunge/work/horizon/openstack_dashboard/conf/keystone_policy.json
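
The gaierror suggests cinderclient is attempting a real HTTP connection
during the unit tests instead of being stubbed out. A minimal sketch of
silencing such a trace with mock; the patch target
openstack_dashboard.api.cinder.cinderclient is an assumption made for the
example, not a confirmed fix:

    import mock

    def _code_under_test():
        # Placeholder for the view/test logic that would otherwise
        # talk to a real cinder endpoint.
        pass

    # Patch target is an assumption for the example.
    with mock.patch('openstack_dashboard.api.cinder.cinderclient') as cc:
        cc.return_value.volumes.list.return_value = []
        _code_under_test()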

** Affects: horizon
 Importance: High
 Status: New

** Changed in: horizon
Milestone: None => liberty-rc1

** Changed in: horizon
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1497142

Title:
  cinderclient traces in tests output

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the current checkout (Sep 18th, 7 UTC) there are cinderclient traces
  shown.

  
  ./run_tests.sh -N -P openstack_dashboard
  nosetests openstack_dashboard --nocapture --nologcapture 
--cover-package=openstack_dashboard --cover-inclusive --all-modules 
--exclude-dir=openstack_dashboard/test/integration_tests --verbosity=1
  Creating test database for alias 'default'...
  
..S..DEBUG:oslo_policy.openstack.common.fileutils:Reloading
 cached file 
/home/mrunge/work/horizon/openstack_dashboard/conf/keystone_policy.json
  DEBUG:oslo_policy.policy:Reloaded policy file: 
/home/mrunge/work/horizon/openstack_dashboard/conf/keystone_policy.json
  DEBUG:oslo_policy.openstack.common.fileutils:Reloading cached file 
/home/mrunge/work/horizon/openstack_dashboard/conf/nova_policy.json
  DEBUG:oslo_policy.policy:Reloaded policy file: 
/home/mrunge/work/horizon/openstack_dashboard/conf/nova_policy.json
  

[Yahoo-eng-team] [Bug 1497132] [NEW] tokenless auth is being too chatty on every call

2015-09-18 Thread Steve Martinelli
Public bug reported:

This logic is being run far too often:
https://github.com/openstack/keystone/blob/master/keystone/middleware/core.py#L253-L281

resulting in logs like the following:
2015-09-16 23:34:04.261007 108719 INFO keystone.middleware.core 
[req-f715dbbe-8dac-490e-a02c-4a430eefb1a0 - - - - -] Cannot find client issuer 
in env by the issuer attribute - SSL_CLIENT_I_DN.
2015-09-16 23:34:04.265523 108719 DEBUG keystone.middleware.core 
[req-f715dbbe-8dac-490e-a02c-4a430eefb1a0 - - - - -] There is either no auth 
token in the request or the certificate issuer is not trusted. No auth context 
will be set. process_request /opt/stack/keystone/keystone/middleware/core.py:307
2015-09-16 23:34:04.304670 108719 INFO keystone.common.wsgi 
[req-f715dbbe-8dac-490e-a02c-4a430eefb1a0 - - - - -] GET 
http://172.16.240.136:35357/v2.0/
2015-09-16 23:34:04.454396 108722 INFO keystone.middleware.core 
[req-3c130a0f-8147-46bd-83d3-c55d873ed3a6 - - - - -] Cannot find client issuer 
in env by the issuer attribute - SSL_CLIENT_I_DN.
2015-09-16 23:34:04.460344 108722 DEBUG keystone.middleware.core 
[req-3c130a0f-8147-46bd-83d3-c55d873ed3a6 - - - - -] There is either no auth 
token in the request or the certificate issuer is not trusted. No auth context 
will be set. process_request /opt/stack/keystone/keystone/middleware/core.py:307
2015-09-16 23:34:04.501183 108722 INFO keystone.common.wsgi 
[req-3c130a0f-8147-46bd-83d3-c55d873ed3a6 - - - - -] POST 
http://172.16.240.136:35357/v2.0/tokens
2015-09-16 23:34:05.002308 108721 INFO keystone.middleware.core 
[req-feba423f-afb7-4650-a6cc-a83242d69d39 - - - - -] Cannot find client issuer 
in env by the issuer attribute - SSL_CLIENT_I_DN.
2015-09-16 23:34:05.006774 108721 DEBUG keystone.middleware.core 
[req-feba423f-afb7-4650-a6cc-a83242d69d39 - - - - -] There is either no auth 
token in the request or the certificate issuer is not trusted. No auth context 
will be set. process_request /opt/stack/keystone/keystone/middleware/core.py:307
2015-09-16 23:34:05.053912 108721 INFO keystone.common.wsgi 
[req-feba423f-afb7-4650-a6cc-a83242d69d39 - - - - -] POST 
http://172.16.240.136:35357/v2.0/tokens
2015-09-16 23:34:05.290813 108720 INFO keystone.common.wsgi 
[req-9b96e50a-f4c9-4dc0-b3bf-7fc26f44c59a - - - - -] GET 
http://172.16.240.136:35357/
2015-09-16 23:34:05.304653 108718 DEBUG keystone.middleware.core 
[req-ff998f4d-0df5-4bc8-9ad5-dbc498b04dc7 - - - - -] RBAC: auth_context: 
{'is_delegated_auth': False, 'access_token_id': None, 'user_id': 
u'b95ea3fcaf8a49309ee2b406c02f383e', 'roles': [u'anotherrole', u'Member'], 
'trustee_id': None, 'trustor_id': None, 'consumer_id': None, 'token': 
, 'project_id': 
u'ae819f8aeda04d8488b4412baed1730b', 'trust_id': None} process_request 
/opt/stack/keystone/keystone/middleware/core.py:311
2015-09-16 23:34:05.306174 108718 INFO keystone.common.wsgi 
[req-ff998f4d-0df5-4bc8-9ad5-dbc498b04dc7 - - - - -] GET 
http://172.16.240.136:35357/v2.0/users
2015-09-16 23:34:05.306694 108718 DEBUG keystone.policy.backends.rules 
[req-ff998f4d-0df5-4bc8-9ad5-dbc498b04dc7 - - - - -] enforce admin_required: 
{'user_id': u'b95ea3fcaf8a49309ee2b406c02f383e', u'is_admin': 0, u'roles': 
[u'anotherrole', u'Member'], 'tenant_id': u'ae819f8aeda04d8488b4412baed1730b'} 
enforce /opt/stack/keystone/keystone/policy/backends/rules.py:76
2015-09-16 23:34:05.331264 108718 WARNING keystone.common.wsgi 
[req-ff998f4d-0df5-4bc8-9ad5-dbc498b04dc7 - - - - -] You are not authorized to 
perform the requested action: admin_required (Disable debug mode to suppress 
these details.)
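
One hedged way to cut the noise (not necessarily the fix the keystone team
will choose) is to emit the "cannot find client issuer" message at DEBUG
instead of INFO, so ordinary token-based requests stay quiet. A minimal
sketch of such a helper; the function name is made up for the example:

    import logging

    LOG = logging.getLogger(__name__)

    def issuer_from_env(env, issuer_attr='SSL_CLIENT_I_DN'):
        # Return the client certificate issuer from the WSGI environ, if
        # any. Logging at DEBUG keeps non-tokenless requests from flooding
        # the INFO log.
        issuer = env.get(issuer_attr)
        if issuer is None:
            LOG.debug('Cannot find client issuer in env by the issuer '
                      'attribute - %s.', issuer_attr)
        return issuer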

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1497132

Title:
  tokenless auth is being too chatty on every call

Status in Keystone:
  New

Bug description:
  This logic is being run far too often:
  
https://github.com/openstack/keystone/blob/master/keystone/middleware/core.py#L253-L281

  resulting in logs like the following:
  2015-09-16 23:34:04.261007 108719 INFO keystone.middleware.core 
[req-f715dbbe-8dac-490e-a02c-4a430eefb1a0 - - - - -] Cannot find client issuer 
in env by the issuer attribute - SSL_CLIENT_I_DN.
  2015-09-16 23:34:04.265523 108719 DEBUG keystone.middleware.core 
[req-f715dbbe-8dac-490e-a02c-4a430eefb1a0 - - - - -] There is either no auth 
token in the request or the certificate issuer is not trusted. No auth context 
will be set. process_request /opt/stack/keystone/keystone/middleware/core.py:307
  2015-09-16 23:34:04.304670 108719 INFO keystone.common.wsgi 
[req-f715dbbe-8dac-490e-a02c-4a430eefb1a0 - - - - -] GET 
http://172.16.240.136:35357/v2.0/
  2015-09-16 23:34:04.454396 108722 INFO keystone.middleware.core 
[req-3c130a0f-8147-46bd-83d3-c55d873ed3a6 - - - - -] Cannot find client issuer 
in env by the issuer attribute - SSL_CLIENT_I_DN.
  2015-09-16 23:34:04.460344 108722 DEBUG keystone.middlewar