[Yahoo-eng-team] [Bug 1569779] Re: allow to investigate instance actions after instance deletion

2016-04-13 Thread zhaolihui
In Mitaka,

Create an instance and run 'nova list' to view the instance's UUID.
+--------------------------------------+-------+--------+------------+-------------+---------------------------------------------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks                                                |
+--------------------------------------+-------+--------+------------+-------------+---------------------------------------------------------+
| 738c80f1-03d0-4a93-844b-bee22b3654de | first | ACTIVE | -          | Running     | private=fd33:f306:b926:0:f816:3eff:fed8:34b5, 10.0.0.3  |
+--------------------------------------+-------+--------+------------+-------------+---------------------------------------------------------+

Running 'nova instance-action-list 738c80f1-03d0-4a93-844b-bee22b3654de' returns:
+--------+-------------------------------------------+---------+-------------------------+
| Action | Request_ID                                | Message | Start_Time              |
+--------+-------------------------------------------+---------+-------------------------+
| create | req-e5c1d5b4-590e-4f7f-ab2b-025302998bff  | -       | 2016-04-14T03:06:46.00  |
+--------+-------------------------------------------+---------+-------------------------+

After deleting the instance, 'nova instance-action-list 738c80f1-03d0-4a93-844b-bee22b3654de' returns:
+--------+-------------------------------------------+---------+-------------------------+
| Action | Request_ID                                | Message | Start_Time              |
+--------+-------------------------------------------+---------+-------------------------+
| create | req-e5c1d5b4-590e-4f7f-ab2b-025302998bff  | -       | 2016-04-14T03:06:46.00  |
| delete | req-16e7dc74-1449-4e4a-a662-a0279c88620f  | -       | 2016-04-14T03:18:34.00  |
+--------+-------------------------------------------+---------+-------------------------+

There is no 404 error.
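
For completeness, the same information can be retrieved programmatically. A
minimal sketch with python-novaclient, assuming credentials in the usual OS_*
environment variables and a deployment that exposes the microversion which
returns actions for deleted instances (2.21, if memory serves):

import os

from keystoneauth1 import identity, session
from novaclient import client

auth = identity.Password(
    auth_url=os.environ['OS_AUTH_URL'],
    username=os.environ['OS_USERNAME'],
    password=os.environ['OS_PASSWORD'],
    project_name=os.environ['OS_PROJECT_NAME'],
    user_domain_name=os.environ.get('OS_USER_DOMAIN_NAME', 'Default'),
    project_domain_name=os.environ.get('OS_PROJECT_DOMAIN_NAME', 'Default'))
nova = client.Client('2.21', session=session.Session(auth=auth))

# The actions stay listable even after the server itself is gone.
for action in nova.instance_action.list('738c80f1-03d0-4a93-844b-bee22b3654de'):
    print(action.action, action.request_id, action.start_time)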

** Changed in: nova
   Status: New => Invalid

** Changed in: nova
 Assignee: zhaolihui (zhaolh) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1569779

Title:
  allow to investigate instance actions after instance deletion

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Right now, if an instance has been deleted, 'nova instance-action-list'
  returns 404. Given the very specific nature of the action list, it would
  be very useful to be able to see action lists for deleted instances,
  especially the deletion request.

  Can this feature be added to nova? At least for administrators.

  Thanks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1569779/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1570171] [NEW] ip_conntrack only delete one direction entry

2016-04-13 Thread yujie
Public bug reported:

The test was run against neutron master.
I used devstack to create one network and two VMs on that network; vm1's fixed
IP is 10.0.0.3 and vm2's fixed IP is 10.0.0.4.
Both VMs are bound to security group sg1:
   rule1: ingress, any protocol, any remote ip prefix
   rule2: egress, any protocol, any remote ip prefix

1. vm1 pings vm2 and vm2 pings vm1; the conntrack entries are:
$ sudo conntrack -L -p icmp
icmp 1 29 src=10.0.0.3 dst=10.0.0.4 type=8 code=0 id=21761 src=10.0.0.4 dst=10.0.0.3 type=0 code=0 id=21761 mark=0 zone=4 use=1
icmp 1 29 src=10.0.0.4 dst=10.0.0.3 type=8 code=0 id=22017 src=10.0.0.3 dst=10.0.0.4 type=0 code=0 id=22017 mark=0 zone=4 use=1
icmp 1 29 src=10.0.0.3 dst=10.0.0.4 type=8 code=0 id=21761 src=10.0.0.4 dst=10.0.0.3 type=0 code=0 id=21761 mark=0 zone=3 use=1
icmp 1 29 src=10.0.0.4 dst=10.0.0.3 type=8 code=0 id=22017 src=10.0.0.3 dst=10.0.0.4 type=0 code=0 id=22017 mark=0 zone=3 use=1
conntrack v1.4.1 (conntrack-tools): 4 flow entries have been shown.

2. vm2 is unbound from sg1; the conntrack entries become:
$ sudo conntrack -L -p icmp
icmp 1 29 src=10.0.0.3 dst=10.0.0.4 type=8 code=0 id=21761 src=10.0.0.4 dst=10.0.0.3 type=0 code=0 id=21761 mark=0 zone=4 use=1
icmp 1 29 src=10.0.0.4 dst=10.0.0.3 type=8 code=0 id=22017 src=10.0.0.3 dst=10.0.0.4 type=0 code=0 id=22017 mark=0 zone=4 use=1
icmp 1 29 src=10.0.0.4 dst=10.0.0.3 type=8 code=0 id=22017 src=10.0.0.3 dst=10.0.0.4 type=0 code=0 id=22017 mark=0 zone=3 use=1
conntrack v1.4.1 (conntrack-tools): 3 flow entries have been shown.

Now vm1 cannot connect to vm2, which is correct; but vm2 can still ping
vm1 successfully. The entry "icmp 1 29 src=10.0.0.4 dst=10.0.0.3
type=8 code=0 id=22017 src=10.0.0.3 dst=10.0.0.4 type=0 code=0 id=22017
mark=0 zone=3 use=1" was not deleted as expected.
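
For illustration, a sketch of the cleanup one would expect when the security
group binding is removed: drop the entries in the VM's conntrack zone that
reference the remote address in either direction. This uses conntrack's zone
filter (-w); the zone and addresses are the ones from the output above, and
the snippet is illustrative rather than the agent's actual code.

import subprocess

def purge_conntrack(zone, remote_ip):
    # Delete entries originated by the remote VM as well as those replying
    # to it; the report above is that only one direction gets removed.
    for direction in ('-s', '-d'):
        subprocess.call(['conntrack', '-D', '-p', 'icmp',
                         '-w', str(zone), direction, remote_ip])

# e.g. purge_conntrack(3, '10.0.0.4')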

** Affects: neutron
 Importance: Undecided
 Assignee: yujie (16189455-d)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => yujie (16189455-d)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1570171

Title:
  ip_conntrack only delete one direction entry

Status in neutron:
  New

Bug description:
  The test was run against neutron master.
  I used devstack to create one network and two VMs on that network; vm1's fixed IP is 10.0.0.3 and vm2's fixed IP is 10.0.0.4.
  Both VMs are bound to security group sg1:
     rule1: ingress, any protocol, any remote ip prefix
     rule2: egress, any protocol, any remote ip prefix

  1. vm1 pings vm2 and vm2 pings vm1; the conntrack entries are:
  $ sudo conntrack -L -p icmp
  icmp 1 29 src=10.0.0.3 dst=10.0.0.4 type=8 code=0 id=21761 src=10.0.0.4 dst=10.0.0.3 type=0 code=0 id=21761 mark=0 zone=4 use=1
  icmp 1 29 src=10.0.0.4 dst=10.0.0.3 type=8 code=0 id=22017 src=10.0.0.3 dst=10.0.0.4 type=0 code=0 id=22017 mark=0 zone=4 use=1
  icmp 1 29 src=10.0.0.3 dst=10.0.0.4 type=8 code=0 id=21761 src=10.0.0.4 dst=10.0.0.3 type=0 code=0 id=21761 mark=0 zone=3 use=1
  icmp 1 29 src=10.0.0.4 dst=10.0.0.3 type=8 code=0 id=22017 src=10.0.0.3 dst=10.0.0.4 type=0 code=0 id=22017 mark=0 zone=3 use=1
  conntrack v1.4.1 (conntrack-tools): 4 flow entries have been shown.

  2. vm2 is unbound from sg1; the conntrack entries become:
  $ sudo conntrack -L -p icmp
  icmp 1 29 src=10.0.0.3 dst=10.0.0.4 type=8 code=0 id=21761 src=10.0.0.4 dst=10.0.0.3 type=0 code=0 id=21761 mark=0 zone=4 use=1
  icmp 1 29 src=10.0.0.4 dst=10.0.0.3 type=8 code=0 id=22017 src=10.0.0.3 dst=10.0.0.4 type=0 code=0 id=22017 mark=0 zone=4 use=1
  icmp 1 29 src=10.0.0.4 dst=10.0.0.3 type=8 code=0 id=22017 src=10.0.0.3 dst=10.0.0.4 type=0 code=0 id=22017 mark=0 zone=3 use=1
  conntrack v1.4.1 (conntrack-tools): 3 flow entries have been shown.

  Now vm1 cannot connect to vm2, which is correct; but vm2 can still
  ping vm1 successfully. The entry "icmp 1 29 src=10.0.0.4
  dst=10.0.0.3 type=8 code=0 id=22017 src=10.0.0.3 dst=10.0.0.4 type=0
  code=0 id=22017 mark=0 zone=3 use=1" was not deleted as expected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1570171/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1570158] [NEW] memcache pool reap issue (stable/liberty)

2016-04-13 Thread Matt Fischer
Public bug reported:

There seems to be a code error in the memcache pool cleanup. I'm seeing
this on stable/liberty (built as of 2 weeks ago). I don't have a
specific reproducer for this, it just seems to happen. Looking at the
code I don't really understand how this can happen.

2016-04-13 20:50:29.124 46 ERROR keystone.common.wsgi [req-30e23a7d-754d-4a89-904f-1804a20029f2 - - - - -] deque index out of range
2016-04-13 20:50:29.124 46 ERROR keystone.common.wsgi Traceback (most recent call last):
2016-04-13 20:50:29.124 46 ERROR keystone.common.wsgi   File "/venv/local/lib/python2.7/site-packages/keystone/common/wsgi.py", line 248, in __call__
2016-04-13 20:50:29.124 46 ERROR keystone.common.wsgi     result = method(context, **params)
2016-04-13 20:50:29.124 46 ERROR keystone.common.wsgi   File "/venv/local/lib/python2.7/site-packages/keystone/common/controller.py", line 138, in inner
2016-04-13 20:50:29.124 46 ERROR keystone.common.wsgi     context['subject_token_id']))
2016-04-13 20:50:29.124 46 ERROR keystone.common.wsgi   File "/venv/local/lib/python2.7/site-packages/keystone/token/provider.py", line 188, in validate_token
2016-04-13 20:50:29.124 46 ERROR keystone.common.wsgi     token = self._validate_token(unique_id)
2016-04-13 20:50:29.124 46 ERROR keystone.common.wsgi   File "/venv/local/lib/python2.7/site-packages/dogpile/cache/region.py", line 1053, in decorate
2016-04-13 20:50:29.124 46 ERROR keystone.common.wsgi     should_cache_fn)
2016-04-13 20:50:29.124 46 ERROR keystone.common.wsgi   File "/venv/local/lib/python2.7/site-packages/dogpile/cache/region.py", line 657, in get_or_create
2016-04-13 20:50:29.124 46 ERROR keystone.common.wsgi     async_creator) as value:
2016-04-13 20:50:29.124 46 ERROR keystone.common.wsgi   File "/venv/local/lib/python2.7/site-packages/dogpile/core/dogpile.py", line 158, in __enter__
2016-04-13 20:50:29.124 46 ERROR keystone.common.wsgi     return self._enter()
2016-04-13 20:50:29.124 46 ERROR keystone.common.wsgi   File "/venv/local/lib/python2.7/site-packages/dogpile/core/dogpile.py", line 91, in _enter
2016-04-13 20:50:29.124 46 ERROR keystone.common.wsgi     value = value_fn()
2016-04-13 20:50:29.124 46 ERROR keystone.common.wsgi   File "/venv/local/lib/python2.7/site-packages/dogpile/cache/region.py", line 610, in get_value
2016-04-13 20:50:29.124 46 ERROR keystone.common.wsgi     value = self.backend.get(key)
2016-04-13 20:50:29.124 46 ERROR keystone.common.wsgi   File "/venv/local/lib/python2.7/site-packages/dogpile/cache/backends/memcached.py", line 161, in get
2016-04-13 20:50:29.124 46 ERROR keystone.common.wsgi     value = self.client.get(key)
2016-04-13 20:50:29.124 46 ERROR keystone.common.wsgi   File "/venv/local/lib/python2.7/site-packages/keystone/common/cache/backends/memcache_pool.py", line 36, in _run_method
2016-04-13 20:50:29.124 46 ERROR keystone.common.wsgi     return getattr(client, __name)(*args, **kwargs)
2016-04-13 20:50:29.124 46 ERROR keystone.common.wsgi   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
2016-04-13 20:50:29.124 46 ERROR keystone.common.wsgi     self.gen.next()
2016-04-13 20:50:29.124 46 ERROR keystone.common.wsgi   File "/venv/local/lib/python2.7/site-packages/keystone/common/cache/_memcache_pool.py", line 132, in acquire
2016-04-13 20:50:29.124 46 ERROR keystone.common.wsgi     self._drop_expired_connections()
2016-04-13 20:50:29.124 46 ERROR keystone.common.wsgi   File "/venv/local/lib/python2.7/site-packages/keystone/common/cache/_memcache_pool.py", line 166, in _drop_expired_connections
2016-04-13 20:50:29.124 46 ERROR keystone.common.wsgi     while self.queue and self.queue[0].ttl < now:
2016-04-13 20:50:29.124 46 ERROR keystone.common.wsgi IndexError: deque index out of range
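
The failing loop indexes the shared deque right after checking that it is
non-empty, so a concurrent acquire() can empty it in between. A minimal
sketch of a more defensive reap loop, using a simplified stand-in class
rather than keystone's real _MemcacheClientPool:

import collections
import time

class PoolSketch(object):
    """Simplified stand-in for the pool in _memcache_pool.py."""

    def __init__(self):
        self.queue = collections.deque()  # shared between greenthreads

    def _drop_expired_connections(self):
        now = time.time()
        while True:
            try:
                # Index and pop inside try/except: another greenthread may
                # drain the deque between the emptiness check and queue[0],
                # which is what raises the IndexError in the trace above.
                if self.queue[0].ttl >= now:
                    break
                self.queue.popleft()
            except IndexError:
                break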

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1570158

Title:
  memcache pool reap issue (stable/liberty)

Status in OpenStack Identity (keystone):
  New

Bug description:
  There seems to be a code error in the memcache pool cleanup. I'm
  seeing this on stable/liberty (built as of 2 weeks ago). I don't have
  a specific reproducer for this, it just seems to happen. Looking at
  the code I don't really understand how this can happen.

  2016-04-13 20:50:29.124 46 ERROR keystone.common.wsgi [req-30e23a7d-754d-4a89-904f-1804a20029f2 - - - - -] deque index out of range
  2016-04-13 20:50:29.124 46 ERROR keystone.common.wsgi Traceback (most recent call last):
  2016-04-13 20:50:29.124 46 ERROR keystone.common.wsgi   File "/venv/local/lib/python2.7/site-packages/keystone/common/wsgi.py", line 248, in __call__
  2016-04-13 20:50:29.124 46 ERROR keystone.common.wsgi     result = method(context, **params)
  2016-04-13 20:50:29.124 46 ERROR keystone.common.wsgi   File 

[Yahoo-eng-team] [Bug 1550886] Re: L3 Agent's fullsync is raceful with creation of HA router

2016-04-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/257059
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=9c3c19f07ce52e139d431aec54341c38a183f0b7
Submitter: Jenkins
Branch: master

commit 9c3c19f07ce52e139d431aec54341c38a183f0b7
Author: Kevin Benton 
Date:   Thu Feb 18 03:48:29 2016 -0800

Add ALLOCATING state to routers

This patch adds a new ALLOCATING status to routers
to indicate that the routers are still being built on the
Neutron server. Any routers in this state are excluded in
router retrievals by the L3 agent since they are not yet
ready to be wired up.

This is necessary when a router is made up of several
distinct Neutron resources that cannot all be put
into a single transaction. This patch applies this new
state to HA routers while their internal HA ports and
networks are being created/deleted so the L3 HA agent
will never retrieve a partially formed HA router. It's
important to note that the ALLOCATING status carries over
until after the scheduling is done, which ensures that
routers that weren't fully scheduled will not be sent to
the agents.

An HA router is placed in this state only when it is being
created or converted to/from the HA state since this is
disruptive to the dataplane.

This patch also reverts the changes introduced in
Iadb5a69d4cbc2515fb112867c525676cadea002b since they will
be handled by the ALLOCATING logic instead.

Co-Authored-By: Ann Kamyshnikova 
Co-Authored-By: John Schwarz 

APIImpact
Closes-Bug: #1550886
Related-bug: #1499647
Change-Id: I22ff5a5a74527366da8f82982232d4e70e455570


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1550886

Title:
  L3 Agent's fullsync is raceful with creation of HA router

Status in neutron:
  Fix Released

Bug description:
  When creating an HA router, after the server creates all the DB
  objects (including the HA network and ports if it's the first one),
  the server continues on to schedule the router to (some of) the
  available agents.

  The race occurs when an L3 agent issues a sync_router
  request, which later down the line ends up in an
  auto_schedule_routers() call. If this happens before the above
  scheduling (from create_router()) is complete, the server will refuse
  to schedule the router to the other intended L3 agents, resulting in
  fewer agents being scheduled.

  The only way to fix this is either restarting one of the L3 agents
  which didn't get scheduled, or recreating the router. Either is a bad
  option.

  An example of the state:
  $ neutron l3-agent-list-hosting-router router2
  +--------------------------------------+----------+----------------+-------+----------+
  | id                                   | host     | admin_state_up | alive | ha_state |
  +--------------------------------------+----------+----------------+-------+----------+
  | d05da32b-34e7-4c7f-b0dd-938328a0c0ed | vpn-6-12 | True           | :-)   | active   |
  +--------------------------------------+----------+----------------+-------+----------+
  (Only 1 of the agents got scheduled with the router, even though there are 3
  suitable agents that normally get scheduled without the race.)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1550886/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1570122] [NEW] ipv6 prefix delegated subnets are not accessible outside of the router they are attached to

2016-04-13 Thread Matthew Thode
Public bug reported:

Currently ip6tables in the qrouter namespace has the following rule,
which causes unmarked packets to be dropped:

-A neutron-l3-agent-scope -o qr-ca9ffa4f-fd -m mark ! --mark 0x401/0x -j DROP

It seems that prefix delegated subnets don't get that mark set on
incoming traffic from the gateway port, so I had to add my own rule to do
that:

ip6tables -t mangle -A neutron-l3-agent-scope -i qg-ac290c4b-4f -j MARK --set-xmark 0x401/0x

At the moment that is probably too permissive; it should likely be
limited based on the delegated prefix, e.g. with a '-d dead:beef:cafe::/64'
or whatever the delegation is (I tested this and it does work).
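
For illustration, the narrower workaround could be scripted like this; the
interface, prefix and mark value are placeholders (the mark/mask is shown
truncated in this report, so substitute whatever your l3-agent actually uses):

import subprocess

def add_pd_mark_rule(gateway_port, delegated_prefix, mark_spec):
    # Mark only traffic destined for the delegated prefix instead of
    # everything arriving on the gateway port.
    subprocess.check_call([
        'ip6tables', '-t', 'mangle', '-A', 'neutron-l3-agent-scope',
        '-i', gateway_port, '-d', delegated_prefix,
        '-j', 'MARK', '--set-xmark', mark_spec])

# e.g. add_pd_mark_rule('qg-ac290c4b-4f', 'dead:beef:cafe::/64', '0x401/0x...')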

** Affects: neutron
 Importance: Undecided
 Status: Confirmed


** Tags: ipv6 l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1570122

Title:
  ipv6 prefix delegated subnets are not accessible outside of the
  router they are attached to

Status in neutron:
  Confirmed

Bug description:
  Currently ip6tables in the qrouter namespace has the following rule,
  which causes unmarked packets to be dropped:

  -A neutron-l3-agent-scope -o qr-ca9ffa4f-fd -m mark ! --mark 0x401/0x -j DROP

  It seems that prefix delegated subnets don't get that mark set on
  incoming traffic from the gateway port, so I had to add my own rule to do
  that:

  ip6tables -t mangle -A neutron-l3-agent-scope -i qg-ac290c4b-4f -j MARK --set-xmark 0x401/0x

  At the moment that is probably too permissive; it should likely be
  limited based on the delegated prefix, e.g. with a '-d dead:beef:cafe::/64'
  or whatever the delegation is (I tested this and it does work).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1570122/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1570113] [NEW] compute nodes do not get dvrha router update from server

2016-04-13 Thread Adolfo Duarte
Public bug reported:

branch: master (April 13th 2016):  commit
2a305c563073a3066aac3f07aab3c895ec2cd2fb

Topology: 
1 controller (neutronserver)
3 network nodes (l3_agent in dvr_snat mode)
2 compute nodes (l3_agent in dvr mode)

behavior: when a dvr/ha router is created and a vm is instantiated on the
compute node, the l3_agent on the compute node does not instantiate a
local router (the qr- namespace along with all the interfaces).

expected: when a dvr/ha router is created and a vm is instantiated on a
compute node, the l3_agent running on that compute node should
instantiate a local router. The qr-<router-id> namespace along with all
the appropriate interfaces should be present locally on the compute
node. Similar in behavior to a dvr-only router (without ha).

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1570113

Title:
  compute nodes do not get dvrha router update from server

Status in neutron:
  New

Bug description:
  branch: master (April 13th 2016):  commit
  2a305c563073a3066aac3f07aab3c895ec2cd2fb

  Topology: 
  1 controller (neutronserver)
  3 network nodes (l3_agent in dvr_snat mode)
  2 compute nodes (l3_agent in dvr mode)

  behavior: when a dvr/ha router is created and a vm is instantiated on the
  compute node, the l3_agent on the compute node does not instantiate a
  local router (the qr- namespace along with all the interfaces).

  expected: when a dvr/ha router is created and a vm is instantiated on
  a compute node, the l3_agent running on that compute node should
  instantiate a local router. The qr-<router-id> namespace along with all
  the appropriate interfaces should be present locally on the compute
  node. Similar in behavior to a dvr-only router (without ha).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1570113/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1529836] Re: Fix deprecated library function (os.popen()).

2016-04-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/303211
Committed: 
https://git.openstack.org/cgit/openstack/zaqar-ui/commit/?id=8dac58c47e1f0330407852e64771f88a585a8d82
Submitter: Jenkins
Branch: master

commit 8dac58c47e1f0330407852e64771f88a585a8d82
Author: shu-mutou 
Date:   Fri Apr 8 13:14:50 2016 +0900

Replace deprecated library function os.popen() with subprocess

os.popen() is deprecated since python 2.6.
Resolved with use of subprocess module.

Change-Id: I1f06fb713b25c42f975e525711afaf2cf4a58a5a
Closes-Bug: #1529836


** Changed in: zaqar-ui
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1529836

Title:
  Fix deprecated library function (os.popen()).

Status in bilean:
  In Progress
Status in Blazar:
  In Progress
Status in Ceilometer:
  Fix Released
Status in ceilometer-powervm:
  Fix Released
Status in Cinder:
  Fix Released
Status in congress:
  In Progress
Status in devstack:
  In Progress
Status in Glance:
  In Progress
Status in glance_store:
  Fix Released
Status in group-based-policy-specs:
  In Progress
Status in heat:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in horizon-cisco-ui:
  Confirmed
Status in OpenStack Identity (keystone):
  Fix Released
Status in keystoneauth:
  Fix Released
Status in keystonemiddleware:
  Fix Released
Status in Kwapi:
  In Progress
Status in Manila:
  Fix Released
Status in Murano:
  Fix Released
Status in networking-powervm:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in nova-powervm:
  Fix Released
Status in python-heatclient:
  Fix Released
Status in python-keystoneclient:
  Fix Released
Status in Python client library for Zaqar:
  In Progress
Status in Sahara:
  Fix Released
Status in senlin:
  Fix Released
Status in OpenStack Object Storage (swift):
  In Progress
Status in tempest:
  Fix Released
Status in Zaqar-ui:
  Fix Released

Bug description:
  The deprecated library function os.popen() is still in use in some places.
  It needs to be replaced using the subprocess module.
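
  For illustration, the usual replacement pattern looks like this (the command
  shown is made up, not one from zaqar-ui):

import subprocess

# Deprecated pattern:
#     output = os.popen('uname -a').read()
# Equivalent using subprocess:
output = subprocess.check_output(['uname', '-a'], universal_newlines=True)
print(output)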

To manage notifications about this bug go to:
https://bugs.launchpad.net/bilean/+bug/1529836/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1570107] [NEW] block device mapping : booting up attached volumes with no boot index when multiple bootable volumes are used

2016-04-13 Thread ESC Dev
Public bug reported:

Description
===

Provide 2 block device mappings, both bootable volumes, with only 1 of
them having a boot_index (i.e. only 1 specified as the boot device) and
each having a different bus type. Observe that the booted instance is
actually an instance of the image with no boot_index!

Steps to reproduce
==

Created volume from cirros image
Created volume from ubuntu image

Attempted to create an instance sourced from the ubuntu volume and also have
the cirros volume attached:
- set boot index to 0 for the ubuntu volume
- set no boot index for cirros (in trial 2, I set it to -1 explicitly so it
  would only be an attached volume)
- bus type provided for the ubuntu volume was 'ide'
- bus type provided for the cirros volume was 'virtio'
- send the server payload
- observed the instance is booted from the cirros volume despite having no boot
  index!

sample payload:

{"server": {"flavorRef": "8e41c8ce-8bca-4012-92f0-6d83dc4e7dfe",
 "name": "vol-test_cirros-bootable_0_cb3378f2-01d1-4662-8c9e-df6cff05c056",
 "networks": [{"port": "1c11a3f9-47e6-4374-9890-54b5dd91f7c2"}],
 "block_device_mapping_v2": [
   {"uuid": "03b3c938-ed05-41a3-af92-5344431892ae", "source_type": "volume",
    "destination_type": "volume", "delete_on_termination": "false",
    "disk_bus": "virtio"},
   {"uuid": "7cebbbc0-113e-452b-b577-9043140656bf", "boot_index": "0",
    "source_type": "volume", "destination_type": "volume",
    "delete_on_termination": "false", "disk_bus": "ide"},
   {"uuid": "5870a3d6-f475-4312-b26f-5162d76b9074", "source_type": "volume",
    "destination_type": "volume", "delete_on_termination": "false",
    "disk_bus": "virtio"}]}}


NOTE: when the ubuntu volume is the only volume and the bus type is IDE, the
created instance is an ubuntu instance.
Also, if both volumes have their bus types set to virtio, the ubuntu instance
is correctly booted.
This appears to be some bus-type-related issue, but it wasn't documented
anywhere and there's no error.
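
For comparison, a sketch of the mapping the request is trying to express, with
the boot_index values written out explicitly. The UUIDs are the ones from the
payload above, only two of the three volumes are shown, and whether boot_index
-1 behaves the same for every disk_bus is exactly what this report questions.

# ubuntu volume: the intended boot device
# cirros volume: attach-only, marked with boot_index -1 as in trial 2 above
block_device_mapping_v2 = [
    {"uuid": "7cebbbc0-113e-452b-b577-9043140656bf",
     "source_type": "volume", "destination_type": "volume",
     "boot_index": 0, "disk_bus": "ide",
     "delete_on_termination": False},
    {"uuid": "03b3c938-ed05-41a3-af92-5344431892ae",
     "source_type": "volume", "destination_type": "volume",
     "boot_index": -1, "disk_bus": "virtio",
     "delete_on_termination": False},
]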


Expected result
===

Boot from the volume with boot index provided or fail with error


Actual result
=

Boots from the other volume, even though it has no boot index.


Environment
===
OpenStack Kilo

hypervisor : libvirt + KVM
storage : LVM
networking : neutron with openvswitch

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1570107

Title:
  block device mapping : booting up attached volumes with no boot index
  when multiple bootable volumes are used

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===

  Provide 2 block device mappings, both bootable volumes, with only 1 of
  them having a boot_index (i.e. only 1 specified as the boot device) and
  each having a different bus type. Observe that the booted instance is
  actually an instance of the image with no boot_index!

  Steps to reproduce
  ==

  Created volume from cirros image
  Created volume from ubuntu image

  Attempted to create an instance sourced from the ubuntu volume and also have
  the cirros volume attached:
  - set boot index to 0 for the ubuntu volume
  - set no boot index for cirros (in trial 2, I set it to -1 explicitly so it
    would only be an attached volume)
  - bus type provided for the ubuntu volume was 'ide'
  - bus type provided for the cirros volume was 'virtio'
  - send the server payload
  - observed the instance is booted from the cirros volume despite having no
    boot index!

  sample payload:

  {"server": {"flavorRef": "8e41c8ce-8bca-4012-92f0-6d83dc4e7dfe",
   "name": "vol-test_cirros-bootable_0_cb3378f2-01d1-4662-8c9e-df6cff05c056",
   "networks": [{"port": "1c11a3f9-47e6-4374-9890-54b5dd91f7c2"}],
   "block_device_mapping_v2": [
     {"uuid": "03b3c938-ed05-41a3-af92-5344431892ae", "source_type": "volume",
      "destination_type": "volume", "delete_on_termination": "false",
      "disk_bus": "virtio"},
     {"uuid": "7cebbbc0-113e-452b-b577-9043140656bf", "boot_index": "0",
      "source_type": "volume", "destination_type": "volume",
      "delete_on_termination": "false", "disk_bus": "ide"},
     {"uuid": "5870a3d6-f475-4312-b26f-5162d76b9074", "source_type": "volume",
      "destination_type": "volume", "delete_on_termination": "false",
      "disk_bus": "virtio"}]}}



  NOTE: when the ubuntu volume is the only volume and the bus type is IDE, the
  created instance is an ubuntu instance.
  Also, if both volumes have their bus types set to virtio, the ubuntu instance
  is correctly booted.
  This appears to be some bus-type-related issue, but it wasn't documented
  anywhere and there's no error.

  
  Expected result
  ===

  Boot from the volume with boot index provided or fail with error

  
  Actual result
  =

  Boots from the other volume, even though it has no boot index.

  
  Environment
  ===
  OpenStack Kilo

  hypervisor : libvirt + KVM
  storage : LVM
  networking : neutron with openvswitch

To manage notifications about this bug go to:

[Yahoo-eng-team] [Bug 1569289] Re: glance should support the root-tar disk format

2016-04-13 Thread Launchpad Bug Tracker
This bug was fixed in the package glance - 2:12.0.0-0ubuntu2

---
glance (2:12.0.0-0ubuntu2) xenial; urgency=medium

  *  debian/patches/add-root-tar-support.patch: Add support for
     root-tar images for backwards compatibility for nova-lxd
     (LP: #1569289).

 -- Chuck Short   Wed, 13 Apr 2016 10:13:40 -0400

** Changed in: glance (Ubuntu)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1569289

Title:
  glance should support the root-tar disk format

Status in Glance:
  New
Status in glance package in Ubuntu:
  Fix Released

Bug description:
  'root-tar' (indicating a root filesystem in tar archive format) has
  been used for full system containers in OpenStack both under the
  libvirt/lxc driver and under the new lxd driver (nova-lxd package).

  Users are working around the lack of 'root-tar' default support in
  glance by uploading images in this format as 'raw' (which is
  incorrect) or updating the list of supported disk formats using the
  glance-api configuration file to include 'root-tar'.

  We should consider adding this to the list of defaults for supported
  disk formats.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1569289/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496873] Re: nova-compute leaves an open file descriptor after failed check for direct IO support

2016-04-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/224764
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=ea42c340c99b92e4710fbd5b31ddb884d590851b
Submitter: Jenkins
Branch: master

commit ea42c340c99b92e4710fbd5b31ddb884d590851b
Author: Roman Podoliaka 
Date:   Thu Sep 17 13:17:55 2015 +0300

libvirt: delete the last file link in _supports_direct_io()

In _supports_direct_io() check, if open() with O_DIRECT succeeds, but
write() of a 512-bytes aligned block fails, the file descriptor
remains opened after unlink(), which means the temporary file will
persist until nova-compute is stopped.

TrivialFix

Closes-Bug: #1496873

Change-Id: I9c1809e35d9641c0d7bfb331de75e6dabf1bdcfe


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1496873

Title:
  nova-compute leaves an open file descriptor after failed check for
  direct IO support

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  On my Kilo environment I noticed that nova-compute has an open file
  descriptor of a deleted file:

  nova-compute 14204 nova   21w   REG  252,00 117440706 /var/lib/nova/instances/.directio.test (deleted)

  According to logs the check if FS supports direct IO failed:

  2015-09-15 22:11:33.171 14204 DEBUG nova.virt.libvirt.driver [req-f11861ed-bcd8-46cb-8d0b-b7736cce7f80 59d099e0cc1c44e991a02a68dbbb1815 5e6f6da2b2d74a108ccdead3b30f0bcf - - -] Path '/var/lib/nova/instances' does not support direct I/O: '[Errno 22] Invalid argument' _supports_direct_io /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py:2588

  Looks like nova-compute doesn't clean up the file descriptors
  properly, which means the file will persist, until nova-compute is
  stopped.
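
  For illustration, a probe that cannot leak the descriptor closes it in a
  finally block; a minimal sketch (Linux-specific because of os.O_DIRECT, and
  not nova's actual implementation):

import mmap
import os

def supports_direct_io(dirpath, testfile='.directio.test'):
    """Check whether dirpath supports O_DIRECT without leaking the fd."""
    path = os.path.join(dirpath, testfile)
    fd = None
    try:
        fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_DIRECT)
        buf = mmap.mmap(-1, 512)        # O_DIRECT requires an aligned buffer
        buf.write(b'\x00' * 512)
        os.write(fd, buf)
        return True
    except OSError:
        return False
    finally:
        if fd is not None:
            os.close(fd)                # close even if the aligned write fails
        try:
            os.unlink(path)
        except OSError:
            pass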

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1496873/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1568292] Re: broken review priorities link in devref

2016-04-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/304175
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=4994b411da0cc29b6333555858dea29f0e1976a7
Submitter: Jenkins
Branch: master

commit 4994b411da0cc29b6333555858dea29f0e1976a7
Author: Diana Clarke 
Date:   Mon Apr 11 10:24:01 2016 -0400

Minor updates to the how_to_get_involved docs

- fixed a broken link
- updated a couple of links from mitaka -> newton
- added some missing apostrophes
- added some custom gerrit review dashboards

Change-Id: I1cc5087e7073244b13eaf9e8044e79a0fbe4aa9d
Closes-Bug: #1568292


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1568292

Title:
  broken review priorities link in devref

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The 2nd to last section here:

  http://docs.openstack.org/developer/nova/how_to_get_involved.html#why-
  do-code-reviews-if-i-am-not-in-nova-core

  There is a busted link:

  https://etherpad.openstack.org/p/ -nova-priorities-tracking

  That should be this now: https://etherpad.openstack.org/p/newton-nova-
  priorities-tracking

  Also, in the "TODO - I think there is more to add here" section, we
  could list these gerrit review dashboards for nova:

  1. from the IRC channel topic: https://goo.gl/1vTS0Z

  2. small bug fixes: http://ow.ly/WAw1J

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1568292/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1569469] Re: smartos datasource stacktrace if no system-product-name in dmi data

2016-04-13 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.7~bzr1208-0ubuntu1

---
cloud-init (0.7.7~bzr1208-0ubuntu1) xenial; urgency=medium

  * New upstream snapshot.
- phone_home: allow usage of fqdn (LP: #1566824) [Ollie Armstrong]
- chef: straighten out validation_cert and validation_key (LP: #1568940)
- skip bridges when generating fallback networking (LP: #1569974)
- rh_subscription: only check subscription if configured (LP: #1536706)
- SmartOS, CloudSigma: fix error when dmi data is not available
  (LP: #1569469)
- DataSourceNoCloud: fix check_instance_id when upgraded (LP: #1568150)
- lxd: adds basic support for dpkg based lxd-bridge
  configuration. (LP: #1569018)
- centos: Ensure that a resolve conf object is written as a string.
  (LP: #1479988)

 -- Scott Moser   Wed, 13 Apr 2016 13:19:03 -0400

** Changed in: cloud-init (Ubuntu)
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1569469

Title:
  smartos datasource stacktrace if no system-product-name in dmi data

Status in cloud-init:
  Confirmed
Status in cloud-init package in Ubuntu:
  Fix Released

Bug description:
  If there is no product name in the DMI data, the cloud-init log will show:

  DataSourceCloudSigma.py[WARNING]: failed to get hypervisor product name via dmi data
  util.py[WARNING]: Getting data from  failed
  util.py[DEBUG]: Getting data from  failed
  Traceback (most recent call last):
    File "/usr/lib/python3/dist-packages/cloudinit/sources/__init__.py", line 266, in find_source
      if s.get_data():
    File "/usr/lib/python3/dist-packages/cloudinit/sources/DataSourceSmartOS.py", line 250, in get_data
      if 'smartdc' not in system_type.lower():
  AttributeError: 'NoneType' object has no attribute 'lower'

  The fix is simple enough.
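
  For illustration, a guard along these lines avoids calling .lower() on None
  (the helper name is made up; this is the idea, not the exact upstream patch):

def is_smartdc(system_type):
    """Return True only when DMI reports a smartdc hypervisor.

    system_type may be None when system-product-name is missing from the
    DMI data, which is the case that triggered the traceback above.
    """
    return bool(system_type) and 'smartdc' in system_type.lower()

# is_smartdc(None) -> False instead of raising AttributeError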

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1569469/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1569974] Re: networking fallback should ignore bridges

2016-04-13 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.7~bzr1208-0ubuntu1

---
cloud-init (0.7.7~bzr1208-0ubuntu1) xenial; urgency=medium

  * New upstream snapshot.
- phone_home: allow usage of fqdn (LP: #1566824) [Ollie Armstrong]
- chef: straighten out validation_cert and validation_key (LP: #1568940)
- skip bridges when generating fallback networking (LP: #1569974)
- rh_subscription: only check subscription if configured (LP: #1536706)
- SmartOS, CloudSigma: fix error when dmi data is not available
  (LP: #1569469)
- DataSourceNoCloud: fix check_instance_id when upgraded (LP: #1568150)
- lxd: adds basic support for dpkg based lxd-bridge
  configuration. (LP: #1569018)
- centos: Ensure that a resolve conf object is written as a string.
  (LP: #1479988)

 -- Scott Moser   Wed, 13 Apr 2016 13:19:03 -0400

** Changed in: cloud-init (Ubuntu)
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1569974

Title:
  networking fallback should ignore bridges

Status in cloud-init:
  Confirmed
Status in cloud-init package in Ubuntu:
  Fix Released

Bug description:
  Serge pointed me at an issue with cloud-init's fallback networking.
  It will consider bridges as possible devices to dhcp on.
  Since these bridges are configured only by things like lxd (not 
/etc/network/interfaces) they're not going to be good devices to 'dhcp' on and 
get a fallback networking configuration.

  My short solution is to just ignore bridge devices when searching.
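
  For illustration, a sketch of that filter: a device that exposes a 'bridge'
  subdirectory in sysfs is a bridge (lxdbr0, virbr0, ...) and gets skipped
  (illustrative only, not cloud-init's actual code):

import os

def candidate_fallback_nics(sys_class_net='/sys/class/net'):
    """Yield NIC names that are reasonable fallback DHCP targets."""
    for dev in sorted(os.listdir(sys_class_net)):
        if dev == 'lo':
            continue
        if os.path.isdir(os.path.join(sys_class_net, dev, 'bridge')):
            continue  # skip bridges
        yield dev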

  ProblemType: Bug
  DistroRelease: Ubuntu 16.04
  Package: cloud-init 0.7.7~bzr1200-0ubuntu1 [modified: 
lib/systemd/system/cloud-init-local.service 
usr/lib/python3/dist-packages/cloudinit/net/__init__.py]
  ProcVersionSignature: Ubuntu 4.4.0-18.34-generic 4.4.6
  Uname: Linux 4.4.0-18-generic x86_64
  ApportVersion: 2.20.1-0ubuntu1
  Architecture: amd64
  Date: Wed Apr 13 14:22:57 2016
  PackageArchitecture: all
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
  SourcePackage: cloud-init
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1569974/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1568150] Re: xenial lxc containers not starting

2016-04-13 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.7~bzr1208-0ubuntu1

---
cloud-init (0.7.7~bzr1208-0ubuntu1) xenial; urgency=medium

  * New upstream snapshot.
- phone_home: allow usage of fqdn (LP: #1566824) [Ollie Armstrong]
- chef: straighten out validation_cert and validation_key (LP: #1568940)
- skip bridges when generating fallback networking (LP: #1569974)
- rh_subscription: only check subscription if configured (LP: #1536706)
- SmartOS, CloudSigma: fix error when dmi data is not available
  (LP: #1569469)
- DataSourceNoCloud: fix check_instance_id when upgraded (LP: #1568150)
- lxd: adds basic support for dpkg based lxd-bridge
  configuration. (LP: #1569018)
- centos: Ensure that a resolve conf object is written as a string.
  (LP: #1479988)

 -- Scott Moser   Wed, 13 Apr 2016 13:19:03 -0400

** Changed in: cloud-init (Ubuntu)
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1568150

Title:
  xenial lxc containers not starting

Status in cloud-init:
  Fix Committed
Status in juju-core:
  Incomplete
Status in cloud-init package in Ubuntu:
  Fix Released

Bug description:
  When deploying a xenial lxc container to a xenial host, the container
  fails during cloud-init with the following error in the container's
  /var/log/cloud-init-output.log:

  2016-04-08 21:07:05,190 - util.py[WARNING]: failed of stage init-local
  failed run of stage init-local
  
  Traceback (most recent call last):
    File "/usr/bin/cloud-init", line 515, in status_wrapper
      ret = functor(name, args)
    File "/usr/bin/cloud-init", line 250, in main_init
      init.fetch(existing=existing)
    File "/usr/lib/python3/dist-packages/cloudinit/stages.py", line 318, in fetch
      return self._get_data_source(existing=existing)
    File "/usr/lib/python3/dist-packages/cloudinit/stages.py", line 227, in _get_data_source
      ds.check_instance_id(self.cfg)):
    File "/usr/lib/python3/dist-packages/cloudinit/sources/DataSourceNoCloud.py", line 220, in check_instance_id
      dirs=self.seed_dirs)
  AttributeError: 'DataSourceNoCloudNet' object has no attribute 'seed_dirs'

  Trusty containers start just fine.

  Using juju 1.25.5 and MAAS 1.9.2

  Commands to reproduce:

  juju bootstrap --constraints "tags=juju" --upload-tools --show-log --debug
  juju set-constraints "tags="
  juju add-machine --series xenial
  juju deploy --to lxc:1 local:xenial/ubuntu ubuntu

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1568150/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566824] Re: phone_home module should allow fqdn to be posted

2016-04-13 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.7~bzr1208-0ubuntu1

---
cloud-init (0.7.7~bzr1208-0ubuntu1) xenial; urgency=medium

  * New upstream snapshot.
- phone_home: allow usage of fqdn (LP: #1566824) [Ollie Armstrong]
- chef: straighten out validation_cert and validation_key (LP: #1568940)
- skip bridges when generating fallback networking (LP: #1569974)
- rh_subscription: only check subscription if configured (LP: #1536706)
- SmartOS, CloudSigma: fix error when dmi data is not available
  (LP: #1569469)
- DataSourceNoCloud: fix check_instance_id when upgraded (LP: #1568150)
- lxd: adds basic support for dpkg based lxd-bridge
  configuration. (LP: #1569018)
- centos: Ensure that a resolve conf object is written as a string.
  (LP: #1479988)

 -- Scott Moser   Wed, 13 Apr 2016 13:19:03 -0400

** Changed in: cloud-init (Ubuntu)
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1566824

Title:
  phone_home module should allow fqdn to be posted

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Fix Released

Bug description:
  Currently the phone_home module allows for the hostname of a machine
  to be posted but not the FQDN.

  I'm happy to make this change myself but I cannot work out how to
  submit the patch through launchpad.  I'll update this if I work it
  out.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1566824/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1569018] Re: [FFe] support lxc bridge configuration

2016-04-13 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.7~bzr1208-0ubuntu1

---
cloud-init (0.7.7~bzr1208-0ubuntu1) xenial; urgency=medium

  * New upstream snapshot.
- phone_home: allow usage of fqdn (LP: #1566824) [Ollie Armstrong]
- chef: straighten out validation_cert and validation_key (LP: #1568940)
- skip bridges when generating fallback networking (LP: #1569974)
- rh_subscription: only check subscription if configured (LP: #1536706)
- SmartOS, CloudSigma: fix error when dmi data is not available
  (LP: #1569469)
- DataSourceNoCloud: fix check_instance_id when upgraded (LP: #1568150)
- lxd: adds basic support for dpkg based lxd-bridge
  configuration. (LP: #1569018)
- centos: Ensure that a resolve conf object is written as a string.
  (LP: #1479988)

 -- Scott Moser   Wed, 13 Apr 2016 13:19:03 -0400

** Changed in: cloud-init (Ubuntu)
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1569018

Title:
  [FFe] support lxc bridge configuration

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Fix Released

Bug description:
  recent changes to lxd added configuration of the lxd bridge.
  We'd like to enable that in cloud-init.

  The change allows the user to put config in cloud-config that will
  basically map to the debconf keys and thus configure their lxd bridge
  and lxd init on first boot, making it immediately useful.

  It should be safe with respect to regressions in that if there is no lxd
  bridge config, no change in code path will be taken.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1569018/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1534625] Re: [Pluggable IPAM] Ipam driver is not called on update subnet if allocation pools are blank

2016-04-13 Thread John Belamaric
** Changed in: networking-infoblox
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1534625

Title:
  [Pluggable IPAM] Ipam driver is not called on update subnet if
  allocation pools are blank

Status in networking-infoblox:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  Currently the ipam driver is called on subnet_update only if allocation_pools
  are set in the subnet dict. See [1].
  This happens because the original design expectation is that the only
  information the ipam driver has to update is allocation pools.
  For the reference ipam driver that is absolutely correct.

  But 3rd party ipam vendors might want to utilize this call to keep track of
  updates to other fields in the subnet.
  So my suggestion here is to call update_subnet in ipam_driver on each subnet
  update and let the ipam driver decide if it has to do some processing or not.

  In Infoblox we need it to keep track of dns_nameservers on subnet_update.
  Currently we can obtain this dns_nameservers info on subnet update only if
  allocation_pools are present in the subnet_update request.

  Is it a bug from neutron perspective?

  [1]
  
https://github.com/openstack/neutron/blob/master/neutron/db/ipam_pluggable_backend.py#L356

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-infoblox/+bug/1534625/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1534625] Re: [Pluggable IPAM] Ipam driver is not called on update subnet if allocation pools are blank

2016-04-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/272125
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=6ed8f45fdf529bacae32b074844aa1320b005b51
Submitter: Jenkins
Branch: master

commit 6ed8f45fdf529bacae32b074844aa1320b005b51
Author: Pavel Bondar 
Date:   Thu Jan 21 17:01:22 2016 +0300

Always call ipam driver on subnet update

Previously ipam driver was not called on subnet update
if allocation_pools are not in request.
Changing it to call ipam driver each time subnet update is requested.
If subnet_update is called without allocation_pools, then old allocation
pools are passed to ipam driver.

Contains a bit of refactoring to make that happen:

- validate_allocation_pools is already called during update
subnet workflow in update_subnet method, so just removing it;

- reworked update_db_subnet workflow;

previous workflow was:
call driver allocation -> make local allocation -> rollback driver
allocation in except block if local allocation failed.

new workflow:
make local allocation -> call driver allocation.
By changing order of execution we eliminating need of rollback in this
method, since failure in local allocation is rolled back by database
transaction rollback.

- make a copy of incoming subnet dict;
_update_subnet_allocation_pools from ipam_backend_mixin removes
'allocation_pools' from subnet_dict, so create an unchanged copy to
pass it to ipam driver

Change-Id: Ie4bd85d2ff2ab39bf803c5ef6c6ead151dbb74d7
Closes-Bug: #1534625


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1534625

Title:
  [Pluggable IPAM] Ipam driver is not called on update subnet if
  allocation pools are blank

Status in networking-infoblox:
  New
Status in neutron:
  Fix Released

Bug description:
  Currently the ipam driver is called on subnet_update only if allocation_pools
  are set in the subnet dict. See [1].
  This happens because the original design expectation is that the only
  information the ipam driver has to update is allocation pools.
  For the reference ipam driver that is absolutely correct.

  But 3rd party ipam vendors might want to utilize this call to keep track of
  updates to other fields in the subnet.
  So my suggestion here is to call update_subnet in ipam_driver on each subnet
  update and let the ipam driver decide if it has to do some processing or not.

  In Infoblox we need it to keep track of dns_nameservers on subnet_update.
  Currently we can obtain this dns_nameservers info on subnet update only if
  allocation_pools are present in the subnet_update request.

  Is it a bug from neutron perspective?

  [1]
  
https://github.com/openstack/neutron/blob/master/neutron/db/ipam_pluggable_backend.py#L356

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-infoblox/+bug/1534625/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1553389] Re: Branding: Table Action dropdown hovers theme issue

2016-04-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/288653
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=07d748e3e3b05f4a9c550b05c3e8df69fd712099
Submitter: Jenkins
Branch: master

commit 07d748e3e3b05f4a9c550b05c3e8df69fd712099
Author: Diana Whitten 
Date:   Fri Mar 4 12:09:19 2016 -0700

Branding: Table Action dropdown hovers theme issue

Table Action dropdown hovers should inherit from the theme better.
Right now, they only toggle the font color, which actually doesn't
look very good in themes that have a primary color that clashes with
the danger color.

Turned this into a mixin, for ease of customization.  Material has
been augmented to show how to use this.

Closes-bug: #1553389
Change-Id: I832ce7c778a6cca96219f9bcb7fbc03c891d5d07


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1553389

Title:
  Branding: Table Action dropdown hovers theme issue

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Branding: Table Action dropdown hovers should inherit from the theme
  better.   Right now, they only toggle the font color, which actually
  doesn't look very good in themes that have a primary color that
  clashes with the danger color.

  See Darkly screenshot here:
  https://i.imgur.com/7uGXEVt.png

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1553389/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1570049] [NEW] Tox ignores following docstrings error codes D202, D203, D205, D400, D401

2016-04-13 Thread Navid Pustchi
Public bug reported:

Currently tox ignores the following docstring errors:

D202: No blank lines allowed after docstring.
D203: 1 blank required before class docstring.
D205: Blank line required between one-line summary and description.
D400: First line should end with a period.
D401: First line should be in imperative mood.

These ignores[1] must be removed and keystone made compliant.

1)https://github.com/openstack/keystone/blob/master/tox.ini#L124-L128
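
As an illustration, a docstring that satisfies D205, D400 and D401 (the
function itself is made up):

def revoke_token(token_id):
    """Revoke a token.

    The first line is imperative and ends with a period (D401, D400), and a
    blank line separates it from this longer description (D205).
    """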

** Affects: keystone
 Importance: Medium
 Assignee: Navid Pustchi (npustchi)
 Status: New

** Description changed:

- Currently tox ignores the following docstring serrors:
+ Currently tox ignores the following docstring errors:
  
  D202: No blank lines allowed after docstring.
  D203: 1 blank required before class docstring.
  D205: Blank line required between one-line summary and description.
  D400: First line should end with a period.
  D401: First line should be in imperative mood.
  
  These ignores[1] must be removed and make keystone compliant.
  
  1)https://github.com/openstack/keystone/blob/master/tox.ini#L124-L128

** Changed in: keystone
   Importance: Undecided => Medium

** Changed in: keystone
 Assignee: (unassigned) => Navid Pustchi (npustchi)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1570049

Title:
  Tox ignores following docstrings error codes D202,D203,D205,D400,D401

Status in OpenStack Identity (keystone):
  New

Bug description:
  Currently tox ignores the following docstring errors:

  D202: No blank lines allowed after docstring.
  D203: 1 blank required before class docstring.
  D205: Blank line required between one-line summary and description.
  D400: First line should end with a period.
  D401: First line should be in imperative mood.

  These ignores[1] must be removed and keystone made compliant.

  1)https://github.com/openstack/keystone/blob/master/tox.ini#L124-L128

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1570049/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1259292] Re: Some tests use assertEqual(observed, expected), the argument order is wrong

2016-04-13 Thread Tim Hinrichs
** Changed in: congress
Milestone: None => mitaka

** Changed in: congress
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1259292

Title:
  Some tests use assertEqual(observed, expected), the argument order is
  wrong

Status in Barbican:
  In Progress
Status in Ceilometer:
  Invalid
Status in Cinder:
  Fix Released
Status in congress:
  Fix Released
Status in Designate:
  Fix Released
Status in Glance:
  Fix Released
Status in heat:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Identity (keystone):
  Fix Released
Status in Magnum:
  Fix Released
Status in Manila:
  Fix Released
Status in Mistral:
  Fix Released
Status in Murano:
  Fix Released
Status in OpenStack Compute (nova):
  Won't Fix
Status in python-ceilometerclient:
  Invalid
Status in python-cinderclient:
  Fix Released
Status in python-designateclient:
  Fix Committed
Status in python-mistralclient:
  Fix Released
Status in python-solumclient:
  Fix Released
Status in Python client library for Zaqar:
  Fix Released
Status in Sahara:
  Fix Released
Status in zaqar:
  Fix Released

Bug description:
  The test cases will produce a confusing error message if the tests
  ever fail, so this is worth fixing.
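
  As a minimal illustration (the test itself is made up): the convention is
  assertEqual(expected, observed), so swapping the arguments makes a real
  failure report the two values the wrong way round.

import unittest

class TestAddition(unittest.TestCase):
    def test_add(self):
        observed = 1 + 1
        # Correct order: assertEqual(expected, observed).
        self.assertEqual(2, observed)
        # Wrong order -- still passes, but a failure would label the observed
        # value as the expected one and vice versa:
        # self.assertEqual(observed, 2)

if __name__ == '__main__':
    unittest.main()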

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1259292/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1569997] Re: heat stack delete fails when token expires

2016-04-13 Thread Tracy Jones
Changed keystone to use trusts and it worked!

http://hardysteven.blogspot.co.uk/2014/04/heat-auth-model-updates-
part-1-trusts.html

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1569997

Title:
  heat stack delete fails when token expires

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  If I create a heat stack and happen to delete the stack at a time when
  the keystone token is expiring, the heat stack delete fails. I
  originally found this when deleting a very large heat stack, but you
  can reproduce it by changing your keystone expiration to 300 seconds.

  
  1. create a stack with 5 servers and some volumes
  2. change the keystone.conf and restart keystone service
  change to 5 minutes
  [token]
  expiration = 300

  3. log on to Horizon, wait until the 4th minute, and then
  trigger stack-delete from Horizon.

  4. stack-delete failed; detailed logs are attached:

  This time the error is on deleting a nova server, so this problem is not
  cinder specific; it should be a problem with other resources as well.
   
  804 2016-04-13 08:51:47.725 1401 INFO heat.engine.resource [-] DELETE: 
TemplateResource "instance" [092fc06e-0e84-43c2-b247-e4f6b4a41a93] Stack 
"test" [a74ccdd9-dc00-441a-a5e5-7bb964b1ed81]
  805 2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource Traceback (most 
recent call last):
  806 2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource   File 
"/usr/lib/python2.7/dist-packages/heat/engine/resource.py", line 544, in 
_action_recorder
  807 2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource yield
  808 2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource   File 
"/usr/lib/python2.7/dist-packages/heat/engine/resource.py", line 988, in 
delete
  809 2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource yield 
self.action_handler_task(action, *action_args)
  810 2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource   File 
"/usr/lib/python2.7/dist-packages/heat/engine/scheduler.py", line 313, in 
wrapper
  811 2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource step = 
next(subtask)
  812 2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource   File 
"/usr/lib/python2.7/dist-packages/heat/engine/resource.py", line 588, in 
action_handler_task
  813 2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource while not 
check(handler_data):
  814 2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource   File 
"/usr/lib/python2.7/dist-packages/heat/engine/resources/stack_resource.py", 
line 429, in check_delete_complete
  815 2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource 
show_deleted=True)
  816 2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource   File 
"/usr/lib/python2.7/dist-packages/heat/engine/resources/stack_resource.py", 
line 340, in _check_status_complete
  817 2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource action=action)
  818 2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource ResourceFailure: 
BadRequest: resources.instance.resources.server5: Expecting to find 
username or userId in passwordCredentials - the server could not comply with 
the request since it is either malformed or otherwise incorrect. The client 
is assumed to be in error. (HTTP 400)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1569997/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1533322] Re: "extra_resources" is hidden in ComputeNode

2016-04-13 Thread Gage Hugo
Marked as invalid due to the response from Jay Pipes:

The extra_resources attribute was removed because the extensible
resource tracker functionality was deprecated and removed from Nova.  If
there is some functionality you need to include in the scheduler, please
propose it as a nova-specs blueprint and come on to IRC Freenode
#openstack-nova channel to discuss how to implement this functionality.

Going to abandon the change for now.

** Changed in: nova
   Status: In Progress => Invalid

** Changed in: nova
 Assignee: Gage Hugo (gh159m) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1533322

Title:
  "extra_resources" is hidden in ComputeNode

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  "extra_resources" is a text field defined in ComputeNode model:
  
https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/models.py#L165

  But it's not defined in objects.compute_node.ComputeNode:
  https://github.com/openstack/nova/blob/master/nova/objects/compute_node.py#L52

  It is true that it is not used anywhere in the projects, but it is
  critical for some customized schedulers.

  Nothing indicates that this field is about to be deprecated, so can we
  expose it in objects.compute_node.ComputeNode?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1533322/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1570026] [NEW] Add an option for WSGI pool size

2016-04-13 Thread OpenStack Infra
Public bug reported:

https://review.openstack.org/278007
Dear bug triager. This bug was created since a commit was marked with DOCIMPACT.
Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

commit 9d573387f1e33ce85269d3ed9be501717eed4807
Author: Mike Bayer 
Date:   Tue Feb 9 13:10:57 2016 -0500

Add an option for WSGI pool size

Neutron currently hardcodes the number of
greenlets used to process requests in a process to 1000.
As detailed in
http://lists.openstack.org/pipermail/openstack-dev/2015-December/082717.html

this can cause requests to wait within one process
for available database connection while other processes
remain available.

By adding a wsgi_default_pool_size option functionally
identical to that of Nova, we can lower the number of
greenlets per process to be more in line with a typical
max database connection pool size.

DocImpact: a previously unused configuration value
   wsgi_default_pool_size is now used to affect
   the number of greenlets used by the server. The
   default number of greenlets also changes from 1000
   to 100.
Change-Id: I94cd2f9262e0f330cf006b40bb3c0071086e5d71

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: doc neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1570026

Title:
  Add an option for WSGI pool size

Status in neutron:
  New

Bug description:
  https://review.openstack.org/278007
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 9d573387f1e33ce85269d3ed9be501717eed4807
  Author: Mike Bayer 
  Date:   Tue Feb 9 13:10:57 2016 -0500

  Add an option for WSGI pool size
  
  Neutron currently hardcodes the number of
  greenlets used to process requests in a process to 1000.
  As detailed in
  
http://lists.openstack.org/pipermail/openstack-dev/2015-December/082717.html
  
  this can cause requests to wait within one process
  for available database connection while other processes
  remain available.
  
  By adding a wsgi_default_pool_size option functionally
  identical to that of Nova, we can lower the number of
  greenlets per process to be more in line with a typical
  max database connection pool size.
  
  DocImpact: a previously unused configuration value
 wsgi_default_pool_size is now used to affect
 the number of greenlets used by the server. The
 default number of greenlets also changes from 1000
 to 100.
  Change-Id: I94cd2f9262e0f330cf006b40bb3c0071086e5d71
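
  A hedged example of the resulting setting in neutron.conf (the value is
  illustrative; the option normally sits in the DEFAULT section):

      [DEFAULT]
      # greenlets used per API worker to serve requests
      wsgi_default_pool_size = 100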

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1570026/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1569789] Re: Should change the spell of opsrofiler to osprofiler

2016-04-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/305077
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=67d7f551cd54f2ec567cb3f04da232de95367d82
Submitter: Jenkins
Branch:master

commit 67d7f551cd54f2ec567cb3f04da232de95367d82
Author: Pankaj Mishra 
Date:   Wed Apr 13 15:27:15 2016 +0530

Changed the spelling of opsrofiler to osprofiler.

Change-Id: Ifbaa9819a42a6f81f38dfa155026b9497bea1abb
Closes-Bug: #1569789


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1569789

Title:
  Should change the spell of opsrofiler to osprofiler

Status in Glance:
  Fix Released

Bug description:
  Change the spell of opsrofiler to osprofiler in the below URL
  http://docs.openstack.org/developer/glance/configuring.html#

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1569789/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1564911] Re: Network interfaces file name is not accepted by ifup

2016-04-13 Thread Scott Moser
Hannes,
Thanks.


** Changed in: cloud-init
   Importance: Undecided => Wishlist

** Changed in: cloud-init
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1564911

Title:
  Network interfaces file name is not accepted by ifup

Status in cloud-init:
  Won't Fix

Bug description:
  The change introduced in 1189 ( https://bazaar.launchpad.net/~cloud-
  init-dev/cloud-init/trunk/revision/1189#cloudinit/distros/debian.py )
  breaks ifup at least on trusty because ifup only reads files in
  /etc/network/interfaces.d that match ^[a-zA-Z0-9_-]+$.  Changing the
  filename fixed the problem for me.
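
  For illustration, the filter ifup applies (the file names are examples):

      import re

      accepted = re.compile(r'^[a-zA-Z0-9_-]+$')
      print(bool(accepted.match('50-cloud-init')))      # True  -> read by ifup
      print(bool(accepted.match('50-cloud-init.cfg')))  # False -> the '.' makes ifup skip it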

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1564911/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566824] Re: phone_home module should allow fqdn to be posted

2016-04-13 Thread Scott Moser
** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: cloud-init (Ubuntu)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1566824

Title:
  phone_home module should allow fqdn to be posted

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Confirmed

Bug description:
  Currently the phone_home module allows for the hostname of a machine
  to be posted but not the FQDN.

  I'm happy to make this change myself but I cannot work out how to
  submit the patch through launchpad.  I'll update this if I work it
  out.
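
  For reference, a hedged cloud-config sketch of what the requested change
  enables (the URL is a placeholder; key names follow the module docs):

      #cloud-config
      phone_home:
        url: http://example.com/$INSTANCE_ID/
        post: [ instance_id, hostname, fqdn ]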

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1566824/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567231] Re: glance using get_transport and not the correct get_notifications_transport for oslo.messaging configuration

2016-04-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/300313
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=a31b963c1bc6f52cbfd71e29dfa54509b5deadcc
Submitter: Jenkins
Branch:master

commit a31b963c1bc6f52cbfd71e29dfa54509b5deadcc
Author: Juan Antonio Osorio Robles 
Date:   Fri Apr 1 08:51:54 2016 +0300

Use messaging notifications transport instead of default

The usage of oslo_messaging.get_transport is not meant for
notifications; And oslo_messaging.get_notification_transport is
favored for those means. So this change introduces the usage of that
function.

If the settings for the notifications are not set with the
configuration that's under the oslo_messaging_notifications group,
this will fall back to the old settings which are under the DEFAULT
group; just like this used to work.

Closes-Bug: #1567231

Change-Id: I137d0f98533d8121334e8538d321fb6c7495f35f


** Changed in: glance
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1567231

Title:
  glance using get_transport and not the correct
  get_notifications_transport for oslo.messaging configuration

Status in Glance:
  Fix Released

Bug description:
  This applies mostly for code-correctness. configuring notifications
  using the standard options in the default group is meant to be
  deprecated in favor of options in the group
  oslo_messaging_notifications[1]. So basically what this enables is to
  be able to use that group to configure the notifications. As a
  fallback, the default group will still work though.

  Usually this is useful to be able to configure a separate vhost and
  user for notifications and for RPCs; But glance doesn't really have
  this problem as it's using oslo.messaging only for notifications.

  [1]
  
http://docs.openstack.org/developer/oslo.messaging/opts.html#oslo_messaging_notifications.transport_url
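
  For comparison, a minimal sketch of the favored call, assuming a
  reasonably recent oslo.messaging (publisher id and topic are placeholders):

      import oslo_messaging
      from oslo_config import cfg

      # honours [oslo_messaging_notifications], falling back to [DEFAULT]
      transport = oslo_messaging.get_notification_transport(cfg.CONF)
      notifier = oslo_messaging.Notifier(transport,
                                         publisher_id='image.localhost',
                                         driver='messagingv2',
                                         topics=['notifications'])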

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1567231/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1568150] Re: xenial lxc containers not starting

2016-04-13 Thread Scott Moser
** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: cloud-init
   Importance: Undecided => High

** Changed in: cloud-init
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1568150

Title:
  xenial lxc containers not starting

Status in cloud-init:
  Fix Committed
Status in juju-core:
  Incomplete
Status in cloud-init package in Ubuntu:
  Confirmed

Bug description:
  When deploying a xenial lxc container to a xenial host, the container
  fails during cloud-init with the following error in the container's
  /var/log/cloud-init-output.log:

  2016-04-08 21:07:05,190 - util.py[WARNING]: failed of stage init-local
  failed run of stage init-local
  
  Traceback (most recent call last):
File "/usr/bin/cloud-init", line 515, in status_wrapper
  ret = functor(name, args)
File "/usr/bin/cloud-init", line 250, in main_init
  init.fetch(existing=existing)
File "/usr/lib/python3/dist-packages/cloudinit/stages.py", line 318, in 
fetch
  return self._get_data_source(existing=existing)
File "/usr/lib/python3/dist-packages/cloudinit/stages.py", line 227, in 
_get_data_source
  ds.check_instance_id(self.cfg)):
File 
"/usr/lib/python3/dist-packages/cloudinit/sources/DataSourceNoCloud.py", line 
220, in check_instance_id
  dirs=self.seed_dirs)
  AttributeError: 'DataSourceNoCloudNet' object has no attribute 'seed_dirs'

  Trusty containers start just fine.

  Using juju 1.25.5 and MAAS 1.9.2

  Commands to reproduce:

  juju bootstrap --constraints "tags=juju" --upload-tools --show-log --debug
  juju set-constraints "tags="
  juju add-machine --series xenial
  juju deploy --to lxc:1 local:xenial/ubuntu ubuntu

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1568150/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1565746] Re: Use growpart even if it doesn't support --update

2016-04-13 Thread Scott Moser
** Changed in: cloud-init
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1565746

Title:
  Use growpart even if it doesn't support --update

Status in cloud-init:
  Invalid

Bug description:
  cc_growpart only uses growpart if it supports --update. I have to
  spawn some older Linuxes (like Ubuntu Precise) that don't support
  this, and I would like to have a recent cloud-init for them. My
  workaround is to use growpart even if it doesn't support --update but
  to create /var/run/reboot-required like the gdisk resizer does. This
  way growing the disk always works, but some machines require a reboot
  afterwards. Does it make sense to you to pull this into trunk?

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1565746/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1569997] [NEW] heat stack delete fails when token expires

2016-04-13 Thread Tracy Jones
Public bug reported:

If i create a heat stack and happen to delete the stack at a time when
the keystone token is expiring the heat stack delete fails.  I
originally found this when deleting a very large heat stack, but you can
repo it by changing your keystone expiration to 300 seconds.


1. create a stack with 5 servers and some volumes
2. change the keystone.conf and restart keystone service
change to 5 minutes
[token]
expiration = 300

3. log on Horizon and wait to the 4th minute and then
trigger stack-delete from horizon.

4 stack-delete failed, detailed logs are as attached:

This time it's deleting nova server error. So this problem is not cinder 
specific, should be a problem with other resources as well.
 
804 2016-04-13 08:51:47.725 1401 INFO heat.engine.resource [-] DELETE: 
TemplateResource "instance" [092fc06e-0e84-43c2-b247-e4f6b4a41a93] Stack 
"test" [a74ccdd9-dc00-441a-a5e5-7bb964b1ed81]
805 2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource Traceback (most 
recent call last):
806 2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource   File 
"/usr/lib/python2.7/dist-packages/heat/engine/resource.py", line 544, in 
_action_recorder
807 2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource yield
808 2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource   File 
"/usr/lib/python2.7/dist-packages/heat/engine/resource.py", line 988, in 
delete
809 2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource yield 
self.action_handler_task(action, *action_args)
810 2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource   File 
"/usr/lib/python2.7/dist-packages/heat/engine/scheduler.py", line 313, in 
wrapper
811 2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource step = 
next(subtask)
812 2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource   File 
"/usr/lib/python2.7/dist-packages/heat/engine/resource.py", line 588, in 
action_handler_task
813 2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource while not 
check(handler_data):
814 2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource   File 
"/usr/lib/python2.7/dist-packages/heat/engine/resources/stack_resource.py", 
line 429, in check_delete_complete
815 2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource 
show_deleted=True)
816 2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource   File 
"/usr/lib/python2.7/dist-packages/heat/engine/resources/stack_resource.py", 
line 340, in _check_status_complete
817 2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource action=action)
818 2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource ResourceFailure: 
BadRequest: resources.instance.resources.server5: Expecting to find 
username or userId in passwordCredentials - the server could not comply with 
the request since it is either malformed or otherwise incorrect. The client 
is assumed to be in error. (HTTP 400)

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "error.log"
   https://bugs.launchpad.net/bugs/1569997/+attachment/4635829/+files/error.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1569997

Title:
  heat stack delete fails when token expires

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  If i create a heat stack and happen to delete the stack at a time when
  the keystone token is expiring the heat stack delete fails.  I
  originally found this when deleting a very large heat stack, but you
  can repo it by changing your keystone expiration to 300 seconds.

  
  1. create a stack with 5 servers and some volumes
  2. change the keystone.conf and restart keystone service
  change to 5 minutes
  [token]
  expiration = 300

  3. log on Horizon and wait to the 4th minute and then
  trigger stack-delete from horizon.

  4 stack-delete failed, detailed logs are as attached:

  This time it's deleting nova server error. So this problem is not cinder 
specific, should be a problem with other resources as well.
   
  804 2016-04-13 08:51:47.725 1401 INFO heat.engine.resource [-] DELETE: 
TemplateResource "instance" [092fc06e-0e84-43c2-b247-e4f6b4a41a93] Stack 
"test" [a74ccdd9-dc00-441a-a5e5-7bb964b1ed81]
  805 2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource Traceback (most 
recent call last):
  806 2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource   File 
"/usr/lib/python2.7/dist-packages/heat/engine/resource.py", line 544, in 
_action_recorder
  807 2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource yield
  808 2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource   File 
"/usr/lib/python2.7/dist-packages/heat/engine/resource.py", line 988, in 
delete
  809 2016-04-13 08:51:47.725 1401 TRACE heat.engine.resource yield 
self.action_handler_task(action, *action_args)
  810 2016-04-13 08:51:47.725 1401 TRACE 

[Yahoo-eng-team] [Bug 1569974] Re: networking fallback should ignore bridges

2016-04-13 Thread Scott Moser
** Also affects: cloud-init
   Importance: Undecided
   Status: New

** Changed in: cloud-init
   Status: New => Confirmed

** Changed in: cloud-init
   Importance: Undecided => Medium

** Changed in: cloud-init (Ubuntu)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1569974

Title:
  networking fallback should ignore bridges

Status in cloud-init:
  Confirmed
Status in cloud-init package in Ubuntu:
  Confirmed

Bug description:
  Serge pointed me at an issue with cloud-init's fallback networking.
  It will consider bridges as possible devices to dhcp on.
  Since these bridges are configured only by things like lxd (not
  /etc/network/interfaces), they're not going to be good devices to 'dhcp' on
  and get a fallback networking configuration.

  My short solution is to just ignore bridge devices when searching.
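
  A minimal sketch of the proposed filter, assuming the usual Linux sysfs
  layout (device names are examples):

      import os

      def is_bridge(devname):
          # a Linux bridge exposes /sys/class/net/<name>/bridge
          return os.path.exists('/sys/class/net/%s/bridge' % devname)

      candidates = [dev for dev in os.listdir('/sys/class/net')
                    if dev != 'lo' and not is_bridge(dev)]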

  ProblemType: Bug
  DistroRelease: Ubuntu 16.04
  Package: cloud-init 0.7.7~bzr1200-0ubuntu1 [modified: 
lib/systemd/system/cloud-init-local.service 
usr/lib/python3/dist-packages/cloudinit/net/__init__.py]
  ProcVersionSignature: Ubuntu 4.4.0-18.34-generic 4.4.6
  Uname: Linux 4.4.0-18-generic x86_64
  ApportVersion: 2.20.1-0ubuntu1
  Architecture: amd64
  Date: Wed Apr 13 14:22:57 2016
  PackageArchitecture: all
  ProcEnviron:
   TERM=xterm-256color
   PATH=(custom, no user)
  SourcePackage: cloud-init
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1569974/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1569941] [NEW] missed assertTrue in instance tests to validate notification messages

2016-04-13 Thread Sergei Chipiga
Public bug reported:

There are some missing assertTrue calls inside
openstack_dashboard/test/integration_tests/pages/project/compute/instancespage.py
around notification messages, which may lead to the tests malfunctioning.
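
A hedged, generic sketch of the difference (the helper name is illustrative,
not the actual page-object API):

    import unittest

    def find_message():                      # stand-in for the page-object helper
        return False                         # e.g. the notification never appeared

    class ExampleTest(unittest.TestCase):
        def test_message(self):
            find_message()                   # before: result silently discarded, test passes
            self.assertTrue(find_message())  # after: a False result fails the test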

** Affects: horizon
 Importance: Undecided
 Assignee: Sergei Chipiga (schipiga)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1569941

Title:
  missed assertTrue in instance tests to validate notification messages

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  There are some missing assertTrue calls inside
  openstack_dashboard/test/integration_tests/pages/project/compute/instancespage.py
  around notification messages, which may lead to the tests malfunctioning.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1569941/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1569942] [NEW] Network wizard allocation pools breaks Disable Gateway checkbox and field behavior

2016-04-13 Thread Timur Sufiev
Public bug reported:

I found that the 'Disable Gateway' checkbox works weirdly with the new
'Allocate Network address from a pool' feature: when I first change from
manual mode to pool mode, the Gateway Address field disappears, but once
I check and uncheck the 'Disable Gateway' checkbox the field reappears.

Moreover, if I check 'Disable Gateway' while in pool mode and then switch
to manual mode, the Gateway Address field reappears even with the
'Disable Gateway' checkbox checked.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: neutron

** Tags added: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1569942

Title:
  Network wizard allocation pools breaks Disable Gateway checkbox and
  field behavior

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I found that the 'Disable Gateway' checkbox works weirdly with the new
  'Allocate Network address from a pool' feature: when I first change from
  manual mode to pool mode, the Gateway Address field disappears, but once
  I check and uncheck the 'Disable Gateway' checkbox the field reappears.

  Moreover, if I check 'Disable Gateway' while in pool mode and then switch
  to manual mode, the Gateway Address field reappears even with the
  'Disable Gateway' checkbox checked.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1569942/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1569937] [NEW] TypeError: Incorrect padding when setting metadata_encryption_key

2016-04-13 Thread Tom Patzig
Public bug reported:

We are on liberty. We have show_multiple_locations set, to allow swift
locations for images. But when setting metadata_encryption_key to a
random 32 char string, glance throws an internal server error, e.g. on
image-list:

2016-04-13 11:11:00.461 1592 ERROR glance.common.wsgi 
[req-fad0e0cb-1094-40ee-a314-ac189c660329 ed12c3c9e4144827bc2b041da22c94b8 
7f0b5f95e9f24cf3924e5aba39fddeca - - -] Caught error: Incorrect padding
2016-04-13 11:11:00.461 1592 ERROR glance.common.wsgi Traceback (most recent 
call last):
2016-04-13 11:11:00.461 1592 ERROR glance.common.wsgi   File 
"/usr/lib/python2.7/site-packages/glance/common/wsgi.py", line 879, in __call__
2016-04-13 11:11:00.461 1592 ERROR glance.common.wsgi request, 
**action_args)
2016-04-13 11:11:00.461 1592 ERROR glance.common.wsgi   File 
"/usr/lib/python2.7/site-packages/glance/common/wsgi.py", line 907, in dispatch
2016-04-13 11:11:00.461 1592 ERROR glance.common.wsgi return method(*args, 
**kwargs)
2016-04-13 11:11:00.461 1592 ERROR glance.common.wsgi   File 
"/usr/lib/python2.7/site-packages/glance/api/v2/images.py", line 116, in index
2016-04-13 11:11:00.461 1592 ERROR glance.common.wsgi 
member_status=member_status)
2016-04-13 11:11:00.461 1592 ERROR glance.common.wsgi   File 
"/usr/lib/python2.7/site-packages/glance/api/authorization.py", line 113, in 
list
2016-04-13 11:11:00.461 1592 ERROR glance.common.wsgi images = 
self.image_repo.list(*args, **kwargs)
2016-04-13 11:11:00.461 1592 ERROR glance.common.wsgi   File 
"/usr/lib/python2.7/site-packages/glance/domain/proxy.py", line 89, in list
2016-04-13 11:11:00.461 1592 ERROR glance.common.wsgi items = 
self.base.list(*args, **kwargs)
2016-04-13 11:11:00.461 1592 ERROR glance.common.wsgi   File 
"/usr/lib/python2.7/site-packages/glance/api/policy.py", line 123, in list
2016-04-13 11:11:00.461 1592 ERROR glance.common.wsgi return 
super(ImageRepoProxy, self).list(*args, **kwargs)
2016-04-13 11:11:00.461 1592 ERROR glance.common.wsgi   File 
"/usr/lib/python2.7/site-packages/glance/domain/proxy.py", line 89, in list
2016-04-13 11:11:00.461 1592 ERROR glance.common.wsgi items = 
self.base.list(*args, **kwargs)
2016-04-13 11:11:00.461 1592 ERROR glance.common.wsgi   File 
"/usr/lib/python2.7/site-packages/glance/domain/proxy.py", line 89, in list
2016-04-13 11:11:00.461 1592 ERROR glance.common.wsgi items = 
self.base.list(*args, **kwargs)
2016-04-13 11:11:00.461 1592 ERROR glance.common.wsgi   File 
"/usr/lib/python2.7/site-packages/glance/domain/proxy.py", line 89, in list
2016-04-13 11:11:00.461 1592 ERROR glance.common.wsgi items = 
self.base.list(*args, **kwargs)
2016-04-13 11:11:00.461 1592 ERROR glance.common.wsgi   File 
"/usr/lib/python2.7/site-packages/glance/db/__init__.py", line 185, in list
2016-04-13 11:11:00.461 1592 ERROR glance.common.wsgi image = 
self._format_image_from_db(db_image, db_image['tags'])
2016-04-13 11:11:00.461 1592 ERROR glance.common.wsgi   File 
"/usr/lib/python2.7/site-packages/glance/db/__init__.py", line 201, in 
_format_image_from_db
2016-04-13 11:11:00.461 1592 ERROR glance.common.wsgi l['url'] = 
crypt.urlsafe_decrypt(key, l['url'])
2016-04-13 11:11:00.461 1592 ERROR glance.common.wsgi   File 
"/usr/lib/python2.7/site-packages/glance/common/crypt.py", line 74, in 
urlsafe_decrypt
2016-04-13 11:11:00.461 1592 ERROR glance.common.wsgi ciphertext = 
base64.urlsafe_b64decode(six.binary_type(ciphertext))
2016-04-13 11:11:00.461 1592 ERROR glance.common.wsgi   File 
"/usr/lib64/python2.7/base64.py", line 112, in urlsafe_b64decode
2016-04-13 11:11:00.461 1592 ERROR glance.common.wsgi return b64decode(s, 
'-_')
2016-04-13 11:11:00.461 1592 ERROR glance.common.wsgi   File 
"/usr/lib64/python2.7/base64.py", line 76, in b64decode
2016-04-13 11:11:00.461 1592 ERROR glance.common.wsgi raise TypeError(msg)
2016-04-13 11:11:00.461 1592 ERROR glance.common.wsgi TypeError: Incorrect 
padding
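
What happens on Python 2 when an unencrypted location hits the decrypt path
(the URL is a placeholder; most plain-text URLs are not valid base64, so the
decode raises the error seen above):

    import base64

    # a location stored before metadata_encryption_key was set is plain text,
    # not ciphertext, so base64-decoding it typically fails
    base64.urlsafe_b64decode(b'swift+https://acct/container/obj')
    # TypeError: Incorrect padding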

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1569937

Title:
  TypeError: Incorrect padding when setting metadata_encryption_key

Status in Glance:
  New

Bug description:
  We are on liberty. We have show_multiple_locations set, to allow swift
  locations for images. But when setting metadata_encryption_key to a
  random 32 char string, glance throws an internal server error, e.g. on
  image-list:

  2016-04-13 11:11:00.461 1592 ERROR glance.common.wsgi 
[req-fad0e0cb-1094-40ee-a314-ac189c660329 ed12c3c9e4144827bc2b041da22c94b8 
7f0b5f95e9f24cf3924e5aba39fddeca - - -] Caught error: Incorrect padding
  2016-04-13 11:11:00.461 1592 ERROR glance.common.wsgi Traceback (most recent 
call last):
  2016-04-13 11:11:00.461 1592 ERROR glance.common.wsgi   File 
"/usr/lib/python2.7/site-packages/glance/common/wsgi.py", line 879, in __call__
  

[Yahoo-eng-team] [Bug 1569921] [NEW] l3 agent terminates on start if the server is not running

2016-04-13 Thread Sreekumar S
Public bug reported:

Neutron L3 agent fails to start if the neutron server is not running.
This is particularly annoying when we run multiple compute nodes with
only the agent, connected to the network/controller node.

There has been a partial fix which alleviates this to some extent with bug
#1368795, but the agent still times out in cases where we need to restart
the agents on the network and compute nodes.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1569921

Title:
  l3 agent terminates on start if the server is not running

Status in neutron:
  New

Bug description:
  Neutron L3 agent fails to start if the neutron server is not running.
  This is particularly annoying when we run multiple compute nodes with
  only the agent, connected to the network/controller node.

  There has been a partial fix which alleviates this to some extent with
  bug #1368795, but the agent still times out in cases where we need to
  restart the agents on the network and compute nodes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1569921/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1569918] [NEW] Allowed_address_pair fixed_ip configured with FloatingIP after getting associated with a VM port does not work with DVR routers

2016-04-13 Thread Swaminathan Vasudevan
Public bug reported:

An allowed_address_pair fixed_ip that is configured with a FloatingIP
after the port has been associated with the VM port is not reachable
through a DVR router.

The current code adds the proper ARP update and port host binding
inheritance for the allowed_address_pair port only if the port has a
FloatingIP configured before it is associated with a VM port.

When the FloatingIP is added later, it fails.

How to reproduce.

1. Create networks
2. Create vrrp-net.
3. Create vrrp-subnet.
4. Create a DVR router.
5. Attach the vrrp-subnet to the router.
6. Create a VM on the vrrp-subnet
7. Create a VRRP port.
8. Attach the VRRP port with the VM.
9. Now assign a FloatingIP to the VRRP port.
10. Now check the ARP table entry in the router_namespace and also the VRRP 
port details. The VRRP port is still unbound and so the DVR cannot handle 
unbound ports.
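
The steps above map roughly onto the following CLI sequence (a hedged
sketch; names, CIDRs and IDs are placeholders and exact flags may differ
between releases):

    neutron net-create vrrp-net
    neutron subnet-create --name vrrp-subnet vrrp-net 10.0.10.0/24
    neutron router-create --distributed True dvr-router
    neutron router-interface-add dvr-router vrrp-subnet
    nova boot --image <image> --flavor <flavor> --nic net-id=<vrrp-net-id> vm1
    neutron port-create --name vrrp-port vrrp-net
    neutron port-update <vm1-port-id> --allowed-address-pairs type=dict \
        list=true ip_address=<vrrp-port-ip>
    neutron floatingip-create <ext-net>
    neutron floatingip-associate <floatingip-id> <vrrp-port-id>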

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1569918

Title:
  Allowed_address_pair fixed_ip configured with FloatingIP after getting
  associated with a VM port does not work with DVR routers

Status in neutron:
  New

Bug description:
  An allowed_address_pair fixed_ip that is configured with a FloatingIP
  after the port has been associated with the VM port is not reachable
  through a DVR router.

  The current code adds the proper ARP update and port host binding
  inheritance for the allowed_address_pair port only if the port has a
  FloatingIP configured before it is associated with a VM port.

  When the FloatingIP is added later, it fails.

  How to reproduce.

  1. Create networks
  2. Create vrrp-net.
  3. Create vrrp-subnet.
  4. Create a DVR router.
  5. Attach the vrrp-subnet to the router.
  6. Create a VM on the vrrp-subnet
  7. Create a VRRP port.
  8. Attach the VRRP port with the VM.
  9. Now assign a FloatingIP to the VRRP port.
  10. Now check the ARP table entry in the router_namespace and also the VRRP 
port details. The VRRP port is still unbound and so the DVR cannot handle 
unbound ports.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1569918/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1568208] Re: TypeError: cannot concatenate 'str' and 'OptGroup' objects during Guru Meditation Report run

2016-04-13 Thread Roman Podoliaka
** Changed in: nova
 Assignee: (unassigned) => Roman Podoliaka (rpodolyaka)

** Changed in: nova
   Status: New => In Progress

** Project changed: oslo.reports => oslo.config

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1568208

Title:
  TypeError: cannot concatenate 'str' and 'OptGroup' objects during Guru
  Meditation Report run

Status in OpenStack Compute (nova):
  In Progress
Status in oslo.config:
  In Progress

Bug description:
  I noticed a trace in a recent tempest job run [1] like the following.
  From what I can tell, somehow rootkey here is an OptGroup object
  instead of the expected str.

  Traceback (most recent call last):
File 
"/usr/local/lib/python2.7/dist-packages/oslo_reports/guru_meditation_report.py",
 line 180, in handle_signal
  res = cls(version, frame).run()
File 
"/usr/local/lib/python2.7/dist-packages/oslo_reports/guru_meditation_report.py",
 line 228, in run
  return super(GuruMeditation, self).run()
File "/usr/local/lib/python2.7/dist-packages/oslo_reports/report.py", line 
77, in run
  return "\n".join(six.text_type(sect) for sect in self.sections)
File "/usr/local/lib/python2.7/dist-packages/oslo_reports/report.py", line 
77, in 
  return "\n".join(six.text_type(sect) for sect in self.sections)
File "/usr/local/lib/python2.7/dist-packages/oslo_reports/report.py", line 
102, in __str__
  return self.view(self.generator())
File 
"/usr/local/lib/python2.7/dist-packages/oslo_reports/views/text/header.py", 
line 36, in __call__
  return six.text_type(self.header) + "\n" + six.text_type(model)
File "/usr/local/lib/python2.7/dist-packages/oslo_reports/models/base.py", 
line 73, in __str__
  return self.attached_view(self_cpy)
File 
"/usr/local/lib/python2.7/dist-packages/oslo_reports/views/text/generic.py", 
line 153, in __call__
  return "\n".join(serialize(model, None, -1))
File 
"/usr/local/lib/python2.7/dist-packages/oslo_reports/views/text/generic.py", 
line 124, in serialize
  res.extend(serialize(root[key], key, indent + 1))
File 
"/usr/local/lib/python2.7/dist-packages/oslo_reports/views/text/generic.py", 
line 113, in serialize
  res.append((self.indent_str * indent) + rootkey)
  TypeError: cannot concatenate 'str' and 'OptGroup' objects
  Unable to run Guru Meditation Report!

  
  [1] 
http://logs.openstack.org/41/302341/5/check/gate-tempest-dsvm-full/f51065e/logs/screen-n-cpu.txt.gz?#_2016-04-08_21_08_08_994
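
  The error itself is easy to reproduce on Python 2 with any non-string
  object (the class below only mirrors the name in the report):

      class OptGroup(object):
          pass

      '  ' + OptGroup()
      # TypeError: cannot concatenate 'str' and 'OptGroup' objects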

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1568208/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1569875] [NEW] the interval between two neutron metering reports are not correct

2016-04-13 Thread MingShuang Xian
Public bug reported:

The interval between two neutron metering reports is not correct. For
example, we have the below configuration in the file metering_agent.ini:

# Interval between two metering measures
measure_interval = 30

# Interval between two metering reports
report_interval = 600

Then go to ceilometer and dump all neutron metering samples; the
report interval will be 630 seconds. Below are some dump results:

ceilometer sample-list -m bandwidth -q 
'resource_id=2cd7f8be-8dee-4c9c-9e58-6975fe83944a'
+--+---+---++--++
| Resource ID  | Name  | Type  | Volume | Unit | 
Timestamp  |
+--+---+---++--++
| 2cd7f8be-8dee-4c9c-9e58-6975fe83944a | bandwidth | delta | 0.0| B| 
2016-04-13T12:32:42.353000 |
| 2cd7f8be-8dee-4c9c-9e58-6975fe83944a | bandwidth | delta | 0.0| B| 
2016-04-13T12:22:12.416000 |
| 2cd7f8be-8dee-4c9c-9e58-6975fe83944a | bandwidth | delta | 0.0| B| 
2016-04-13T12:11:42.415000 |
| 2cd7f8be-8dee-4c9c-9e58-6975fe83944a | bandwidth | delta | 0.0| B| 
2016-04-13T12:01:12.396000 |
| 2cd7f8be-8dee-4c9c-9e58-6975fe83944a | bandwidth | delta | 0.0| B| 
2016-04-13T11:50:42.446000 |
| 2cd7f8be-8dee-4c9c-9e58-6975fe83944a | bandwidth | delta | 0.0| B| 
2016-04-13T11:40:12.355000 |
| 2cd7f8be-8dee-4c9c-9e58-6975fe83944a | bandwidth | delta | 0.0| B| 
2016-04-13T11:29:42.361000 |
| 2cd7f8be-8dee-4c9c-9e58-6975fe83944a | bandwidth | delta | 0.0| B| 
2016-04-13T11:19:12.377000 |
| 2cd7f8be-8dee-4c9c-9e58-6975fe83944a | bandwidth | delta | 0.0| B| 
2016-04-13T11:08:42.353000 |
| 2cd7f8be-8dee-4c9c-9e58-6975fe83944a | bandwidth | delta | 0.0| B| 
2016-04-13T10:58:12.389000 |
| 2cd7f8be-8dee-4c9c-9e58-6975fe83944a | bandwidth | delta | 0.0| B| 
2016-04-13T10:47:42.409000 |

From the above ceilometer result we can see that the actual report
interval is report_interval + measure_interval.

This issue is found in all OpenStack versions.
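
One way such spacing can arise is sketched below; this is purely an
illustration of the symptom (fixed measure ticks plus a strict comparison),
not the agent's actual code, and send_report() is a stand-in helper:

    from time import sleep

    def send_report():                       # stand-in for the agent's notification call
        print('report')

    measure_interval, report_interval = 30, 600
    elapsed = 0
    while True:
        sleep(measure_interval)              # one measure tick every 30 s
        elapsed += measure_interval
        if elapsed > report_interval:        # false at 600, true at 630
            send_report()                    # fires every 630 s, not every 600 s
            elapsed = 0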

** Affects: neutron
 Importance: Undecided
 Assignee: MingShuang Xian (xianms)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => MingShuang Xian (xianms)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1569875

Title:
  the interval between two neutron metering reports are not  correct

Status in neutron:
  New

Bug description:
  The interval between two neutron metering reports is not correct.
  For example, we have the below configuration in the file metering_agent.ini:

  # Interval between two metering measures
  measure_interval = 30

  # Interval between two metering reports
  report_interval = 600

  Then go to ceilometer and dump all neutron metering samples; the
  report interval will be 630 seconds. Below are some dump results:

  ceilometer sample-list -m bandwidth -q 
'resource_id=2cd7f8be-8dee-4c9c-9e58-6975fe83944a'
  
+--+---+---++--++
  | Resource ID  | Name  | Type  | Volume | Unit | 
Timestamp  |
  
+--+---+---++--++
  | 2cd7f8be-8dee-4c9c-9e58-6975fe83944a | bandwidth | delta | 0.0| B| 
2016-04-13T12:32:42.353000 |
  | 2cd7f8be-8dee-4c9c-9e58-6975fe83944a | bandwidth | delta | 0.0| B| 
2016-04-13T12:22:12.416000 |
  | 2cd7f8be-8dee-4c9c-9e58-6975fe83944a | bandwidth | delta | 0.0| B| 
2016-04-13T12:11:42.415000 |
  | 2cd7f8be-8dee-4c9c-9e58-6975fe83944a | bandwidth | delta | 0.0| B| 
2016-04-13T12:01:12.396000 |
  | 2cd7f8be-8dee-4c9c-9e58-6975fe83944a | bandwidth | delta | 0.0| B| 
2016-04-13T11:50:42.446000 |
  | 2cd7f8be-8dee-4c9c-9e58-6975fe83944a | bandwidth | delta | 0.0| B| 
2016-04-13T11:40:12.355000 |
  | 2cd7f8be-8dee-4c9c-9e58-6975fe83944a | bandwidth | delta | 0.0| B| 
2016-04-13T11:29:42.361000 |
  | 2cd7f8be-8dee-4c9c-9e58-6975fe83944a | bandwidth | delta | 0.0| B| 
2016-04-13T11:19:12.377000 |
  | 2cd7f8be-8dee-4c9c-9e58-6975fe83944a | bandwidth | delta | 0.0| B| 
2016-04-13T11:08:42.353000 |
  | 2cd7f8be-8dee-4c9c-9e58-6975fe83944a | bandwidth | delta | 0.0| B| 
2016-04-13T10:58:12.389000 |
  | 2cd7f8be-8dee-4c9c-9e58-6975fe83944a | bandwidth | delta | 0.0| B| 
2016-04-13T10:47:42.409000 |

  From the above ceilometer result we can see that the actual report
  interval is report_interval + measure_interval.

  This issue is found in all OpenStack versions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1569875/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post 

[Yahoo-eng-team] [Bug 1568208] Re: TypeError: cannot concatenate 'str' and 'OptGroup' objects during Guru Meditation Report run

2016-04-13 Thread Roman Podoliaka
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1568208

Title:
  TypeError: cannot concatenate 'str' and 'OptGroup' objects during Guru
  Meditation Report run

Status in OpenStack Compute (nova):
  New
Status in oslo.reports:
  In Progress

Bug description:
  I noticed a trace in a recent tempest job run [1] like the following.
  From what I can tell, somehow rootkey here is an OptGroup object
  instead of the expected str.

  Traceback (most recent call last):
File 
"/usr/local/lib/python2.7/dist-packages/oslo_reports/guru_meditation_report.py",
 line 180, in handle_signal
  res = cls(version, frame).run()
File 
"/usr/local/lib/python2.7/dist-packages/oslo_reports/guru_meditation_report.py",
 line 228, in run
  return super(GuruMeditation, self).run()
File "/usr/local/lib/python2.7/dist-packages/oslo_reports/report.py", line 
77, in run
  return "\n".join(six.text_type(sect) for sect in self.sections)
File "/usr/local/lib/python2.7/dist-packages/oslo_reports/report.py", line 
77, in 
  return "\n".join(six.text_type(sect) for sect in self.sections)
File "/usr/local/lib/python2.7/dist-packages/oslo_reports/report.py", line 
102, in __str__
  return self.view(self.generator())
File 
"/usr/local/lib/python2.7/dist-packages/oslo_reports/views/text/header.py", 
line 36, in __call__
  return six.text_type(self.header) + "\n" + six.text_type(model)
File "/usr/local/lib/python2.7/dist-packages/oslo_reports/models/base.py", 
line 73, in __str__
  return self.attached_view(self_cpy)
File 
"/usr/local/lib/python2.7/dist-packages/oslo_reports/views/text/generic.py", 
line 153, in __call__
  return "\n".join(serialize(model, None, -1))
File 
"/usr/local/lib/python2.7/dist-packages/oslo_reports/views/text/generic.py", 
line 124, in serialize
  res.extend(serialize(root[key], key, indent + 1))
File 
"/usr/local/lib/python2.7/dist-packages/oslo_reports/views/text/generic.py", 
line 113, in serialize
  res.append((self.indent_str * indent) + rootkey)
  TypeError: cannot concatenate 'str' and 'OptGroup' objects
  Unable to run Guru Meditation Report!

  
  [1] 
http://logs.openstack.org/41/302341/5/check/gate-tempest-dsvm-full/f51065e/logs/screen-n-cpu.txt.gz?#_2016-04-08_21_08_08_994

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1568208/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1569850] [NEW] Trace in update_dhcp_port when subnet is deleted concurrently

2016-04-13 Thread venkata anil
Public bug reported:

It looks like the patches in https://bugs.launchpad.net/neutron/+bug/1253344
do not handle the trace in update_dhcp_port when a subnet is deleted
concurrently.

The trace http://logs.openstack.org/16/281116/16/gate/gate-tempest-dsvm-
neutron-
full/e5974dd/logs/screen-q-svc.txt.gz?level=WARNING#_2016-04-12_18_58_48_195
is seen in the gate while running tempest tests.

Both the plugin and the agent should be updated to handle this trace.

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New


** Tags: l3-ipam-dhcp

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

** Tags added: l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1569850

Title:
  Trace in update_dhcp_port  when subnet is deleted concurrently

Status in neutron:
  New

Bug description:
  It looks like the patches in https://bugs.launchpad.net/neutron/+bug/1253344
  do not handle the trace in update_dhcp_port when a subnet is deleted
  concurrently.

  The trace http://logs.openstack.org/16/281116/16/gate/gate-tempest-dsvm-
  neutron-
  full/e5974dd/logs/screen-q-svc.txt.gz?level=WARNING#_2016-04-12_18_58_48_195
  is seen in the gate while running tempest tests.

  Both the plugin and the agent should be updated to handle this trace.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1569850/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1547406] Re: eslint should be upgraded

2016-04-13 Thread Itxaka Serrano
Hello Jeffrey,

It seems that this was done in https://review.openstack.org/#/c/282249 and
it's already merged. I forgot to attach the bug number to the patch as I
was removed from the bug by mistake.

I'll take the bug and set it as Fix Released; sorry for the inconvenience
and thanks for the work done!

** Changed in: horizon
 Assignee: Jeffrey Augustine (ja224e) => Itxaka Serrano (itxakaserrano)

** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1547406

Title:
  eslint should be upgraded

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The eslint package is frozen at version 1.2.1, which was released on
  the 20th of August 2015.

  The newer versions (the last 1.x version is 1.10.3) fix a lot of issues
  and add a lot of checks which are currently not working correctly or
  which are missing entirely:

  http://eslint.org/docs/rules/operator-linebreak.html

  Which is activated on eslint-config-openstack but was never checked:
  
https://github.com/openstack/horizon/blob/master/horizon%2Fstatic%2Fhorizon%2Fjs%2Fhorizon.quota.js#L103

  
  Also, it provides a new config flag "--max-warnings" which would be
  incredibly useful to put a limit on the number of warnings we get and to
  enforce a new check.
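
  For instance (the path and threshold are illustrative):

      eslint --max-warnings 0 horizon/static/
      # exits non-zero as soon as the warning count exceeds the cap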

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1547406/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1569827] [NEW] LBaaS agent floods log when stats socket is not found

2016-04-13 Thread Christian Zunker
Public bug reported:

The LBaaS agent creates a lot of log messages when a new lb-pool is
created.

As soon as I create a lb-pool:
neutron lb-pool-create --lb-method ROUND_ROBIN --name log-test-lb --protocol 
TCP --subnet-id a6ce9a77-53ca-4704-aaf4-fc255cc5fa74
The log file /var/log/neutron/neutron-lbaas-agent.log starts to fill up with 
messages like these:
2016-04-13 12:56:08.922 15373 WARNING 
neutron.services.loadbalancer.drivers.haproxy.namespace_driver [-] Stats socket 
not found for pool 37cbf817-f1ac-4d47-9a04-93c911d0afdd

The message is correct, as the file /var/lib/neutron/lbaas/37cbf817
-f1ac-4d47-9a04-93c911d0afdd/sock is not present. But the message
repeats every 10s.

The messages stop as soon as the lb-pool gets a VIP. At that point the
file /var/lib/neutron/lbaas/37cbf817-f1ac-4d47-9a04-93c911d0afdd/sock is
present. I would expect the lbaas agent to check whether the stats socket
can exist at all before issuing the message.

Version:
Openstack Juno  on SLES 11 SP3.
The package version of openstack-neutron-lbaas-agent is 2014.2.2.dev26-0.11.2

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1569827

Title:
  LBaaS agent floods log when stats socket is not found

Status in neutron:
  New

Bug description:
  The LBaaS agent creates a lot of log messages when a new lb-pool is
  created.

  As soon as I create a lb-pool:
  neutron lb-pool-create --lb-method ROUND_ROBIN --name log-test-lb --protocol 
TCP --subnet-id a6ce9a77-53ca-4704-aaf4-fc255cc5fa74
  The log file /var/log/neutron/neutron-lbaas-agent.log starts to fill up with 
messages like these:
  2016-04-13 12:56:08.922 15373 WARNING 
neutron.services.loadbalancer.drivers.haproxy.namespace_driver [-] Stats socket 
not found for pool 37cbf817-f1ac-4d47-9a04-93c911d0afdd

  The message is correct, as the file /var/lib/neutron/lbaas/37cbf817
  -f1ac-4d47-9a04-93c911d0afdd/sock is not present. But the message
  repeats every 10s.

  The messages stop as soon as the lb-pool gets a VIP. At that point the
  file /var/lib/neutron/lbaas/37cbf817-f1ac-4d47-9a04-93c911d0afdd/sock
  is present. I would expect the lbaas agent to check whether the stats
  socket can exist at all before issuing the message.

  Version:
  Openstack Juno  on SLES 11 SP3.
  The package version of openstack-neutron-lbaas-agent is 2014.2.2.dev26-0.11.2

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1569827/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1569814] [NEW] Incorrect deprecation warning for IdentityDriverV8

2016-04-13 Thread Henry Nash
Public bug reported:

Commit ad7a7bd6ee36a7af61f88d98038d83aba25a9743
(https://review.openstack.org/#/c/296140/) moved driver interfaces for
core Identity into their own module (base.py in the backend directory).
For compatibility it included a class definition of IdentityDriverV8 in
the original location (core.py), with a deprecation warning.

@versionutils.deprecated(
versionutils.deprecated.MITAKA,
what='keystone.identity.IdentityDriverV8',
in_favor_of='keystone.identity.backends.base.IdentityDriverV8',
remove_in=+1)
class IdentityDriverV8(identity_interface.IdentityDriverV8):
pass

There appear to be two things wrong with this:

1) I don't believe this merged for Mitaka. Hence we can't retrospectively start 
the deprecation in Mitaka
2) I thought we were committing to 2 cycles for deprecation in general.

Hence I think this should be NEWTON + 2.
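
For reference, the decorator as suggested would read roughly as follows
(a sketch only, assuming a NEWTON constant is available in versionutils):

    @versionutils.deprecated(
        versionutils.deprecated.NEWTON,
        what='keystone.identity.IdentityDriverV8',
        in_favor_of='keystone.identity.backends.base.IdentityDriverV8',
        remove_in=+2)
    class IdentityDriverV8(identity_interface.IdentityDriverV8):
        pass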

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1569814

Title:
  Incorrect deprecation warning for IdentityDriverV8

Status in OpenStack Identity (keystone):
  New

Bug description:
  Commit ad7a7bd6ee36a7af61f88d98038d83aba25a9743
  (https://review.openstack.org/#/c/296140/) moved driver interfaces for
  core Identity into their own module (base.py in the backend
  directory). For compatibility it included a class definition of
  IdentityDriverV8 in the original location (core.py), with a
  deprecation warning.

  @versionutils.deprecated(
  versionutils.deprecated.MITAKA,
  what='keystone.identity.IdentityDriverV8',
  in_favor_of='keystone.identity.backends.base.IdentityDriverV8',
  remove_in=+1)
  class IdentityDriverV8(identity_interface.IdentityDriverV8):
  pass

  There appear to be two things wrong with this:

  1) I don't believe this merged for Mitaka. Hence we can't retrospectively 
start the deprecation in Mitaka
  2) I thought we were committing to 2 cycles for deprecation in general.

  Hence I think this should be NEWTON + 2.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1569814/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1569795] [NEW] Network connection is interrupted for a while after ovs-agent restart

2016-04-13 Thread patrick
Public bug reported:

Problem:

After restarting neutron-openvswitch-agent, the tenant network
connection would be interrupted for a while.

The cause is that the new flow in the default table of br-tun, which
resubmits to the right tunneling table, is not added immediately, so the
old flow is removed by cleanup_stale_flows (because of its stale cookie)
before its replacement is in place.

The network connection would be recovered after fdb_add is finished in
neutron-openvswitch-agent.

Affected Neutron version:
Liberty

Possible Solution:

Remove cleanup_stale_flows(), since the stale flows would eventually be
cleaned out by the ovs-agent anyway.
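
For illustration, the cookie-based cleanup being discussed looks roughly
like the sketch below; the bridge methods are illustrative, not the actual
ovs-agent code:

    def cleanup_stale_flows(bridge, current_cookie):
        # Flows left over from the previous agent run carry an old cookie.
        # Deleting them before the replacement flows (e.g. the br-tun
        # default-table resubmit) have been reinstalled is what causes the
        # interruption described above.
        for flow in bridge.dump_flows():
            if flow.cookie != current_cookie:
                bridge.delete_flows(cookie=flow.cookie)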

Thanks.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1569795

Title:
  Network connection is interrupted for a while after ovs-agent restart

Status in neutron:
  New

Bug description:
  Problem:

  After restarting neutron-openvswitch-agent, the tenant network
  connection would be interrupted for a while.

  The cause is that the new flow in the default table of br-tun, which
  resubmits to the right tunneling table, is not added immediately, so the
  old flow is removed by cleanup_stale_flows (because of its stale cookie)
  before its replacement is in place.

  The network connection would be recovered after fdb_add is finished in
  neutron-openvswitch-agent.

  Affected Neutron version:
  Liberty

  Possible Solution:

  Remove cleanup_stale_flows(), since the stale flows would eventually be
  cleaned out by the ovs-agent anyway.

  Thanks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1569795/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1569130] Re: test_timestamp_core not safe with other default service plugins

2016-04-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/304363
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=6797638b9ec0b201efe44be1ea63b6cfe2064596
Submitter: Jenkins
Branch:master

commit 6797638b9ec0b201efe44be1ea63b6cfe2064596
Author: Kevin Benton 
Date:   Sat Apr 9 13:22:28 2016 -0700

Only load timestamp service plugin in timestamp tests

Do not allow the timestamp plugin tests to load all of the default
service plugins to prevent possible resources from being leaked
from those plugins.

Change-Id: Ic31cc86efe8b7415e2a4769d5c2f542d1e7097f5
Closes-Bug: #1569130


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1569130

Title:
  test_timestamp_core not safe with other default service plugins

Status in neutron:
  Fix Released

Bug description:
  The test_timestamp_core test case overrides setup_coreplugin so the
  default service plugins are loaded during that test's execution. These
  plugins may have side effects like registering dictionary extension
  functions, etc that could bleed over into other tests.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1569130/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1569789] [NEW] Should change the spell of opsrofiler to osprofiler

2016-04-13 Thread Pankaj Mishra
Public bug reported:

Change the spelling of 'opsrofiler' to 'osprofiler' at the URL below:
http://docs.openstack.org/developer/glance/configuring.html#

** Affects: glance
 Importance: Undecided
 Assignee: Pankaj Mishra (pankaj-mishra)
 Status: In Progress

** Changed in: glance
 Assignee: (unassigned) => Pankaj Mishra (pankaj-mishra)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1569789

Title:
  Should change the spell of opsrofiler to osprofiler

Status in Glance:
  In Progress

Bug description:
  Change the spelling of 'opsrofiler' to 'osprofiler' at the URL below:
  http://docs.openstack.org/developer/glance/configuring.html#

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1569789/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1569430] Re: When trying to launch an instance, dashboard spits out errors retrieving settings and user information

2016-04-13 Thread James Page
This was incorrectly raised against Horizon, rather than the Juju charm
for deploying horizon.

** Also affects: openstack-dashboard (Juju Charms Collection)
   Importance: Undecided
   Status: New

** Changed in: horizon
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1569430

Title:
  When trying to launch an instance, dashboard spits out errors
  retrieving settings and user information

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in openstack-dashboard package in Juju Charms Collection:
  New

Bug description:
  When attempting to launch an instance in openstack-dashboard under
  Mitaka, the errors in the attached file appear and no information is
  available to be input into the instance window.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1569430/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1569242] Re: api error on running command : nova baremetal-node-list

2016-04-13 Thread Sylvain Bauza
As explained in IRC, that looks like a configuration problem:

- 
https://github.com/openstack/python-ironicclient/blob/master/ironicclient/client.py#L78
 takes the endpoint value from the ironic_url parameter
- whose value is taken from a config opt 
https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/baremetal_nodes.py#L60
- where 
https://github.com/openstack/nova/blob/master/nova/conf/ironic.py#L29-L31
defaults to None (meaning you have to set it)
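
For reference, a minimal sketch of the [ironic] section this points at; the
option names follow the Mitaka-era nova configuration and the values are
placeholders, so check the sample nova.conf of the release actually in use:

    [ironic]
    api_endpoint = http://ironic.example.com:6385/v1
    admin_username = ironic
    admin_password = secret
    admin_tenant_name = service
    admin_url = http://keystone.example.com:35357/v2.0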


** Changed in: python-ironicclient
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1569242

Title:
  api error on running command : nova baremetal-node-list

Status in OpenStack Compute (nova):
  Invalid
Status in python-ironicclient:
  Invalid

Bug description:
   nova baremetal-node-list
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-bebb641d-18e6-4016-8939-860216fcc754)

  This is the nova-api log

  2016-04-12 14:55:36.671 ERROR nova.api.openstack.extensions 
[req-f6b69035-d1ee-487a-9380-b5c715cbb368 admin admin] Unexpected exception in 
API method
  2016-04-12 14:55:36.671 TRACE nova.api.openstack.extensions Traceback (most 
recent call last):
  2016-04-12 14:55:36.671 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/extensions.py", line 478, in wrapped
  2016-04-12 14:55:36.671 TRACE nova.api.openstack.extensions return 
f(*args, **kwargs)
  2016-04-12 14:55:36.671 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/baremetal_nodes.py", line 90, in 
index
  2016-04-12 14:55:36.671 TRACE nova.api.openstack.extensions icli = 
_get_ironic_client()
  2016-04-12 14:55:36.671 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/baremetal_nodes.py", line 61, in 
_get_ironic_client
  2016-04-12 14:55:36.671 TRACE nova.api.openstack.extensions icli = 
ironic_client.get_client(CONF.ironic.api_version, **kwargs)
  2016-04-12 14:55:36.671 TRACE nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/ironicclient/client.py", line 137, in 
get_client
  2016-04-12 14:55:36.671 TRACE nova.api.openstack.extensions raise 
exc.AmbiguousAuthSystem(exception_msg)
  2016-04-12 14:55:36.671 TRACE nova.api.openstack.extensions 
AmbiguousAuthSystem: Must provide Keystone credentials or user-defined endpoint 
and token
  2016-04-12 14:55:36.671 TRACE nova.api.openstack.extensions 
  2016-04-12 14:55:36.746 INFO nova.api.openstack.wsgi 
[req-f6b69035-d1ee-487a-9380-b5c715cbb368 admin admin] HTTP exception thrown: 
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and 
attach the Nova API log if possible.
  
  2016-04-12 14:55:36.750 DEBUG nova.api.openstack.wsgi 
[req-f6b69035-d1ee-487a-9380-b5c715cbb368 admin admin] Returning 500 to user: 
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and 
attach the Nova API log if possible.
   from (pid=3874) __call__ 
/opt/stack/nova/nova/api/openstack/wsgi.py:1070
  2016-04-12 14:55:36.766 INFO nova.osapi_compute.wsgi.server 
[req-f6b69035-d1ee-487a-9380-b5c715cbb368 admin admin] 192.168.122.72 "GET 
/v2.1/os-baremetal-nodes HTTP/1.1" status: 500 len: 513 time: 1.7956100

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1569242/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1569783] [NEW] JS dev dependencies are outdated

2016-04-13 Thread Rob Cresswell
Public bug reported:

The JS dev dependencies are outdated. This is specifically important for
Jasmine, since it should mirror the xstatic package.

** Affects: horizon
 Importance: Wishlist
 Assignee: Rob Cresswell (robcresswell)
 Status: In Progress

** Changed in: horizon
   Importance: Undecided => Wishlist

** Changed in: horizon
 Assignee: (unassigned) => Rob Cresswell (robcresswell)

** Changed in: horizon
Milestone: None => newton-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1569783

Title:
  JS dev dependencies are outdated

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The JS dev dependencies are outdated. This is specifically important
  for Jasmine, since it should mirror the xstatic package.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1569783/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1569242] Re: api error on running command : nova baremetal-node-list

2016-04-13 Thread Sylvain Bauza
The stack trace shows an issue with the ironicclient. That can possibly be a
configuration problem, but either way it's not related to Nova.

** Also affects: python-ironicclient
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1569242

Title:
  api error on running command : nova baremetal-node-list

Status in OpenStack Compute (nova):
  Invalid
Status in python-ironicclient:
  New

Bug description:
   nova baremetal-node-list
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-bebb641d-18e6-4016-8939-860216fcc754)

  This is the nova-api log

  2016-04-12 14:55:36.671 ERROR nova.api.openstack.extensions 
[req-f6b69035-d1ee-487a-9380-b5c715cbb368 admin admin] Unexpected exception in 
API method
  2016-04-12 14:55:36.671 TRACE nova.api.openstack.extensions Traceback (most 
recent call last):
  2016-04-12 14:55:36.671 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/extensions.py", line 478, in wrapped
  2016-04-12 14:55:36.671 TRACE nova.api.openstack.extensions return 
f(*args, **kwargs)
  2016-04-12 14:55:36.671 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/baremetal_nodes.py", line 90, in 
index
  2016-04-12 14:55:36.671 TRACE nova.api.openstack.extensions icli = 
_get_ironic_client()
  2016-04-12 14:55:36.671 TRACE nova.api.openstack.extensions   File 
"/opt/stack/nova/nova/api/openstack/compute/baremetal_nodes.py", line 61, in 
_get_ironic_client
  2016-04-12 14:55:36.671 TRACE nova.api.openstack.extensions icli = 
ironic_client.get_client(CONF.ironic.api_version, **kwargs)
  2016-04-12 14:55:36.671 TRACE nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/ironicclient/client.py", line 137, in 
get_client
  2016-04-12 14:55:36.671 TRACE nova.api.openstack.extensions raise 
exc.AmbiguousAuthSystem(exception_msg)
  2016-04-12 14:55:36.671 TRACE nova.api.openstack.extensions 
AmbiguousAuthSystem: Must provide Keystone credentials or user-defined endpoint 
and token
  2016-04-12 14:55:36.671 TRACE nova.api.openstack.extensions 
  2016-04-12 14:55:36.746 INFO nova.api.openstack.wsgi 
[req-f6b69035-d1ee-487a-9380-b5c715cbb368 admin admin] HTTP exception thrown: 
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and 
attach the Nova API log if possible.
  
  2016-04-12 14:55:36.750 DEBUG nova.api.openstack.wsgi 
[req-f6b69035-d1ee-487a-9380-b5c715cbb368 admin admin] Returning 500 to user: 
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and 
attach the Nova API log if possible.
   from (pid=3874) __call__ 
/opt/stack/nova/nova/api/openstack/wsgi.py:1070
  2016-04-12 14:55:36.766 INFO nova.osapi_compute.wsgi.server 
[req-f6b69035-d1ee-487a-9380-b5c715cbb368 admin admin] 192.168.122.72 "GET 
/v2.1/os-baremetal-nodes HTTP/1.1" status: 500 len: 513 time: 1.7956100

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1569242/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1569779] [NEW] allow to investigate instance actions after instance deletion

2016-04-13 Thread George Shuklin
Public bug reported:

Right now, if an instance has been deleted, 'nova instance-action-list'
returns 404. Due to the very specific nature of the action list, it would
be very nice to have the ability to see action lists for deleted instances,
especially the deletion request.

Can this feature be added to nova? At least for administrators.

Thanks.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1569779

Title:
  allow to investigate instance actions after instance deletion

Status in OpenStack Compute (nova):
  New

Bug description:
  Right now, if an instance has been deleted, 'nova instance-action-list'
  returns 404. Due to the very specific nature of the action list, it would
  be very nice to have the ability to see action lists for deleted instances,
  especially the deletion request.

  Can this feature be added to nova? At least for administrators.

  Thanks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1569779/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1569507] Re: test_setup_reserved_with_isolated_metadata_enable

2016-04-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/304917
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=61e76a34f322828572882ef66598b3da58ef36bc
Submitter: Jenkins
Branch:master

commit 61e76a34f322828572882ef66598b3da58ef36bc
Author: Henry Gessau 
Date:   Wed Apr 13 00:45:10 2016 +

Revert "Add 169.254.169.254 when enable force_metadata"

This reverts commit 45bec12cfc27578597cd197334979d0aee73a2cf.

Closes-bug: #1569507

Change-Id: Ie605f3b848a35b5db545c69038249155477d71e0


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1569507

Title:
  test_setup_reserved_with_isolated_metadata_enable

Status in neutron:
  Fix Released

Bug description:
  http://logs.openstack.org/74/268274/35/check/gate-neutron-
  python27/3de5871/testr_results.html.gz

  ft54.5: 
neutron.tests.unit.agent.linux.test_dhcp.TestDeviceManager.test_setup_reserved_with_isolated_metadata_enable_StringException:
 Empty attachments:
pythonlogging:''
stderr
stdout

  Traceback (most recent call last):
File "neutron/tests/unit/agent/linux/test_dhcp.py", line 2122, in 
test_setup_reserved_with_isolated_metadata_enable
  self._test_setup_reserved(enable_isolated_metadata=True)
File "neutron/tests/unit/agent/linux/test_dhcp.py", line 2101, in 
_test_setup_reserved
  mgr.setup(network)
File "neutron/agent/linux/dhcp.py", line 1219, in setup
  port = self.setup_dhcp_port(network)
File "neutron/agent/linux/dhcp.py", line 1191, in setup_dhcp_port
  for fixed_ip in dhcp_port.fixed_ips]
  TypeError: 'Mock' object is not iterable

  
  To reproduce locally,

  python -m testtools.run
  neutron.tests.unit.agent.linux.test_dhcp.TestDeviceManager
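
  One common way to avoid the "'Mock' object is not iterable" failure in a
  test like this is to give the mocked DHCP port a concrete fixed_ips list;
  a minimal sketch (the attribute names mirror the traceback, the values
  are illustrative):

      import mock

      dhcp_port = mock.Mock()
      dhcp_port.fixed_ips = [
          mock.Mock(subnet_id='subnet-1', ip_address='192.168.0.2'),
      ]

      # Iterating over dhcp_port.fixed_ips, as setup_dhcp_port does, now works.
      for fixed_ip in dhcp_port.fixed_ips:
          print(fixed_ip.ip_address)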

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1569507/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1508442] Re: LOG.warn is deprecated

2016-04-13 Thread Julien Danjou
** Changed in: gnocchi
   Status: Fix Committed => Fix Released

** Changed in: gnocchi
Milestone: None => 2.1.0

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1508442

Title:
  LOG.warn is deprecated

Status in anvil:
  In Progress
Status in Aodh:
  Fix Released
Status in Astara:
  Fix Released
Status in Barbican:
  Fix Released
Status in bilean:
  Fix Released
Status in Blazar:
  In Progress
Status in Ceilometer:
  Fix Released
Status in cloud-init:
  In Progress
Status in cloudkitty:
  Fix Released
Status in congress:
  Fix Released
Status in Designate:
  Fix Released
Status in django-openstack-auth:
  Fix Released
Status in django-openstack-auth-kerberos:
  In Progress
Status in DragonFlow:
  Fix Released
Status in ec2-api:
  Fix Released
Status in Evoque:
  In Progress
Status in gce-api:
  Fix Released
Status in Gnocchi:
  Fix Released
Status in heat:
  Fix Released
Status in heat-cfntools:
  In Progress
Status in OpenStack Identity (keystone):
  Fix Released
Status in KloudBuster:
  Fix Released
Status in kolla:
  Fix Released
Status in Magnum:
  Fix Released
Status in Manila:
  Fix Released
Status in Mistral:
  Fix Committed
Status in networking-arista:
  In Progress
Status in networking-calico:
  In Progress
Status in networking-cisco:
  In Progress
Status in networking-fujitsu:
  Fix Released
Status in networking-odl:
  In Progress
Status in networking-ofagent:
  In Progress
Status in networking-plumgrid:
  In Progress
Status in networking-powervm:
  Fix Released
Status in networking-vsphere:
  Fix Released
Status in neutron:
  In Progress
Status in OpenStack Compute (nova):
  In Progress
Status in nova-powervm:
  Fix Released
Status in nova-solver-scheduler:
  In Progress
Status in octavia:
  Fix Released
Status in openstack-ansible:
  In Progress
Status in oslo.cache:
  Fix Released
Status in Packstack:
  Fix Released
Status in python-dracclient:
  Fix Released
Status in python-magnumclient:
  Fix Released
Status in RACK:
  In Progress
Status in python-watcherclient:
  In Progress
Status in OpenStack Search (Searchlight):
  Fix Released
Status in shaker:
  In Progress
Status in Solum:
  Fix Released
Status in tempest:
  In Progress
Status in tripleo:
  In Progress
Status in trove-dashboard:
  Fix Released
Status in watcher:
  Fix Released
Status in zaqar:
  In Progress

Bug description:
  LOG.warn is deprecated in Python 3 [1]. But it is still used in a few
  places; the non-deprecated LOG.warning should be used instead.

  Note: If we are using logger from oslo.log, warn is still valid [2],
  but I agree we can switch to LOG.warning.

  [1]https://docs.python.org/3/library/logging.html#logging.warning
  [2]https://github.com/openstack/oslo.log/blob/master/oslo_log/log.py#L85
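
  For illustration, the change being applied across these projects looks
  like this (the logger setup is generic and not tied to any one project):

      import logging

      LOG = logging.getLogger(__name__)

      # Deprecated spelling, discouraged in Python 3:
      LOG.warn('disk usage is high: %d%%', 91)

      # Preferred spelling, which also works with oslo.log loggers:
      LOG.warning('disk usage is high: %d%%', 91)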

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1508442/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1569739] [NEW] launch instance Error: Unable to create the server.

2016-04-13 Thread Mikko Elomaa
Public bug reported:

Trying to launch a new instance from the dashboard.
Fresh two-node CentOS 7 installation following
http://docs.openstack.org/mitaka/install-guide-rdo/

Error message: Error: Unable to create the server.

Nova-api.log attached


1.
openstack-nova-common-13.0.0-1.el7.noarch
openstack-nova-conductor-13.0.0-1.el7.noarch
python-novaclient-3.3.0-1.el7.noarch
python-nova-13.0.0-1.el7.noarch
openstack-nova-novncproxy-13.0.0-1.el7.noarch
openstack-nova-cert-13.0.0-1.el7.noarch
openstack-nova-api-13.0.0-1.el7.noarch
openstack-nova-console-13.0.0-1.el7.noarch
openstack-nova-scheduler-13.0.0-1.el7.noarch

2.
KVM

3.
Neutron

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: "nova-api-log.txt"
   
https://bugs.launchpad.net/bugs/1569739/+attachment/4635166/+files/nova-api-log.txt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1569739

Title:
  launch instance Error: Unable to create the server.

Status in OpenStack Compute (nova):
  New

Bug description:
  Trying to launch a new instance from the dashboard.
  Fresh two-node CentOS 7 installation following
  http://docs.openstack.org/mitaka/install-guide-rdo/

  Error message: Error: Unable to create the server.

  Nova-api.log attached

  
  1.
  openstack-nova-common-13.0.0-1.el7.noarch
  openstack-nova-conductor-13.0.0-1.el7.noarch
  python-novaclient-3.3.0-1.el7.noarch
  python-nova-13.0.0-1.el7.noarch
  openstack-nova-novncproxy-13.0.0-1.el7.noarch
  openstack-nova-cert-13.0.0-1.el7.noarch
  openstack-nova-api-13.0.0-1.el7.noarch
  openstack-nova-console-13.0.0-1.el7.noarch
  openstack-nova-scheduler-13.0.0-1.el7.noarch

  2.
  KVM

  3.
  Neutron

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1569739/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1428946] Re: add keystone service id to observer audit

2016-04-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/303963
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=3ff7f133472a015ec28f806d353f7d5d6570476f
Submitter: Jenkins
Branch:master

commit 3ff7f133472a015ec28f806d353f7d5d6570476f
Author: Ryosuke Mizuno 
Date:   Mon Apr 11 17:07:29 2016 +0900

Add keystone service ID to observer audit

Information of the observer in the CADF notification was  only two of
the "typeURI" and "id".
So, add the ID of the identity service as a "identity_id".

Change-Id: I579ad3051f784a6411f6f7af636c2c91d75c7425
Closes-Bug: #1428946


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1428946

Title:
  add keystone service id to observer audit

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  When a CADF notification is sent off, the 'observer' portion looks
  like the following:

  "observer": {
  "typeURI": "service/security",
  "id": "openstack:3d4a50a9-2b59-438b-bf19-c231f9c7625a"
  },

  The ID field should be the ID of the keystone/identity service.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1428946/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1569122] Re: return 500 when referring an image with status "SAVING"

2016-04-13 Thread Sirisha
Hi,

I think the bug is valid in nova itself


** Changed in: nova
 Assignee: (unassigned) => Sirisha (sirisha-1)

** Changed in: nova
   Status: Invalid => Confirmed

** Changed in: python-glanceclient
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1569122

Title:
  return 500 when referring an image with status "SAVING"

Status in OpenStack Compute (nova):
  Confirmed
Status in python-glanceclient:
  Invalid

Bug description:
  When I refer to an image that is in the "SAVING" state using a 'nova' command,
  it returns a 500 error.

  [How to reproduce]
  1. Create empty image
  $ glance image-create --name test --visibility public --disk-format ari

  +--+--+
  | Property | Value|
  +--+--+
  | checksum | None |
  | container_format | None |
  | created_at   | 2016-04-12T01:27:49Z |
  | disk_format  | ari  |
  | id   | 6d1ce183-e6ed-4dd0-a799-2f1b5abc6b3b |
  | min_disk | 0|
  | min_ram  | 0|
  | name | test |
  | owner| 662765a438ed40a9bc85b82e9f2a2cab |
  | protected| False|
  | size | None |
  | status   | queued   |
  | tags | []   |
  | updated_at   | 2016-04-12T01:27:49Z |
  | virtual_size | None |
  | visibility   | public   |
  +--+--+

  2. Confirm the image status.
  $ nova image-list
  +--+--+++
  | ID   | Name | Status | Server |
  +--+--+++
  | 6d1ce183-e6ed-4dd0-a799-2f1b5abc6b3b | test | SAVING ||
  +--+--+++

  3. Execute 'nova image-show' or 'nova image-delete'
  (This is example of 'nova image-show')
  $ nova --debug image-show 6d1ce183-e6ed-4dd0-a799-2f1b5abc6b3b
  DEBUG (extension:157) found extension EntryPoint.parse('v2token = 
keystoneauth1.loading._plugins.identity.v2:Token')
  DEBUG (extension:157) found extension EntryPoint.parse('admin_token = 
keystoneauth1.loading._plugins.admin_token:AdminToken')
  DEBUG (extension:157) found extension EntryPoint.parse('v3oidcauthcode = 
keystoneauth1.loading._plugins.identity.v3:OpenIDConnectAuthorizationCode')
  DEBUG (extension:157) found extension EntryPoint.parse('v2password = 
keystoneauth1.loading._plugins.identity.v2:Password')
  DEBUG (extension:157) found extension EntryPoint.parse('v3password = 
keystoneauth1.loading._plugins.identity.v3:Password')
  DEBUG (extension:157) found extension EntryPoint.parse('v3oidcpassword = 
keystoneauth1.loading._plugins.identity.v3:OpenIDConnectPassword')
  DEBUG (extension:157) found extension EntryPoint.parse('token = 
keystoneauth1.loading._plugins.identity.generic:Token')
  DEBUG (extension:157) found extension EntryPoint.parse('v3token = 
keystoneauth1.loading._plugins.identity.v3:Token')
  DEBUG (extension:157) found extension EntryPoint.parse('password = 
keystoneauth1.loading._plugins.identity.generic:Password')
  DEBUG (session:248) REQ: curl -g -i -X GET http://192.168.3.223:5000/v2.0 -H 
"Accept: application/json" -H "User-Agent: keystoneauth1/2.3.0 
python-requests/2.9.1 CPython/2.7.6"
  INFO (connectionpool:207) Starting new HTTP connection (1): 192.168.3.223
  DEBUG (connectionpool:387) "GET /v2.0 HTTP/1.1" 200 339
  DEBUG (session:277) RESP: [200] Content-Length: 339 Vary: X-Auth-Token 
Keep-Alive: timeout=5, max=100 Server: Apache/2.4.7 (Ubuntu) Connection: 
Keep-Alive Date: Tue, 12 Apr 2016
  01:28:44 GMT Content-Type: application/json x-openstack-request-id: 
req-bc96ffbe-fe92-4fbe-931e-51c071766609
  RESP BODY: {"version": {"status": "stable", "updated": 
"2014-04-17T00:00:00Z", "media-types": [{"base": "application/json", "type": 
"application/vnd.openstack.identity-v2.0+json"
  }], "id": "v2.0", "links": [{"href": "http://192.168.3.223:5000/v2.0/;, 
"rel": "self"}, {"href": "http://docs.openstack.org/;, "type": "text/html", 
"rel": "describedby"}]}}

  DEBUG (v2:63) Making authentication request to 
http://192.168.3.223:5000/v2.0/tokens
  DEBUG (connectionpool:387) "POST /v2.0/tokens HTTP/1.1" 200 3569
  DEBUG (session:248) REQ: curl -g -i -X GET 

[Yahoo-eng-team] [Bug 1421239] Re: LBaaS- overwrite when associate 2 different Vip's to same external IP

2016-04-13 Thread Eran Kuris
** Changed in: neutron
   Status: Incomplete => Invalid

** Changed in: neutron
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1421239

Title:
  LBaaS- overwrite when associate  2 different Vip's  to same external
  IP

Status in neutron:
  New

Bug description:
  Version :
  [root@puma15 ~]# rpm -qa |grep neutron
  openstack-neutron-openvswitch-2014.2.1-6.el7ost.noarch
  python-neutronclient-2.3.9-1.el7ost.noarch
  openstack-neutron-2014.2.1-6.el7ost.noarch
  python-neutron-2014.2.1-6.el7ost.noarch
  openstack-neutron-ml2-2014.2.1-6.el7ost.noarch

  
  I created 2 VIPs in LBaaS.
  I associated one VIP (192.168.1.8) with an external IP (10.35.166.4).
  When I try to associate the second VIP (192.168.1.9) with the same external 
IP 10.35.166.4, the call succeeds but it overwrites the first VIP, and there is 
no warning message.
  We should prevent this, as we cannot associate a VM with an external IP that 
is already associated with another VM (we get an error message: Conflict (HTTP 
409) (Request-ID: req-35eeddc9-d264-4fc5-a336-d201e4a66231 ])

  **If we do not prevent it, we should at least send a warning message about it.
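
  For illustration, the kind of pre-check being asked for could look like
  the sketch below, using python-neutronclient; the client setup is omitted
  and the guard itself is only an example, not existing neutron behaviour:

      def associate_floating_ip(client, floatingip_id, port_id):
          fip = client.show_floatingip(floatingip_id)['floatingip']
          if fip['port_id'] and fip['port_id'] != port_id:
              raise RuntimeError('Floating IP %s is already associated with '
                                 'port %s' % (floatingip_id, fip['port_id']))
          client.update_floatingip(
              floatingip_id, {'floatingip': {'port_id': port_id}})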

  How to Reproduce:
  Create in your tenant 2 VM's  with internal network & external network.

  1. Create LB pool on subnet of tenant A
  >

  2. Log into VM1 and VM2, enable httpd
  

  3. Create 2 members into webpool (use VM’s Ips)
   --protocol-port 80 webpool>

  4.Create a Healthmonitor and associated it with the webpool

  

  Run:
  

   webpool

  5.Create a VIP for the webpool
   webpool>

  6.Check HAproxy created (for example)
  

  7. Create a floating IP from external network
  

  8.Check port id of LB VIP
  
  

  9.Associate floating IP with the LB VIP
   

  10.From external network, try to connect the floating IP by curl
  http://IP>

  create for same VM's  VIP  to other service FTP  or SSH  for example
  and try to associate it to same external floating ip that we used
  already.

  see the command I entered :
  __

  [root@puma15 ~(keystone_eran1)]# neutron port-list
  
+--+--+---+-+
  | id   | name 
| mac_address   | fixed_ips 
  |
  
+--+--+---+-+
  | 17552712-9b66-466b-8836-7e09a15f439a |  
| fa:16:3e:6d:93:c3 | {"subnet_id": "cff1967c-dd18-4247-849d-97c363c39b4f", 
"ip_address": "10.35.166.5"}  |
  | 2668a4e8-7ddf-4242-813b-5febe7269d1b |  
| fa:16:3e:9d:d0:cd | {"subnet_id": "c86dbc71-78f8-47a6-9531-fd84a78c5fed", 
"ip_address": "192.168.1.3"}  |
  | 3f9691df-9e33-4ea2-812a-83d51fdf45a1 |  
| fa:16:3e:12:18:fb | {"subnet_id": "c86dbc71-78f8-47a6-9531-fd84a78c5fed", 
"ip_address": "192.168.1.6"}  |
  | 4004c3d0-390a-4db6-9c2c-b96b4a920af8 |  
| fa:16:3e:df:08:6f | {"subnet_id": "c86dbc71-78f8-47a6-9531-fd84a78c5fed", 
"ip_address": "192.168.1.1"}  |
  | 460dafb7-3d05-4494-a83d-416b82b4eace | 
vip-adc9c688-97d2-425b-abd0-29505ac10a76 | fa:16:3e:65:8d:ff | {"subnet_id": 
"c86dbc71-78f8-47a6-9531-fd84a78c5fed", "ip_address": "192.168.1.56"} |
  | 6eba1c27-d202-470b-b89b-73a569d9da00 |  
| fa:16:3e:4d:b9:18 | {"subnet_id": "c86dbc71-78f8-47a6-9531-fd84a78c5fed", 
"ip_address": "192.168.1.7"}  |
  | 73149116-32e8-4960-a0a5-286bdb117d96 | 
vip-bc9c9025-db2c-40cd-8663-50c687639ff1 | fa:16:3e:75:53:33 | {"subnet_id": 
"c86dbc71-78f8-47a6-9531-fd84a78c5fed", "ip_address": "192.168.1.55"} |
  | 8f6ba5ab-12d8-4b1c-b78f-60bf90210dc9 |  
| fa:16:3e:ee:a7:a3 | {"subnet_id": "65bfd326-7912-4ce6-91fd-04d81476d20e", 
"ip_address": "192.168.2.4"}  |
  | aad2870a-4f18-4471-985f-943911dfaaf3 |  
| fa:16:3e:d9:d7:85 | {"subnet_id": "65bfd326-7912-4ce6-91fd-04d81476d20e", 
"ip_address": "192.168.2.3"}  |
  | b359ba1f-dde1-4576-9904-644acb032c4f |  
| fa:16:3e:92:d9:8c | {"subnet_id": "cff1967c-dd18-4247-849d-97c363c39b4f", 
"ip_address": "10.35.166.4"}  |
  | b471ddb4-d1b8-4df0-b098-60a2a45a1d4f |  
| fa:16:3e:8c:5b:4c | {"subnet_id": "65bfd326-7912-4ce6-91fd-04d81476d20e", 
"ip_address": "192.168.2.1"}  |
  | c0fe10e3-32b8-4373-b848-8d9ba5879a41 |  
| 

[Yahoo-eng-team] [Bug 1569714] [NEW] Don't select the user's primary project on editing User

2016-04-13 Thread Calvin
Public bug reported:

When editing a user on the 'Identity/Users' page, the primary project
select control does not select the user's primary project.

I debugged and found that 'user.project_id' in line 136 of the file
horizon/openstack_dashboard/dashboards/identity/users/views.py always
returns None.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1569714

Title:
  Don't select the user's primary project on editing User

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When editing a user on the 'Identity/Users' page, the primary project
  select control does not select the user's primary project.

  I debugged and found that 'user.project_id' in line 136 of the file
  horizon/openstack_dashboard/dashboards/identity/users/views.py always
  returns None.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1569714/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1569713] [NEW] user.project_id

2016-04-13 Thread Calvin
Public bug reported:

When editing user info on the 'Identity/Users' page, the primary project
select control does not select the user's primary project.

I debugged and found that 'user.project_id' in line 136 of the file
horizon/openstack_dashboard/dashboards/identity/users/views.py always
returns None.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1569713

Title:
  user.project_id

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When editing user info on the 'Identity/Users' page, the primary project
  select control does not select the user's primary project.

  I debugged and found that 'user.project_id' in line 136 of the file
  horizon/openstack_dashboard/dashboards/identity/users/views.py always
  returns None.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1569713/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1567413] Re: Keystone fetches data from Memcache even if caching is explicitly turned off

2016-04-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/304688
Committed: 
https://git.openstack.org/cgit/openstack/oslo.cache/commit/?id=ea191cacb14818989564ffe1f3727f28be3c3a21
Submitter: Jenkins
Branch:master

commit ea191cacb14818989564ffe1f3727f28be3c3a21
Author: Morgan Fainberg 
Date:   Tue Apr 12 08:09:17 2016 -0700

If caching is globally disabled force dogpile to use the null backend

Due to the way caching is disabled the SHOULD_CACHE_FN() is used
to determine if new values are stored in the cache backend, and is
only called when the regeneration is required. If dogpile is
configured to connect to a memcache, redis, etc to store data
it is possible with caching disabled to still pull values from
the cache (SHOULD_CACHE_FN has no bearing on reads).

The issue described only impacts the use of the memoization
decorator.

This change forces dogpile to use the null backend if caching is
globally disabled to ensure no data is read from the external
cache. This will not affect subsystem disabling of cache.

Even with cache disabled but reads coming from the external cache,
there stale data is not a concern as invalidates will still be
processed and the data from the backend will eventually timeout
in most cases.

Change-Id: I845b6cc18faa2be516676eeacc574473ca84c995
Closes-Bug: #1567413
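
The effect of the fix can be illustrated with a minimal dogpile.cache
sketch; the option handling is simplified here and only the backend names
('dogpile.cache.null', 'dogpile.cache.memcached') come from dogpile itself:

    from dogpile.cache import make_region

    def make_cache_region(cache_enabled, memcache_url='127.0.0.1:11211'):
        region = make_region()
        if not cache_enabled:
            # The null backend never returns cached data and ignores writes,
            # so nothing can be read back from an external memcache.
            return region.configure('dogpile.cache.null')
        return region.configure('dogpile.cache.memcached',
                                 arguments={'url': memcache_url})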


** Changed in: oslo.cache
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1567413

Title:
  Keystone fetches data from Memcache even if caching is explicitly
  turned off

Status in OpenStack Identity (keystone):
  Invalid
Status in oslo.cache:
  Fix Released

Bug description:
  == Abstract ==

  I'm profiling Keystone using the OSprofiler tool and the appropriate
  Keystone+OSprofiler integration changes -
https://review.openstack.org/#/q/status:open+project:openstack/keystone+branch:master+topic
  :osprofiler-support-in-keystone - currently on review. The idea was to
  analyse how Keystone uses the DB/cache layers.

  == Expected vs Observed==

  I'm turning off cache via setting

  [cache]
  enabled = False

  I'm expecting all data to be fetched from the DB in this case, but I still
  see gets from Memcache. I mean *real* gets: not just attempts to grab
  values, but real operations happening on values fetched from memcache,
  here:
  
https://bitbucket.org/zzzeek/dogpile.cache/src/c6913eb143b24b4a886124ff0da5c935ea34e3ac/dogpile/cache/region.py?at=master
  =file-view-default#region.py-617

  Adding OSprofiler HTML report from token issue API call.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1567413/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1565108] Re: "unexpected error" attempting to rename a project when name is already in use

2016-04-13 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/301418
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=c1be6883f250e6bc0ad1b43eb516186f74a477f1
Submitter: Jenkins
Branch:master

commit c1be6883f250e6bc0ad1b43eb516186f74a477f1
Author: liyingjun 
Date:   Thu Mar 31 08:13:49 2016 +0800

Fix KeyError when rename to a name is already in use

When a user attempts to rename a project via the PATCH
v3/projects/{project_id} API, and the new name is already in-use, rather
than return a nice error explaining that the name is in use, keystone
blows up and raises `KeyError: 'is_domain'` in
_generate_project_name_conflict_msg.

Change-Id: I56fcd8fe1258e2d1de3e541144649ef619f86a7b
Closes-bug: #1565108


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1565108

Title:
  "unexpected error" attempting to rename a project when name is already
  in use

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  When a user attempts to rename a project via the PATCH
  v3/projects/{project_id} API, and the new name is already in-use,
  rather than return a nice error explaining that the name is in use,
  keystone blows up and returns an HTTP 500:

  # curl -k -1 -i -X PATCH 
https://localhost:5000/v3/projects/e4e12eb216ca471cb2646bcdbdcc3ddc -H 
"User-Agent: python-keystoneclient" -H "Content-Type: application/json" -H 
"Accept: application/json" -H "X-Auth-Token: 04a2539bb3144af6867d8c7e50b15607" 
-d '{"project": {"name": "Project_B"}}'
  HTTP/1.1 500 Internal Server Error
  Date: Fri, 01 Apr 2016 21:42:50 GMT
  Server: Apache
  Vary: X-Auth-Token
  x-openstack-request-id: req-8d9904e7-5775-42ec-8fc5-773f7f38cbf2
  Content-Length: 143
  Connection: close
  Content-Type: application/json

  {"error": {"message": "An unexpected error prevented the server from
  fulfilling your request.", "code": 500, "title": "Internal Server
  Error"}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1565108/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566188] Re: keystone client reports 500 error if database service is not running

2016-04-13 Thread Morgan Fainberg
The database being offline is both unexpected and totally an "ISE"
(internal server error). This would in fact be what I'd expect from a
web service.

I'm marking this as "wont fix"

** Changed in: keystone
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1566188

Title:
  keystone client reports 500 error if database service is not running

Status in OpenStack Identity (keystone):
  Won't Fix

Bug description:
  When running a keystone command to authenticate from cinderclient,
  keystone reports a 500 internal server error.

  > 
/usr/local/lib/python2.7/dist-packages/keystoneclient/adapter.py(116)get_token()
  -> return self.session.get_token(auth or self.auth)
  (Pdb) 
  InternalServerError: Internal...24d54)',)
  > 
/usr/local/lib/python2.7/dist-packages/keystoneclient/adapter.py(116)get_token()
  -> return self.session.get_token(auth or self.auth)
  (Pdb) locals()
  {'self': , 
'__exception__': (, 
InternalServerError(u'An unexpected error prevented the server from fulfilling 
your request. (HTTP 500) (Request-ID: 
req-e4fbb478-79f4-4529-b061-512b29324d54)',)), 'auth': None}

  It should report a 400 error with proper details.

  
  Steps to reproduce:

  1. Stop mysql.
  2. Run 'cinder list' or any command (it internally calls the keystone client to 
authenticate the request).
  3. Check the output.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1566188/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1569693] [NEW] Ports not recreated during neutron agent restart

2016-04-13 Thread Sreekumar S
Public bug reported:

Ports are not recreated during neutron agent restart with l2_pop=False.
When the agent is restarted after deleting br-int and br-tun, the bridges are 
recreated but the tunnel ports and some of the integration bridge ports are not 
recreated.
When l2_pop is set to True and the same procedure is retried, the bridges and 
tunnel ports to all compute nodes running instances are recreated properly.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1569693

Title:
  Ports not recreated during neutron agent restart

Status in neutron:
  New

Bug description:
  Ports are not recreated during neutron agent restart with l2_pop=False.
  When the agent is restarted after deleting br-int and br-tun, the bridges are 
recreated but the tunnel ports and some of the integration bridge ports are not 
recreated.
  When l2_pop is set to True and the same procedure is retried, the bridges 
and tunnel ports to all compute nodes running instances are recreated properly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1569693/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp