[Yahoo-eng-team] [Bug 1596813] [NEW] stable/mitaka branch creation request for networking-midonet

2016-06-28 Thread YAMAMOTO Takashi
Public bug reported:

Please cut stable/mitaka branch for networking-midonet
on commit 797a2a892e08f3e8162f5f168a7c5e3420f961bc.

** Affects: networking-midonet
 Importance: High
 Assignee: YAMAMOTO Takashi (yamamoto)
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: release-subproject

** Tags added: release-subproject

** Changed in: networking-midonet
Milestone: None => 2.0.0

** Changed in: networking-midonet
   Importance: Undecided => High

** Changed in: networking-midonet
 Assignee: (unassigned) => YAMAMOTO Takashi (yamamoto)

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1596813

Title:
  stable/mitaka branch creation request for networking-midonet

Status in networking-midonet:
  New
Status in neutron:
  New

Bug description:
  Please cut stable/mitaka branch for networking-midonet
  on commit 797a2a892e08f3e8162f5f168a7c5e3420f961bc.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1596813/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449062] Re: qemu-img calls need to be restricted by ulimit (CVE-2015-5162)

2016-06-28 Thread Launchpad Bug Tracker
This bug was fixed in the package python-oslo.concurrency -
3.7.1-0ubuntu1

---
python-oslo.concurrency (3.7.1-0ubuntu1) xenial; urgency=medium

  * New upstream point release (LP: #1449062).

 -- Corey Bryant   Mon, 13 Jun 2016 12:34:15 -0400

** Changed in: python-oslo.concurrency (Ubuntu Xenial)
   Status: Fix Committed => Fix Released

** Changed in: python-oslo.concurrency (Ubuntu Xenial)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1449062

Title:
  qemu-img calls need to be restricted by ulimit (CVE-2015-5162)

Status in Cinder:
  New
Status in Glance:
  In Progress
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Security Advisory:
  Confirmed
Status in python-oslo.concurrency package in Ubuntu:
  Fix Released
Status in python-oslo.concurrency source package in Xenial:
  Fix Released
Status in python-oslo.concurrency source package in Yakkety:
  Fix Released

Bug description:
  Reported via private E-mail from Richard W.M. Jones.

  It turns out the qemu image parser is not hardened against malicious
  input and can be abused to allocate an arbitrary amount of memory
  and/or dump a lot of information when used with "--output=json".

  The solution seems to be: limit qemu-img resource usage with ulimit.
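
  A minimal sketch of the resource-limit support the oslo.concurrency fix
  above ships, assuming the processutils prlimit parameter added for this
  CVE (the limit values and image name are illustrative, not the defaults
  callers chose):

  from oslo_concurrency import processutils
  from oslo_utils import units

  # Cap the address space and CPU time available to the untrusted parser.
  limits = processutils.ProcessLimits(address_space=1 * units.Gi,
                                      cpu_time=30)
  out, err = processutils.execute('qemu-img', 'info', '--output=json',
                                  'afl2.img', prlimit=limits)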

  Example of abuse:

  -- afl1.img --

  $ /usr/bin/time qemu-img info afl1.img
  image: afl1.img
  [...]
  0.13user 0.19system 0:00.36elapsed 92%CPU (0avgtext+0avgdata 
642416maxresident)k
  0inputs+0outputs (0major+156927minor)pagefaults 0swaps

  The original image is 516 bytes, but it causes qemu-img to allocate
  640 MB.

  -- afl2.img --

  $ qemu-img info --output=json afl2.img | wc -l
  589843

  This is a 200K image which causes qemu-img info to output half a
  million lines of JSON (14 MB of JSON).

  Glance runs the --output=json variant of the command.

  -- afl3.img --

  $ /usr/bin/time qemu-img info afl3.img
  image: afl3.img
  [...]
  0.09user 0.35system 0:00.47elapsed 94%CPU (0avgtext+0avgdata 
1262388maxresident)k
  0inputs+0outputs (0major+311994minor)pagefaults 0swaps

  qemu-img allocates 1.3 GB (actually, a bit more if you play with
  ulimit -v).  It appears that you could change it to allocate
  arbitrarily large amounts of RAM.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1449062/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1596829] [NEW] String interpolation should be delayed at logging calls

2016-06-28 Thread Takashi NATSUME
Public bug reported:

String interpolation should be delayed to be handled by the logging
code, rather than being done at the point of the logging call.

Wrong: LOG.debug('Example: %s' % 'bad')
Right: LOG.debug('Example: %s', 'good')

See the following guideline.

* http://docs.openstack.org/developer/oslo.i18n/guidelines.html#adding-variables-to-log-messages

The rule for it should be added to hacking checks.
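
A minimal sketch of such a check (the function name, the "N9xx" code, and
the regex are illustrative; nova's real hacking checks live in
nova/hacking/checks.py and need more careful parsing):

import re

# Flags LOG.<level>('... %s' % val): interpolation done at the call site.
LOG_MOD_RE = re.compile(r"LOG\.(debug|info|warning|error|exception|critical)"
                        r"\(.*%[^,)]*\)")

def check_delayed_string_interpolation(logical_line):
    if LOG_MOD_RE.search(logical_line):
        yield (0, "N9xx: delay string interpolation at logging calls")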

** Affects: nova
 Importance: Undecided
 Assignee: Takashi NATSUME (natsume-takashi)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Takashi NATSUME (natsume-takashi)

** Changed in: nova
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1596829

Title:
  String interpolation should be delayed at logging calls

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  String interpolation should be delayed to be handled by the logging
  code, rather than being done at the point of the logging call.

  Wrong: LOG.debug('Example: %s' % 'bad')
  Right: LOG.debug('Example: %s', 'good')

  See the following guideline.

  * http://docs.openstack.org/developer/oslo.i18n/guidelines.html#adding-variables-to-log-messages

  The rule for it should be added to hacking checks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1596829/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1576093] Re: [SRU] block migration fail with libvirt since version 1.2.17

2016-06-28 Thread Launchpad Bug Tracker
This bug was fixed in the package nova - 2:13.0.0-0ubuntu5

---
nova (2:13.0.0-0ubuntu5) xenial; urgency=medium

  * d/p/fix-block-migration.patch: Cherry pick and rebase patch from
stable/mitaka gerrit review to fix block migration (LP: #1576093).
  * Note: Skipping 0ubuntu3/4 because they were yakkety-only releases.

 -- Corey Bryant   Thu, 16 Jun 2016 11:52:04 -0400

** Changed in: nova (Ubuntu Xenial)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1576093

Title:
  [SRU] block migration fail with libvirt since version 1.2.17

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive mitaka series:
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Xenial:
  Fix Released
Status in nova source package in Yakkety:
  Fix Released

Bug description:
  [Impact]

  Try to do block migration but fail and libvirt reports that

  "Selecting disks to migrate is not implemented for tunneled migration"

  Nova:  a4e15e329f9adbcfe72fbcd6acb94f0743ad02f8

  libvirt: 1.3.1

  [Test Case]

  To reproduce:

  Use the default devstack settings and do a block migration (no shared
  instance_dir and no shared instance storage).

  [Regression Potential]

  The patch was cherry-picked from an upstream gerrit review, which was
  itself cherry-picked with little change from the master branch.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1576093/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588731] Re: net_helpers.async_ping() is unreliable

2016-06-28 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/325170
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=2dcacaae88970365350200ca58982a18e21c703d
Submitter: Jenkins
Branch: master

commit 2dcacaae88970365350200ca58982a18e21c703d
Author: Hynek Mlnarik 
Date:   Fri Jun 3 11:30:17 2016 +0200

Fix of ping usage in net_helpers.async_ping()

net_helpers.async_ping() should pass when each of the ICMP echo requests
is matched by a corresponding ICMP echo reply, and it should fail if a
response is not received for some request. The async_ping implementation
relies on the ping exit code: when ping returns nonzero exit code,
RuntimeException would be consequentially thrown and async_ping assertion
would thus fail.

Current implementation of net_helpers.async_ping() is broken due to its
usage of -c parameter of ping command and assumption that if _some_ of
the ICMP replies does not arrive, ping would return nonzero exit code.
However, Linux ping works in the way that if _at least one_ reply is
received from any number of ICMP ping requests, result code is 0
(success) and thus no RuntimeException is thrown. This commit fixes
assert_ping to be a reliable assertion guaranteeing that no ping request
stays without reply. For simple bash reproducer and more thorough
discussion of possible solutions, see the bug description.

Closes-Bug: #1588731
Change-Id: I9257b94a8ebbfaf1c4266c1f8ce3097657bacee5


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1588731

Title:
  net_helpers.async_ping() is unreliable

Status in neutron:
  Fix Released

Bug description:
  The current implementation of net_helpers.async_ping() is broken due
  to its usage of the -c parameter of ping and the expectation that if
  some of the ICMP replies do not arrive, a RuntimeException would be
  thrown. Linux ping works in such a way that if at least one reply is
  received for any number of ICMP ping requests, the result code is 0
  (success) and no RuntimeException is thrown.

  Shell reproducer of current net_helpers.async_ping() behaviour:

  ip a add 10.20.30.5/24 dev lo ; \
  ( sleep 0.5 ; ip a del 10.20.30.5/24 dev lo ; sleep 1 ; \
    ip a add 10.20.30.5/24 dev lo ; sleep 2 ; ip a del 10.20.30.5/24 dev lo ) & \
  ping 10.20.30.5 -W 1 -c 3 ; \
  echo "ping return code = $?"

  The return code is always 0 although one of the ICMP replies is lost.

  The man page suggests using the -w parameter. However, this does not
  help: when using -w, it is still possible that one ICMP reply is
  missed (even when also using -c) while ping still returns 0: e.g.
  "ping -c 3 -w 3" would send _four_ ICMP requests and receive three
  responses if e.g. the second response is missed and the other
  responses are fast enough. Because three responses would be received,
  ping would return a 0 status code even though there was a single
  packet loss, and that would lead to the false conclusion that the
  ping test passes correctly.

  Shell reproducer of net_helpers.async_ping() behaviour with -w:

  ip a add 10.20.30.5/24 dev lo ; \
  ( sleep 0.5 ; ip a del 10.20.30.5/24 dev lo ; sleep 1 ; \
    ip a add 10.20.30.5/24 dev lo ; sleep 2 ; ip a del 10.20.30.5/24 dev lo ) & \
  ping 10.20.30.5 -W 1 -c 3 -w 3 ; \
  echo "ping return code = $?"

  The return code is 0 or 1 at roughly similar rates, hence using -w is
  not an option for a reliable net_helpers.async_ping().

  Hence net_helpers.async_ping() needs to use ping only for a single
  ICMP request/reply test; only in that case is the result code
  reliable. Multiple ICMP requests/replies need to be handled in the
  code.

  This happens with ping at least from iputils-s20121221 and
  iputils-s20140519. It seems to be a ping issue, as one would expect -c
  to limit the number of ICMP requests sent. Yet neutron tests should
  account for this behaviour.
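
  A minimal sketch of that single-request approach (standard library
  only; the helper name and parameters are illustrative, not neutron's
  actual API):

  import subprocess

  def reliable_ping(dest, count=3, timeout=1):
      # One request per ping invocation: "ping -c 1" exits nonzero when
      # its single reply is missing, so every request is individually
      # verified and check_call raises CalledProcessError on the first
      # lost reply.
      for _ in range(count):
          subprocess.check_call(
              ['ping', '-c', '1', '-W', str(timeout), dest])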

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1588731/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1596859] [NEW] networking-midonet 1.0.1 stable/liberty release request

2016-06-28 Thread YAMAMOTO Takashi
Public bug reported:

Please push a 1.0.1 tag on the tip of stable/liberty,
namely commit 776dba1ef18bd9d438a85f1ae076866340c2a654.

** Affects: networking-midonet
 Importance: Medium
 Assignee: YAMAMOTO Takashi (yamamoto)
 Status: New

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: release-subproject

** Tags added: release-subproject

** Description changed:

- please push 1.0.0 tag on the tip of stable/liberty.
+ please push 1.0.1 tag on the tip of stable/liberty.
  namely, commit 776dba1ef18bd9d438a85f1ae076866340c2a654 .

** Changed in: networking-midonet
Milestone: None => 1.0.1

** Changed in: networking-midonet
   Importance: Undecided => Medium

** Changed in: networking-midonet
 Assignee: (unassigned) => YAMAMOTO Takashi (yamamoto)

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1596859

Title:
  networking-midonet 1.0.1 stable/liberty release request

Status in networking-midonet:
  New
Status in neutron:
  New

Bug description:
  Please push a 1.0.1 tag on the tip of stable/liberty,
  namely commit 776dba1ef18bd9d438a85f1ae076866340c2a654.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1596859/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1596869] [NEW] APIv3 compatibility broken in Mitaka and Liberty

2016-06-28 Thread A.Ojea
Public bug reported:

The current API documentation [1] uses the field "domain": { "id":
"default" } to select a domain.

This call works in Liberty as you can see in the following snippet:

curl -i   -H "Content-Type: application/json"   -d '
{ "auth": {
"identity": {
  "methods": ["password"],
  "password": {
"user": {
  "name": "admin",
  "domain": { "id": "default" },
  "password": "admin"
}
  }
},
"scope": {
  "project": {
"name": "admin",
"domain": { "id": "default" }
  }
}
  }
}'   http://172.17.0.3:5000/v3/auth/tokens ; echo
HTTP/1.1 201 Created
X-Subject-Token: 8e861d59fb1847a388b27ab7150f2d15
Vary: X-Auth-Token
X-Distribution: Ubuntu
Content-Type: application/json
Content-Length: 2794
X-Openstack-Request-Id: req-2fcb81ac-4adf-4d0d-85f9-41d355c0606d
Date: Tue, 28 Jun 2016 08:59:42 GMT

{"token": {"methods": ["password"], "roles": [{"id":
"b1abb292e4af4ead9a1b62b4a6e39ba4", "name": "__member__"}, {"id":
"f071d23c5131434e8823101f3b8e33db", "name": "admin"}], "expires_at":
"2016-06-28T09:59:42.646127Z", "project": {"domain": {"id": "default",
"name": "Default"}, "id": "890fc0394fe34024b62aab12fb335960", "name":
"admin"}, "catalog": [{"endpoints": [{"region_id": "regionOne", "url":
"http://172.17.0.3:35357/v2.0";, "region": "regionOne", "interface":
"admin", "id": "2f69ed69563c425ab58f4b5eee4ec8aa"}, {"region_id":
"regionOne", "url": "http://172.17.0.3:35357/v2.0";, "region":
"regionOne", "interface": "admin", "id":
"5b52d945507748e3a7000f2868f3cdad"}, {"region_id": "regionOne", "url":
"http://172.17.0.3:5000/v2.0";, "region": "regionOne", "interface":
"public", "id": "bb40e55ed8064a75babfb79ae7dbfc66"}, {"region_id":
"regionOne", "url": "http://172.17.0.3:5000/v2.0";, "region":
"regionOne", "interface": "internal", "id":
"d0b5b26e077a43c6b5bad790964f5fcc"}, {"region_id": "regionOne", "url":
"http://172.17.0.3:5000/v2.0";, "region": "regionOne", "interface":
"internal", "id": "ed802af8f93c4c54b8fc6cb300b7af6a"}, {"region_id":
"regionOne", "url": "http://172.17.0.3:5000/v2.0";, "region":
"regionOne", "interface": "public", "id":
"fd03e9bb9ebb43c896c5f61af88a81f9"}], "type": "identity", "id":
"e1f8d8c7e32d43ff91bb042565f4a3c0", "name": "keystone"}, {"endpoints":
[{"region_id": "regionOne", "url": "http://172.17.0.10:9696";, "region":
"regionOne", "interface": "public", "id":
"1094d0b04baf48da96649944385e8c5a"}, {"region_id": "regionOne", "url":
"http://172.17.0.10:9696";, "region": "regionOne", "interface": "admin",
"id": "877d4a208a85445b9421ce26121e550d"}, {"region_id": "regionOne",
"url": "http://172.17.0.10:9696";, "region": "regionOne", "interface":
"internal", "id": "92179423db2e44b2a86b878c5e616156"}, {"region_id":
"regionOne", "url": "http://172.17.0.10:9696";, "region": "regionOne",
"interface": "public", "id": "99be12d96dc24e3eb96183939f57b9e5"},
{"region_id": "regionOne", "url": "http://172.17.0.10:9696";, "region":
"regionOne", "interface": "admin", "id":
"e48d6c32b4394b5294f946ad812134b4"}, {"region_id": "regionOne", "url":
"http://172.17.0.10:9696";, "region": "regionOne", "interface":
"internal", "id": "fdc2ed6b8e5a45e694dcc0e8b8d1473c"}], "type":
"network", "id": "e58bb334e65a46af8860e811e711f79f", "name": "neutron"},
{"endpoints": [], "type": "identity", "id":
"f84ea2bba03746a3af3c73ddc4b8da74", "name": "keystone"}, {"endpoints":
[], "type": "network", "id": "ea500b49561e441daeeacab5d21c6930", "name":
"neutron"}], "extras": {}, "user": {"domain": {"id": "default", "name":
"Default"}, "id": "d1b7876ff28e4db29296797296daecfe", "name": "admin"},
"audit_ids": ["7p_bhw8tTvqAOjKRpkHE2Q"], "issued_at":
"2016-06-28T08:59:42.646167Z"}}

but it turns out that in Mitaka it fails if you use the id field with
the name of the domain:

curl -i   -H "Content-Type: application/json"   -d '
{ "auth": {
"identity": {
  "methods": ["password"],
  "password": {
"user": {
  "name": "admin",
  "domain": { "id": "default" },
  "password": "openstack"
}
  }
},
"scope": {
  "project": {
"name": "admin",
"domain": { "id": "default" }
  }
}
  }
}'   http://localhost:5000/v3/auth/tokens ; echo
HTTP/1.1 401 Unauthorized
Date: Tue, 28 Jun 2016 09:01:04 GMT
Server: Apache/2.4.7 (Ubuntu)
Vary: X-Auth-Token
X-Distribution: Ubuntu
x-openstack-request-id: req-4898044b-25d4-4b9d-96c4-d823c0107cb0
WWW-Authenticate: Keystone uri="http://localhost:5000"
Content-Length: 114
Content-Type: application/json

{"error": {"message": "The request you have made requires
authentication.", "code": 401, "title": "Unauthorized"}}

In order to make it work, you need to use name instead of id:

curl -i   -H "Content-Type: application/json"   -d '
{ "auth": {
"identity": {
  "methods": ["password"],
  "password": {
"user": {
  "name": "admin",
  "domain": { "name": "default" },
  "password": "openstack"
}
  }

[Yahoo-eng-team] [Bug 1596873] [NEW] [novncproxy] log for closing web is misleading

2016-06-28 Thread xhzhf
Public bug reported:

Description
===
When a user closes the web console, the log records "Target closed"
(via vmsg). In fact, the target is not closed.

Steps to reproduce
==
Close the web console window.

Expected result
===
The log shows something like "Webclient closed".

Actual result
=
The log shows "Target closed".

Environment
===
openstack-nova-novncproxy-13.0.0.0-1

Related code
==
file:nova/console/websocketproxy.py
function:NovaProxyRequestHandlerBase::new_websocket_client
...
# Start proxying
try:
    self.do_proxy(tsock)
except Exception:
    if tsock:
        tsock.shutdown(socket.SHUT_RDWR)
        tsock.close()
        self.vmsg(_("%(host)s:%(port)s: Target closed") %
                  {'host': host, 'port': port})
    raise
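
A minimal sketch of a less misleading message (the wording is
illustrative, not the actual nova fix; at this point the proxy only
knows the session ended, not which side closed it):

self.vmsg(_("%(host)s:%(port)s: Websocket client or target closed") %
          {'host': host, 'port': port})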

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1596873

Title:
  [novncproxy] log for closing web is misleading

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  When a user closes the web console, the log records "Target closed"
  (via vmsg). In fact, the target is not closed.

  Steps to reproduce
  ==
  Close the web console window.

  Expected result
  ===
  The log shows something like "Webclient closed".

  Actual result
  =
  The log shows "Target closed".

  Environment
  ===
  openstack-nova-novncproxy-13.0.0.0-1

  Related code
  ==
  file:nova/console/websocketproxy.py
  function:NovaProxyRequestHandlerBase::new_websocket_client
  ...
  # Start proxying
  try:
      self.do_proxy(tsock)
  except Exception:
      if tsock:
          tsock.shutdown(socket.SHUT_RDWR)
          tsock.close()
          self.vmsg(_("%(host)s:%(port)s: Target closed") %
                    {'host': host, 'port': port})
      raise

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1596873/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1533441] Re: HA router can not be deleted in L3 agent after race between HA router creating and deleting

2016-06-28 Thread LIU Yulong
This bug has come back:
http://paste.openstack.org/show/523757/

** Changed in: neutron
   Status: Fix Released => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1533441

Title:
  HA router can not be deleted in L3 agent after race between HA router
  creating and deleting

Status in neutron:
  New
Status in neutron kilo series:
  New

Bug description:
  HA router can not be deleted in L3 agent after race between HA router
  creating and deleting

  Exception:
  1. Unable to process HA router %s without HA port (HA router initialize)

  2. AttributeError: 'NoneType' object has no attribute 'config' (HA
  router deleting procedure)

  
  With the newest neutron code, I found an infinite loop in
  _safe_router_removed. Consider an HA router without an HA port placed
  in the L3 agent, usually because of the race condition.

  Infinite loop steps:
  1. an HA router deletion RPC arrives
  2. the l3 agent removes it
  3. the RouterInfo deletes its router namespace
     (self.router_namespace.delete())
  4. in HaRouter, ha_router.delete(), the AttributeError: 'NoneType' or
     some other error is raised
  5. _safe_router_removed returns False
  6. self._resync_router(update)
  7. the router namespace no longer exists, a RuntimeError is raised, go
     to 5; infinite loop between 5 and 7

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1533441/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1594385] Re: Nova does not report signature verification failure

2016-06-28 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/332802
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=f39e660baf5543fa1acc95c106fd4d98a69e69a9
Submitter: Jenkins
Branch: master

commit f39e660baf5543fa1acc95c106fd4d98a69e69a9
Author: dane-fichter 
Date:   Mon Jun 27 12:55:02 2016 -0700

Improve image signature verification failure notification

This change modifies the exception handling when Nova's
image signature verification fails so that the build is
aborted and instance's fault shows the error.

Change-Id: I05d877fe92593edaaa8b93b87b4b787827cae8f0
Closes-bug: #1594385


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1594385

Title:
  Nova does not report signature verification failure

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When verifying Glance image signatures, Nova does not report a
  meaningful error message to the end user. Instead, Nova gives the end
  user a "No hosts available" message.

  How to verify this behavior:
  - Enable Nova's verify_glance_signatures configuration flag
  - Upload an image to Glance with incorrect or missing signature metadata
  - Attempt to boot an instance of this image via the Nova CLI

  You should get an error message with the text "No valid host was
  found. There are not enough hosts available". This does not describe
  the failure and will lead the end user to think there is an issue with
  the storage on the compute node.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1594385/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1568708] Re: translation sync broken due to wrong usage of venv

2016-06-28 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/330871
Committed: 
https://git.openstack.org/cgit/openstack/networking-midonet/commit/?id=81c01d5041ef8f0ae04a6bd50d516050272bb8a6
Submitter: Jenkins
Branch: master

commit 81c01d5041ef8f0ae04a6bd50d516050272bb8a6
Author: YAMAMOTO Takashi 
Date:   Fri Jun 17 12:34:08 2016 +0900

tox.ini: Stop using zuul-cloner for venv

This should fix translation job.

Closes-Bug: #1568708
Change-Id: Ife4779e763753839ef90e6657cfaf2823efc1851


** Changed in: networking-midonet
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1568708

Title:
  translation sync broken due to wrong usage of venv

Status in networking-midonet:
  Fix Released
Status in networking-ovn:
  Confirmed
Status in neutron:
  Fix Committed
Status in Sahara:
  Fix Released
Status in Sahara mitaka series:
  Fix Released

Bug description:
  Post jobs cannot use zuul-cloner and currently have no access to
  upper-constraints. See how nova or neutron itself handles this.

  Right now the sync with translations fails:

  https://jenkins.openstack.org/job/neutron-fwaas-propose-translation-update/119/console

  Please fix venv tox environment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1568708/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1587858] Re: objects: added 'os_secure_boot' property to ImageMetaProps object

2016-06-28 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/331504
Committed: 
https://git.openstack.org/cgit/openstack/openstack-manuals/commit/?id=a5e92ee40e42f0c422f02856a0a60aa5d62bb035
Submitter: Jenkins
Branch: master

commit a5e92ee40e42f0c422f02856a0a60aa5d62bb035
Author: tanyy 
Date:   Mon Jun 20 10:20:50 2016 +0800

[cli-reference] add 'os_secure_boot' property to ImageMetaProps

Change-Id: I34b130525ee6a668d8ae95749628730eeccbbe45
Closes-Bug: #1587858


** Changed in: openstack-manuals
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1587858

Title:
  objects: added 'os_secure_boot' property to ImageMetaProps object

Status in OpenStack Compute (nova):
  Invalid
Status in openstack-manuals:
  Fix Released

Bug description:
  https://review.openstack.org/237593
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/nova" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit 3b0c9a3f14e11737fff47146b512d93116654234
  Author: Simona Iuliana Toader 
  Date:   Tue Oct 20 16:52:34 2015 +0300

  objects: added 'os_secure_boot' property to ImageMetaProps object
  
  Secure Boot feature will be enabled by setting the "os_secure_boot"
  image property to "required". Other options can be: "disabled" or
  "optional".
  
  DocImpact
  
  Partially Implements: blueprint hyper-v-uefi-secureboot
  
  Change-Id: Id53f934fccb020dcc6bae9e13f53cbb3df3dcd92

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1587858/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1595832] Re: lbaasv2:loadbalancer-create with tenant-id is not effective

2016-06-28 Thread Rossella Sblendido
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1595832

Title:
  lbaasv2:loadbalancer-create with tenant-id is not effective

Status in neutron:
  Invalid

Bug description:
  In the admin tenant, create a loadbalancer with tenant demo:
  +----------------------------------+----------+---------+
  |                id                |   name   | enabled |
  +----------------------------------+----------+---------+
  | 6403670bcb0f45cba4cb732a9a936da4 |  admin   |   True  |
  | f0e4abc0e4564b9db5a8e6f23c7244b9 |   demo   |   True  |
  | 1066af750dc54bcc985f4d6217aad3d4 | services |   True  |
  +----------------------------------+----------+---------+
  [root@CrtlNode247 ~(keystone_admin)]# neutron lbaas-loadbalancer-create RF_INNER_NET_1subnet_net --tenant-id f0e4abc0e4564b9db5a8e6f23c7244b9 --name test
  Created a new loadbalancer:
  +---------------------+--------------------------------------+
  | Field               | Value                                |
  +---------------------+--------------------------------------+
  | admin_state_up      | True                                 |
  | description         |                                      |
  | id                  | 228237fb-86e1-434c-99aa-c587f07f9c59 |
  | listeners           |                                      |
  | name                | test                                 |
  | operating_status    | OFFLINE                              |
  | provider            | zxveglb                              |
  | provisioning_status | PENDING_CREATE                       |
  | tenant_id           | 6403670bcb0f45cba4cb732a9a936da4     |
  | vip_address         | 192.0.1.2                            |
  | vip_port_id         | 9105a86e-fa7f-4fbd-9591-6eddda9d1938 |
  | vip_subnet_id       | f432f80a-c265-47c8-ab36-a522122892bf |
  +---------------------+--------------------------------------+
  The loadbalancer's tenant_id is 6403670bcb0f45cba4cb732a9a936da4, not
  the input f0e4abc0e4564b9db5a8e6f23c7244b9.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1595832/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1595762] Re: HTTPS connection failing for Docker >= 1.10

2016-06-28 Thread Rossella Sblendido
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1595762

Title:
  HTTPS connection failing for Docker >= 1.10

Status in neutron:
  Invalid

Bug description:
  We experience problems with outgoing HTTPS connections from Docker
  containers when running in OpenStack VMs.

  We assume this could be a bug in OpenStack because:
  - Ubuntu 14, 16 and CoreOS show the same problems
  - While there are no problems with Docker 1.6.2 and 1.9.1, it fails with 
Docker 1.10 and 1.11
  - The same containers work outside OpenStack
  - We found similar problem descriptions in the web that occured on other 
OpenStack providers

  The issue can easily be reproduced with:

  1.) Installing a docker version >= 1.10
  2.) docker run -it ubuntu apt-get update

  Expected output: Ubuntu updates its package list
  Actual output: Download does not start and runs into a timeout

  The same problem seems to occur with wget and curl and our Java
  application.

  Please note that plain HTTP works as expected, so does issuing the
  Https requests from the Docker host machine.

  Disabling network virtualization with Docker flag --net="host" fixes
  the problems with wget, curl and apt-get, unfortunately not with the
  Java app we're trying to deploy in OpenStack.

  For our current project this is actually a blocker since CoreOS comes
  bundled with a recent Docker version which is not trivial to
  downgrade.

  I can't see any version information in the Horizon interface of our
  provider, however I think I heard they are using Mitaka release.

  Links:
  - Related issue at Docker: https://github.com/docker/docker/issues/20178
  - ServerFault question by me: 
http://serverfault.com/questions/785768/https-request-fails-in-docker-1-10-with-virtualized-network
  - StackOverflow question by someone else: 
http://stackoverflow.com/questions/35300497/docker-container-not-connecting-to-https-endpoints

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1595762/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1596813] Re: stable/mitaka branch creation request for networking-midonet

2016-06-28 Thread Rossella Sblendido
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1596813

Title:
  stable/mitaka branch creation request for networking-midonet

Status in networking-midonet:
  New

Bug description:
  Please cut stable/mitaka branch for networking-midonet
  on commit 797a2a892e08f3e8162f5f168a7c5e3420f961bc.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1596813/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1595922] Re: Make cPickle py3 compatible

2016-06-28 Thread Ji.Wei
** No longer affects: swift

** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1595922

Title:
  Make cPickle py3 compatible

Status in Bandit:
  New
Status in Monasca:
  New
Status in python-rackclient:
  New
Status in swiftonfile:
  In Progress
Status in swiftonhpss:
  New
Status in taskflow:
  In Progress

Bug description:
  In Python 3, the cPickle module was removed; you can use the pickle
  module instead. The C-accelerated implementation is used transparently
  when it is available.
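
  A minimal compatibility sketch (plain stdlib; six.moves.cPickle offers
  the same fallback for projects already using six):

  try:
      import cPickle as pickle  # Python 2: explicit C-accelerated module
  except ImportError:
      import pickle  # Python 3: acceleration is built in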

To manage notifications about this bug go to:
https://bugs.launchpad.net/bandit/+bug/1595922/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1596922] [NEW] instance uuid not cleared in an ironic node

2016-06-28 Thread Yushiro FURUKAWA
Public bug reported:

Description
===
In a nova-with-ironic environment, the "instance UUID" still remains
set if nova boot hits an error. As a result, the ironic node can
neither be deleted via the API nor used to deploy a baremetal
instance.


Steps to reproduce
==
1. create an ironic-node
2. create an ironic-port
3. nova boot for ironic instance with following illegal instance name.

   $ nova boot --flavor my-baremetal-agent --image $MY_IMAGE_UUID \
   --key-name default test3.141592 --nic net-id=$NET_UUID


Expected result
===
Nova gets an error and the state changes to "ERROR". Then the tenant
user deletes nova's instance ($nova delete ). After that, the
"instance UUID" of the ironic node should be cleared.


Actual result
=
After $nova delete , the "instance UUID" still remains in the
ironic node database.

$ ironic node-delete 0ea7c2e3-e5be-4052-85ac-a7b1adf0f30b
Failed to delete node 0ea7c2e3-e5be-4052-85ac-a7b1adf0f30b: Node 
0ea7c2e3-e5be-4052-85ac-a7b1adf0f30b is associated with instance 
84911c1f-976b-4428-a509-8ab6cf04182a. (HTTP 409)


Environment
===
* Devstack all-in-one
* Nova's source code is from May 18.

commit fe8a119e8d80de35d7f99e0c1d9a9e5095840146
Merge: b56d861 6f2a46f
Author: Jenkins 
Date:   Wed May 18 23:33:00 2016 +

Merge "Remove unused base_options param from
_get_image_defined_bdms"

2. Which hypervisor did you use?
   ironic

3. Which networking type did you use?
   (For example: nova-network, Neutron with OpenVSwitch, ...)
neutron + ML2 + openvswitch driver + linuxbridge driver

Logs

2016-06-28 21:03:04.670 ERROR nova.scheduler.utils [req-7db0e0e5-5d18-4a5a-9293-52dd2e0f4351 admin admin] [instance: 84911c1f-976b-4428-a509-8ab6cf04182a] Error from last host: furukawa-dev-ironic (node 0ea7c2e3-e5be-4052-85ac-a7b1adf0f30b): [u'Traceback (most recent call last):\n', u'  File "/opt/stack/nova/nova/compute/manager.py", line 1749, in _do_build_and_run_instance\nfilter_properties)\n', u'  File "/opt/stack/nova/nova/compute/manager.py", line 1939, in _build_and_run_instance\ninstance_uuid=instance.uuid, reason=six.text_type(e))\n', u"RescheduledException: Build of instance 84911c1f-976b-4428-a509-8ab6cf04182a was re-scheduled: Invalid input for dns_name. Reason: 'test3.141592' not a valid PQDN or FQDN. Reason: TLD '141592' must not be all numeric.\n"]
2016-06-28 21:03:04.689 DEBUG oslo_messaging._drivers.amqpdriver [req-7db0e0e5-5d18-4a5a-9293-52dd2e0f4351 admin admin] sending reply msg_id: dea4917aad334ffda701e6cd23cf6a4c reply queue: reply_a163275e9821450cb6494257a3b9629f time elapsed: 0.0230762520805s from (pid=5506) _send_reply /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:74
2016-06-28 21:03:04.688 DEBUG oslo_messaging._drivers.amqpdriver [req-7db0e0e5-5d18-4a5a-9293-52dd2e0f4351 admin admin] CALL msg_id: 51218c1cd9ad403798f2d830d9d0abb3 exchange 'nova' topic 'scheduler' from (pid=5507) _send /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:450
2016-06-28 21:03:04.741 DEBUG oslo_messaging._drivers.amqpdriver [-] received reply msg_id: 51218c1cd9ad403798f2d830d9d0abb3 from (pid=5507) __call__ /usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:298
2016-06-28 21:03:04.743 WARNING nova.scheduler.utils [req-7db0e0e5-5d18-4a5a-9293-52dd2e0f4351 admin admin] Failed to compute_task_build_instances: No valid host was found. There are not enough hosts available.
Traceback (most recent call last):

  File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 200, in inner
    return func(*args, **kwargs)

  File "/opt/stack/nova/nova/scheduler/manager.py", line 104, in select_destinations
    dests = self.driver.select_destinations(ctxt, spec_obj)

  File "/opt/stack/nova/nova/scheduler/filter_scheduler.py", line 74, in select_destinations
    raise exception.NoValidHost(reason=reason)

NoValidHost: No valid host was found. There are not enough hosts available.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: ironic

** Attachment added: "Operation log"
   https://bugs.launchpad.net/bugs/1596922/+attachment/4691573/+files/opelog.txt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1596922

Title:
  instance uuid not cleared in an ironic node

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  In a nova-with-ironic environment, the "instance UUID" still remains
  set if nova boot hits an error. As a result, the ironic node can
  neither be deleted via the API nor used to deploy a baremetal
  instance.

  
  Steps to reproduce
  ==
  1. create an ironic-node
  2. create an ironic-port
  3. nova boot for ironic instance with following illegal instance name.

 $ nova boot --flavor my-baremetal-agent --image $MY_IMAGE_UUID \

[Yahoo-eng-team] [Bug 1596813] Re: stable/mitaka branch creation request for networking-midonet

2016-06-28 Thread YAMAMOTO Takashi
Rossella,

this bug is following the procedure documented in
sub_project_guidelines.rst and bugs.rst.

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1596813

Title:
  stable/mitaka branch creation request for networking-midonet

Status in networking-midonet:
  New
Status in neutron:
  New

Bug description:
  Please cut stable/mitaka branch for networking-midonet
  on commit 797a2a892e08f3e8162f5f168a7c5e3420f961bc.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1596813/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1596933] [NEW] IntegrityError while archiving if consoles are not deleted

2016-06-28 Thread Abhishek Kekane
Public bug reported:

On deleting an instance, its associated console entries will not be
deleted, which will cause an "IntegrityError" while archiving the
deleted records from the database.

1. Create an instance.

$ nova boot test-1 --flavor 1 --image cirros-0.3.4-x86_64-uec

2. Create an entry in console table manually with the above created
instance_uuid.

insert into consoles (created_at, updated_at, instance_name, password,
port, instance_uuid, deleted) values ('2016-06-15 06:00:13', '2016-06-15
06:00:13', 'test-1', '12345', 8080, '326757a3-66da-455f-8ffa-
29775de1d5f5', 0);


3. Delete instance

$ nova delete 326757a3-66da-455f-8ffa-29775de1d5f5


4. Call db archive which raises following error

$ nova-manage db archive_deleted_rows 1000

2016-06-15 14:23:27.837 WARNING nova.db.sqlalchemy.api [-]
IntegrityError detected when archiving table instances:
(pymysql.err.IntegrityError) (1451, u'Cannot delete or update a parent
row: a foreign key constraint fails (`nova`.`consoles`, CONSTRAINT
`consoles_instance_uuid_fkey` FOREIGN KEY (`instance_uuid`) REFERENCES
`instances` (`uuid`))') [SQL: u'DELETE FROM instances WHERE instances.id
in (SELECT T1.id FROM (SELECT instances.id \nFROM instances \nWHERE
instances.deleted != %(deleted_1)s ORDER BY instances.id \n LIMIT
%(param_1)s) as T1)'] [parameters: {u'param_1': 979, u'deleted_1': 0}]

Note:
I have added the db entry manually, as the console service is not
enabled in my environment.
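
A minimal illustration of the direction a fix could take: handle child
tables before their parents so the foreign key constraint cannot fire.
The direct DELETE and the connection URL below are illustrative only;
nova's archiver actually moves rows to shadow tables rather than
deleting them.

from sqlalchemy import create_engine, text

engine = create_engine('mysql+pymysql://nova:secret@localhost/nova')
with engine.begin() as conn:
    # Remove (or archive) console rows referencing soft-deleted
    # instances before touching the instances table itself.
    conn.execute(text(
        "DELETE FROM consoles WHERE instance_uuid IN "
        "(SELECT uuid FROM instances WHERE deleted != 0)"))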

** Affects: nova
 Importance: Undecided
 Assignee: Bhagyashri Shewale (bhagyashri-shewale)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1596933

Title:
  IntegrityError while archiving if consoles are not deleted

Status in OpenStack Compute (nova):
  New

Bug description:
  On deleting an instance, its associated console entries will not be
  deleted, which will cause an "IntegrityError" while archiving the
  deleted records from the database.

  1. Create an instance.

  $ nova boot test-1 --flavor 1 --image cirros-0.3.4-x86_64-uec

  2. Create an entry in console table manually with the above created
  instance_uuid.

  insert into consoles (created_at, updated_at, instance_name, password,
  port, instance_uuid, deleted) values ('2016-06-15 06:00:13',
  '2016-06-15 06:00:13', 'test-1', '12345', 8080, '326757a3-66da-455f-
  8ffa-29775de1d5f5', 0);

  
  3. Delete instance

  $ nova delete 326757a3-66da-455f-8ffa-29775de1d5f5

  
  4. Call db archive which raises following error

  $ nova-manage db archive_deleted_rows 1000

  2016-06-15 14:23:27.837 WARNING nova.db.sqlalchemy.api [-]
  IntegrityError detected when archiving table instances:
  (pymysql.err.IntegrityError) (1451, u'Cannot delete or update a parent
  row: a foreign key constraint fails (`nova`.`consoles`, CONSTRAINT
  `consoles_instance_uuid_fkey` FOREIGN KEY (`instance_uuid`) REFERENCES
  `instances` (`uuid`))') [SQL: u'DELETE FROM instances WHERE
  instances.id in (SELECT T1.id FROM (SELECT instances.id \nFROM
  instances \nWHERE instances.deleted != %(deleted_1)s ORDER BY
  instances.id \n LIMIT %(param_1)s) as T1)'] [parameters: {u'param_1':
  979, u'deleted_1': 0}]

  Note:
  I have added the db entry manually, as the console service is not
  enabled in my environment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1596933/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1576093] Re: [SRU] block migration fail with libvirt since version 1.2.17

2016-06-28 Thread James Page
This bug was fixed in the package nova - 2:13.0.0-0ubuntu5~cloud0
---

 nova (2:13.0.0-0ubuntu5~cloud0) trusty-mitaka; urgency=medium
 .
   * New update for the Ubuntu Cloud Archive.
 .
 nova (2:13.0.0-0ubuntu5) xenial; urgency=medium
 .
   * d/p/fix-block-migration.patch: Cherry pick and rebase patch from
 stable/mitaka gerrit review to fix block migration (LP: #1576093).
   * Note: Skipping 0ubuntu3/4 because they were yakkety-only releases.


** Changed in: cloud-archive/mitaka
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1576093

Title:
  [SRU] block migration fail with libvirt since version 1.2.17

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Xenial:
  Fix Released
Status in nova source package in Yakkety:
  Fix Released

Bug description:
  [Impact]

  Try to do block migration but fail and libvirt reports that

  "Selecting disks to migrate is not implemented for tunneled migration"

  Nova:  a4e15e329f9adbcfe72fbcd6acb94f0743ad02f8

  libvirt: 1.3.1

  [Test Case]

  To reproduce:

  Use the default devstack settings and do a block migration (no shared
  instance_dir and no shared instance storage).

  [Regression Potential]

  The patch was cherry-picked from an upstream gerrit review, which was
  itself cherry-picked with little change from the master branch.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1576093/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499177] Re: Performance: L2 agent takes too much time to refresh sg rules

2016-06-28 Thread OpenStack Infra
Fix proposed to branch: master
Review: https://review.openstack.org/335037

** Changed in: neutron
   Status: Expired => In Progress

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1499177

Title:
  Performance: L2 agent takes too much time to refresh sg rules

Status in neutron:
  In Progress

Bug description:
  This issue introduces a performance problem for the L2 agents
  (both the LinuxBridge and OVS agents) on a compute node when there
  are lots of networks and instances on that node (e.g. 500 instances).

  The performance problem shows up in two aspects:

  1. When the LinuxBridge agent service starts up (this seems to happen
  only in the LinuxBridge agent, not the OVS agent), I found two methods
  that take too much time:

     1.1 get_interface_by_ip(): we need to find the interface that is
  assigned the "local ip" defined in the configuration file, and check
  whether this interface supports "vxlan" or not. This method iterates
  over every interface on the compute node and executes "ip link show
  [interface] to [local ip]" to judge the result. There should be a
  faster way (see the sketch at the end of this bug description).

     1.2 prepare_port_filter(): in this method we should make sure the
  ipsets are created correctly. But this method executes too many
  "ipset" commands and takes too much time.

  2. When devices' sg rules are changed, the L2 agent should refresh
  the firewalls.

      2.1 refresh_firewall(): this method calls "modify_rules" to make
  the rules predictable, but it also takes too much time.

  It would be very beneficial for large-scale networks if this
  performance problem were fixed or optimized.
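
  A minimal sketch of a faster lookup for 1.1, using iproute2's address
  filter so the kernel does the matching in a single call (the helper
  name is illustrative, not neutron's actual API):

  import subprocess

  def get_interface_by_ip(ip):
      # One "ip" invocation instead of iterating every interface;
      # each output line looks like:
      # "2: eth0    inet 10.0.0.5/24 brd ... scope global eth0 ..."
      out = subprocess.check_output(['ip', '-o', 'addr', 'show', 'to', ip])
      for line in out.decode().splitlines():
          return line.split()[1]
      return None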

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1499177/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1596124] Re: Python3 do not use dict.iteritems dict.iterkeys dict.itervalues

2016-06-28 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/334126
Committed: 
https://git.openstack.org/cgit/openstack/glance_store/commit/?id=19d8df31fdbb88439fc7876ea474b0abd465b240
Submitter: Jenkins
Branch: master

commit 19d8df31fdbb88439fc7876ea474b0abd465b240
Author: yuyafei 
Date:   Sat Jun 25 11:52:29 2016 +0800

Replace dict.iterkeys with six.iterkeys to make PY3 compatible

Python3 do not use dict.iterkeys, which would raise AttributeError:
'dict' object has no attribute 'iterkeys'.

Change-Id: I97e320eac9f2f0b2cb5cf34a1d3fc57e80e440ed
Closes-Bug: #1596124


** Changed in: glance-store
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1596124

Title:
  Python3 do not use dict.iteritems dict.iterkeys dict.itervalues

Status in Cinder:
  In Progress
Status in glance_store:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress
Status in python-glanceclient:
  In Progress
Status in python-heatclient:
  Fix Released
Status in OpenStack Object Storage (swift):
  New
Status in Tracker:
  New

Bug description:
  Python 3 does not have dict.iteritems, dict.iterkeys, or
  dict.itervalues; using them raises AttributeError: 'dict' object has
  no attribute 'iterkeys'.
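
  A minimal sketch of the portable replacements (using six, as the
  glance_store fix above does; the dict is illustrative):

  import six

  d = {'a': 1, 'b': 2}

  for key, value in six.iteritems(d):   # instead of d.iteritems()
      print(key, value)
  for key in six.iterkeys(d):           # or simply: for key in d:
      print(key)
  for value in six.itervalues(d):       # instead of d.itervalues()
      print(value)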

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1596124/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1596602] Re: Instance Snapshot failure: "Unable to set 'kernel_id' to 'None'. Reason: None is not of type u'string'"

2016-06-28 Thread Ben VanHavermaet
** This bug is no longer a duplicate of bug 1592808
   Snapshot failed during inconsistencies in glance v2 image schema

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1596602

Title:
  Instance Snapshot failure: "Unable to set 'kernel_id' to 'None'.
  Reason: None is not of type u'string'"

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  Attempted to take a snapshot of a suspended server through the CLI, but the 
command failed. (Nova log stack trace appended below)

  Steps to reproduce
  ==
  1. Created a server instance:
 $ nova boot --image  --flavor m1.tiny snapshotvm
  2. Suspended the server:
 $ openstack server suspend 
  3. Attempt to create server snapshot:
 $ nova image-create snapshotvm snapshotimage --poll

  Expected result
  ===
  Expected to have a snapshot of my instance created in the image list.

  Actual result
  =
  Received following error output from the image create command:

  # nova image-create snapshotvm snapshotimage --poll

  Server snapshotting... 25% complete
  ERROR (NotFound): Image not found. (HTTP 404) (Request-ID: 
req-4670eba3-a0d5-4814-b0a8-4aba37a1dd3a)

  Environment
  ===
  1. Running from master level of OpenStack

  2. Using KVM virtualization on Ubuntu 14.04:
  # kvm --version
  QEMU emulator version 2.0.0 (Debian 2.0.0+dfsg-2ubuntu1.24), Copyright (c) 
2003-2008 Fabrice Bellard

  2. Which storage type did you use?
 LVM

  3. Which networking type did you use?
 Neutron


  2016-06-27 15:04:21.361 10781 INFO nova.compute.manager 
[req-f393be05-96c2-4d52-bfc1-0c5b644c553e d672d690e856437196a748ebc85abb41 
b9b13412e4fd4fd6a6016368b720309e - - -] [instance: 
ab71489a-0716-404b-808a-165f2a85af74] Successfully reverted task state from 
image_uploading on failure for instance.
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server 
[req-f393be05-96c2-4d52-bfc1-0c5b644c553e d672d690e856437196a748ebc85abb41 
b9b13412e4fd4fd6a6016368b720309e - - -] Exception during message handling
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server   File 
"/opt/bbc/openstack-11.0-master/nova/local/lib/python2.7/site-packages/oslo_messaging/rpc/server.py",
 line 133, in _process_incoming
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server   File 
"/opt/bbc/openstack-11.0-master/nova/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 150, in dispatch
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server   File 
"/opt/bbc/openstack-11.0-master/nova/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 121, in _do_dispatch
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server   File 
"/opt/bbc/openstack-11.0-master/nova/local/lib/python2.7/site-packages/nova/exception.py",
 line 104, in wrapped
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server payload)
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server   File 
"/opt/bbc/openstack-11.0-master/nova/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 221, in __exit__
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server 
self.force_reraise()
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server   File 
"/opt/bbc/openstack-11.0-master/nova/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 197, in force_reraise
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb)
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server   File 
"/opt/bbc/openstack-11.0-master/nova/local/lib/python2.7/site-packages/nova/exception.py",
 line 83, in wrapped
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server return 
f(self, context, *args, **kw)
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server   File 
"/opt/bbc/openstack-11.0-master/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 183, in decorated_function
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server 
LOG.warning(msg, e, instance=instance)
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server   File 
"/opt/bbc/openstack-11.0-master/nova/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 221, in __exit__
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server 
self.force_reraise()
  2016-06-27 15:04:21.

[Yahoo-eng-team] [Bug 1596957] [NEW] Cannot resize down instance booted from volume backed snapshot with ephemeral

2016-06-28 Thread Feodor Tersin
Public bug reported:

If an instance is booted from a volume backed snapshot with
ephemeral(s), it cannot be resized down to a flavor whose disk size is
smaller than the original flavor's disk size.

Steps to reproduce:

1 Prepare flavors
$ nova flavor-create t1.nano-2-e auto 64 2 1 --ephemeral 1
$ nova flavor-create t1.nano-1-e auto 64 1 1 --ephemeral 1

2 Prepare a volume backed snapshot
$ cinder create 2 --image-id cirros-0.3.4-x86_64-disk --name boot-vol
$ nova boot --boot-volume <volume-id> --flavor t1.nano-2-e inst-1
$ nova image-create inst-1 snap-1

3 Boot an instance from the snapshot
$ nova boot --image snap-1 --flavor t1.nano-2-e inst-2

4 Resize the instance
$ nova resize inst-2 t1.nano-1-e

5 Check status of the instance
$ nova list

Expected status: VERIFY_RESIZE
Actual status: ACTIVE

Environment:
DevStack on current Newton OpenStack (latest Nova commit
a7efa47ec91479c6cc77087cd5b86d2bbf5a0654).

n-cpu.log fragment:

[req-4de3bbd2-a717-4f76-bca0-607573ae46b9 admin admin] Exception during message 
handling
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 
133, in _process_incoming
res = self.dispatcher.dispatch(message)
...
  File "/opt/stack/nova/nova/compute/manager.py", line 6464, in 
_error_out_instance_on_exception
raise error.inner_exception
ResizeError: Resize error: Unable to resize disk down.
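
For context, a minimal sketch of the kind of guard that produces this
error (names and structure are assumptions for illustration, not Nova's
actual code). For a volume-backed instance the root disk lives in
Cinder, so only the ephemeral disk size should be compared:

class ResizeError(Exception):
    pass

def check_can_resize_disk(old_flavor, new_flavor, is_volume_backed):
    # Comparing the flavor's root_gb even for volume-backed instances
    # is what would break the scenario reported above.
    if (not is_volume_backed
            and new_flavor['root_gb'] < old_flavor['root_gb']):
        raise ResizeError("Resize error: Unable to resize disk down.")
    if new_flavor['ephemeral_gb'] < old_flavor['ephemeral_gb']:
        raise ResizeError("Resize error: Unable to resize disk down.")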

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1596957

Title:
  Cannot resize down instance booted from volume backed snapshot with
  ephemeral

Status in OpenStack Compute (nova):
  New

Bug description:
  If an instance is booted from a volume backed snapshot with
  ephemeral(s), it cannot be resized down to a flavor whose disk size is
  less than the original flavor's disk size.

  Steps to reproduce:

  1 Prepare flavors
  $ nova flavor-create t1.nano-2-e auto 64 2 1 --ephemeral 1
  $ nova flavor-create t1.nano-1-e auto 64 1 1 --ephemeral 1

  2 Prepare a volume backed snapshot
  $ cinder create 2 --image-id cirros-0.3.4-x86_64-disk --name boot-vol
  $ nova boot --boot-volume <volume-id> --flavor t1.nano-2-e inst-1
  $ nova image-create inst-1 snap-1

  3 Boot an instance from the snapshot
  $ nova boot --image snap-1 --flavor t1.nano-2-e inst-2

  4 Resize the instance
  $ nova resize inst-2 t1.nano-1-e

  5 Check status of the instance
  $ nova list

  Expected status: VERIFY_RESIZE
  Actual status: ACTIVE

  Environment:
  DevStack on current Newton OpenStack (latest Nova commit
  a7efa47ec91479c6cc77087cd5b86d2bbf5a0654).

  n-cpu.log fragment:
  
  [req-4de3bbd2-a717-4f76-bca0-607573ae46b9 admin admin] Exception during 
message handling
  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 
133, in _process_incoming
  res = self.dispatcher.dispatch(message)
  ...
File "/opt/stack/nova/nova/compute/manager.py", line 6464, in 
_error_out_instance_on_exception
  raise error.inner_exception
  ResizeError: Resize error: Unable to resize disk down.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1596957/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1596976] [NEW] optimize refresh firewall on ipset member update

2016-06-28 Thread venkata anil
Public bug reported:

Before ipset support, a port created an explicit firewall rule for each
other port that is a member of the same security group, i.e. a port's
firewall rules without ipset:
-A neutron-openvswi-i92605eaf-b -s 192.168.83.17/32 -j RETURN
-A neutron-openvswi-i92605eaf-b -s 192.168.83.18/32 -j RETURN
-A neutron-openvswi-i92605eaf-b -s 192.168.83.15/32 -j RETURN
with ipset:
-A neutron-openvswi-i92605eaf-b -m set --match-set ${ipset_name} src -j RETURN

With ipset, when a new port comes up on a remote OVS agent, only the
kernel ipset has to be updated on the local agent; no firewall rules
need to change. When a port on a remote agent is deleted, it has to be
removed from the local agent's ipset, and the corresponding connection
tracking entries have to be deleted. In neither scenario should the
agent update its firewall rules.

 But the current implementation tries to update the firewall rules
anyway (removing all in-memory firewall rules and creating them again,
even though no iptables rules actually change on the system). This
consumes a lot of the agent's time. We can optimize this by not
updating the in-memory firewall rules in this scenario, making the
firewall refresh for securitygroup-member-update faster.
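
As a sketch of the optimization (assumed names, not neutron's actual
agent code), a member update should only touch the kernel ipset, while
a rule update still triggers the full refresh:

def security_group_member_updated(ipset_manager, set_name, member_ips):
    # Refreshing the ipset is enough: the single iptables rule that
    # references the set (-m set --match-set <name> src -j RETURN)
    # does not change when members come and go.
    ipset_manager.set_members(set_name, 'IPv4', member_ips)

def security_group_rule_updated(firewall):
    # Only actual rule changes need the expensive in-memory rebuild.
    firewall.refresh_firewall()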

** Affects: neutron
 Importance: Undecided
 Assignee: venkata anil (anil-venkata)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => venkata anil (anil-venkata)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1596976

Title:
  optimize refresh firewall  on ipset member update

Status in neutron:
  New

Bug description:
  Before ipset support, a port created an explicit firewall rule for each
  other port that is a member of the same security group, i.e. a port's
  firewall rules without ipset:
  -A neutron-openvswi-i92605eaf-b -s 192.168.83.17/32 -j RETURN
  -A neutron-openvswi-i92605eaf-b -s 192.168.83.18/32 -j RETURN
  -A neutron-openvswi-i92605eaf-b -s 192.168.83.15/32 -j RETURN
  with ipset:
  -A neutron-openvswi-i92605eaf-b -m set --match-set ${ipset_name} src -j RETURN

  With ipset, when a new port comes up on a remote OVS agent, only the
  kernel ipset has to be updated on the local agent; no firewall rules
  need to change. When a port on a remote agent is deleted, it has to be
  removed from the local agent's ipset, and the corresponding connection
  tracking entries have to be deleted. In neither scenario should the
  agent update its firewall rules.

  But the current implementation tries to update the firewall rules
  anyway (removing all in-memory firewall rules and creating them again,
  even though no iptables rules actually change on the system). This
  consumes a lot of the agent's time. We can optimize this by not
  updating the in-memory firewall rules in this scenario, making the
  firewall refresh for securitygroup-member-update faster.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1596976/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1596602] Re: Instance Snapshot failure: "Unable to set 'kernel_id' to 'None'. Reason: None is not of type u'string'"

2016-06-28 Thread Ben VanHavermaet
** Project changed: nova => python-glanceclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1596602

Title:
  Instance Snapshot failure: "Unable to set 'kernel_id' to 'None'.
  Reason: None is not of type u'string'"

Status in python-glanceclient:
  New

Bug description:
  Description
  ===
  Attempted to take a snapshot of a suspended server through the CLI, but the 
command failed. (Nova log stack trace appended below)

  Steps to reproduce
  ==
  1. Created a server instance:
 $ nova boot --image <image-id> --flavor m1.tiny snapshotvm
  2. Suspended the server:
 $ openstack server suspend <server-id>
  3. Attempt to create server snapshot:
 $ nova image-create snapshotvm snapshotimage --poll

  Expected result
  ===
  Expected to have a snapshot of my instance created in the image list.

  Actual result
  =
  Received following error output from the image create command:

  # nova image-create snapshotvm snapshotimage --poll

  Server snapshotting... 25% complete
  ERROR (NotFound): Image not found. (HTTP 404) (Request-ID: 
req-4670eba3-a0d5-4814-b0a8-4aba37a1dd3a)

  Environment
  ===
  1. Running from master level of Openstack

  2. Using KVM virtualization on Ubuntu 14.04:
  # kvm --version
  QEMU emulator version 2.0.0 (Debian 2.0.0+dfsg-2ubuntu1.24), Copyright (c) 
2003-2008 Fabrice Bellard

  3. Which storage type did you use?
 LVM

  4. Which networking type did you use?
 Neutron


  2016-06-27 15:04:21.361 10781 INFO nova.compute.manager 
[req-f393be05-96c2-4d52-bfc1-0c5b644c553e d672d690e856437196a748ebc85abb41 
b9b13412e4fd4fd6a6016368b720309e - - -] [instance: 
ab71489a-0716-404b-808a-165f2a85af74] Successfully reverted task state from 
image_uploading on failure for instance.
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server 
[req-f393be05-96c2-4d52-bfc1-0c5b644c553e d672d690e856437196a748ebc85abb41 
b9b13412e4fd4fd6a6016368b720309e - - -] Exception during message handling
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server Traceback (most 
recent call last):
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server   File 
"/opt/bbc/openstack-11.0-master/nova/local/lib/python2.7/site-packages/oslo_messaging/rpc/server.py",
 line 133, in _process_incoming
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server   File 
"/opt/bbc/openstack-11.0-master/nova/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 150, in dispatch
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server   File 
"/opt/bbc/openstack-11.0-master/nova/local/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py",
 line 121, in _do_dispatch
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server result = 
func(ctxt, **new_args)
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server   File 
"/opt/bbc/openstack-11.0-master/nova/local/lib/python2.7/site-packages/nova/exception.py",
 line 104, in wrapped
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server payload)
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server   File 
"/opt/bbc/openstack-11.0-master/nova/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 221, in __exit__
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server 
self.force_reraise()
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server   File 
"/opt/bbc/openstack-11.0-master/nova/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 197, in force_reraise
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb)
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server   File 
"/opt/bbc/openstack-11.0-master/nova/local/lib/python2.7/site-packages/nova/exception.py",
 line 83, in wrapped
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server return 
f(self, context, *args, **kw)
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server   File 
"/opt/bbc/openstack-11.0-master/nova/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 183, in decorated_function
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server 
LOG.warning(msg, e, instance=instance)
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server   File 
"/opt/bbc/openstack-11.0-master/nova/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 221, in __exit__
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server 
self.force_reraise()
  2016-06-27 15:04:21.364 10781 ERROR oslo_messaging.rpc.server   File 
"/opt/bbc/openstack-11.0-ma

[Yahoo-eng-team] [Bug 1403836] Re: Nova volume attach fails for an iSCSI disk with CHAP enabled.

2016-06-28 Thread Lucian Petrut
** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1403836

Title:
  Nova volume attach fails for an iSCSI disk with CHAP enabled.

Status in OpenStack Compute (nova):
  Fix Released
Status in os-win:
  Fix Released

Bug description:
  I was trying a nova volume-attach of a disk with CHAP enabled on
  Windows (Hyper-V driver). I noticed that the volume attach fails when
  CHAP authentication is enforced, while the same operation works
  without CHAP authentication set.

  My current setup is Juno based:

  I saw a similar bug reported as
  https://bugs.launchpad.net/nova/+bug/1397549  .  The fix of which is
  as per

  https://review.openstack.org/#/c/137623/ and
  https://review.openstack.org/#/c/134592/ .

  Even after incorporating these changes, things do not work; an
  additional fix is needed.

  Issue: Even with the code from the commits mentioned earlier, the
  problem is that when we do a nova volume-attach, the Hyper-V host
  first logs in to the portal and then attaches the volume to the
  target.

  Now, if we log in to the portal without CHAP authentication, it will
  fail (authentication failure), and hence the code needs to be changed
  here:
  https://github.com/openstack/nova/blob/master/nova/virt/hyperv/volumeutilsv2.py#L64-65

  
  Resolution: While creating/adding the new portal, we need to add it
  with the CHAP credentials (the same way it is done in target.connect).

  A sample snippet of the fix would be:
  if portal:
      portal[0].Update()
  else:
      # Adding target portal to iscsi initiator. Sending targets
      LOG.debug("Create a new portal")
      auth = {}
      if auth_username and auth_password:
          auth['AuthenticationType'] = self._CHAP_AUTH_TYPE
          auth['ChapUsername'] = auth_username
          auth['ChapSecret'] = auth_password
      LOG.debug(auth)
      portal = self._conn_storage.MSFT_iSCSITargetPortal
      portal.New(TargetPortalAddress=target_address,
                 TargetPortalPortNumber=target_port, **auth)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1403836/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1594466] Re: Additional params needed for openstack rc file

2016-06-28 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/331788
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=a86591864b69c663e3f0f63a532fc34190670e42
Submitter: Jenkins
Branch:master

commit a86591864b69c663e3f0f63a532fc34190670e42
Author: Matt Borland 
Date:   Mon Jun 20 11:33:42 2016 -0600

Add valuable exports to openstack RC file download

See bug for details; these exports help inform the openstack client about
its environment.

Change-Id: I025bdbc31c9352f5894e0ce4cdba153a341d739b
Closes-Bug: 1594466


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1594466

Title:
  Additional params needed for openstack rc file

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  There are some additional parameters needed for the openstack rc
  files, which are generated for download at: Access & Security >
  Download (appropriate) RC file.

  There are two kinds of files generated right now: Keystone v2 and
  Keystone v3.  Which you see as options are based on your
  installation/configuration.

  Due to changes for Keystone v3, there are some new exports that should be 
honored:
  OS_INTERFACE
  OS_IDENTITY_API_VERSION
  OS_AUTH_VERSION

  OS_ENDPOINT_TYPE should also be added for consistency.

  They should be produced in both the v2 and v3 versions of the file for
  compatibility purposes.
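
  For illustration, the downloaded RC files would gain lines like the
  following (the values shown here are examples only, not necessarily
  what Horizon emits):

  export OS_INTERFACE=public
  export OS_ENDPOINT_TYPE=publicURL
  export OS_IDENTITY_API_VERSION=3
  export OS_AUTH_VERSION=3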

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1594466/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1596984] [NEW] LBaaS: Models and migrations are out of sync

2016-06-28 Thread Elena Ezhova
Public bug reported:

TestModelsMigrations reports that there are a number of tables whose
models can be found neither in
neutron_lbaas.db.loadbalancer.loadbalancer_db nor in
neutron_lbaas.db.loadbalancer.models; these include third-party driver
tables.

Trace:
http://paste.openstack.org/show/523789/
Full log of dsvm-functional job:
http://logs.openstack.org/99/320999/26/check/gate-neutron-lbaas-dsvm-functional-nv/ff96da0/console.html
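
A usual remedy (a sketch under the assumption that third-party driver
tables should simply be excluded from the comparison; not the actual
patch) is an include_object filter in the models-vs-migrations test:

# Table names below are illustrative placeholders.
EXTERNAL_TABLES = frozenset(['thirdparty_driver_table'])

def include_object(obj, name, type_, reflected, compare_to):
    # Skip tables owned by third-party drivers so that only
    # neutron-lbaas' own models are compared against the migrations.
    if type_ == 'table' and name in EXTERNAL_TABLES:
        return False
    return True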

** Affects: neutron
 Importance: Undecided
 Assignee: Elena Ezhova (eezhova)
 Status: New


** Tags: lbaas

** Tags added: lbaas

** Changed in: neutron
 Assignee: (unassigned) => Elena Ezhova (eezhova)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1596984

Title:
  LBaaS: Models and migrations are out of sync

Status in neutron:
  New

Bug description:
  TestModelsMigrations reports that there are a number of tables whose
  models can be found neither in
  neutron_lbaas.db.loadbalancer.loadbalancer_db nor in
  neutron_lbaas.db.loadbalancer.models; these include third-party driver
  tables.

  Trace:
  http://paste.openstack.org/show/523789/
  Full log of dsvm-functional job:
  
http://logs.openstack.org/99/320999/26/check/gate-neutron-lbaas-dsvm-functional-nv/ff96da0/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1596984/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1593653] Re: DVR: cannot manually remove router from l3 agent

2016-06-28 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/332102
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=64726983f2161b32e516f3f12007e5f5a3d57e50
Submitter: Jenkins
Branch:master

commit 64726983f2161b32e516f3f12007e5f5a3d57e50
Author: Hong Hui Xiao 
Date:   Tue Jun 21 10:56:45 2016 +

Not auto schedule router when sync routers from agent

In this patch, router auto-scheduling is removed from sync_routers,
so that the reported bug can be fixed and a potential race can be
avoided according to [1].

The result of the patch is that the l3 agent can't get the router info
when the router is not bound to the l3 agent, and the router will be
removed during the agent's processing. This makes sense since, in the
neutron server, the router is not tied to the agent. For DVR, if there
are service ports on the agent host, the router info will still be
returned to the l3 agent.

[1] https://review.openstack.org/#/c/317949/

Change-Id: Id0a8cf7537fefd626df06064f915d2de7c1680c6
Co-Authored-By: John Schwarz 
Closes-Bug: #1593653


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1593653

Title:
  DVR: cannot manually remove router from l3 agent

Status in neutron:
  Fix Released

Bug description:
  This is a regression from this commit [1]. If there are DVR
  serviceable ports on the node with the agent, the server will now
  notify the agent with router_updated rather than router_removed;
  however, when updating the router, the agent will request router_info,
  and that's where the server will schedule the router back to this l3
  agent because autoscheduling is enabled.

  [1]
  https://review.openstack.org/#/q/c198710dc551bc0f79851a7801038b033088a8c2

  The potential fix might be to not schedule the router when the agent
  requests router_info.

  There are now two code paths that auto-schedule routers to an agent.

  The first one is get_router_ids, which binds all unscheduled routers
  to the agent. This happens at agent startup, revive, and update.

  The second one is sync_routers, which binds the specified routers to
  the agent. This happens when the agent requires some router info.

  Since 'router_id' was removed in bug 1594711, get_router_ids will
  always be called at agent startup, revive, and update, which covers
  all the cases that auto-scheduling needs; there is no need for the
  other code path.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1593653/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1587386] Re: Unshelve results in duplicated resource deallocated

2016-06-28 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/323269
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=f1320a7c2debf127a93773046adffb80563fd20b
Submitter: Jenkins
Branch:master

commit f1320a7c2debf127a93773046adffb80563fd20b
Author: Stephen Finucane 
Date:   Mon May 30 16:03:35 2016 +0100

Evaluate 'task_state' in resource (de)allocation

There are two types of VM states associated with shelving. The first,
'shelved' indicates that the VM has been powered off but the resources
remain allocated on the hypervisor. The second, 'shelved_offloaded',
indicates that the VM has been powered off and the resources freed.
When "unshelving" VMs in the latter state, the VM state does not change
from 'shelved_offloaded' until some time after the VM has been
"unshelved".

Change I83a5f06 introduced a change that allowed for deallocation of
resources when they were set to the 'shelved_offloaded' state. However,
the resource (de)allocation code path assumes any VM with a state of
'shelved_offloaded' should have resources deallocated from it, rather
than allocated to it. As the VM state has not changed when this code
path is executed, resources are incorrectly deallocated from the
instance twice.

Enhance the aforementioned check to account for task state in addition
to VM state. This ensures a VM that's still in 'shelved_offloaded'
state, but is in fact being unshelved, does not trigger deallocation.

Change-Id: Ie2e7b91937fc3d61bb1197fffc3549bebc65e8aa
Signed-off-by: Stephen Finucane 
Resolves-bug: #1587386
Related-bug: #1545675
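
A minimal sketch of the enhanced check described in the commit message
(state names abbreviated; an assumption for illustration, not Nova's
exact code):

SHELVED_OFFLOADED = 'shelved_offloaded'
UNSHELVING = 'unshelving'

def should_deallocate(vm_state, task_state):
    # Before the fix, vm_state alone decided: an instance that is being
    # unshelved still reports 'shelved_offloaded', so its resources
    # were deallocated a second time.
    return vm_state == SHELVED_OFFLOADED and task_state != UNSHELVING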


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1587386

Title:
  Unshelve results in duplicated resource deallocated

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===

  Shelve/unshelve operations fail when using "NFV flavors". This was
  reported on the mailing list initially.

  http://lists.openstack.org/pipermail/openstack-
  dev/2016-May/095631.html

  Steps to reproduce
  ==

  1. Create a flavor with 'hw:numa_nodes=2', 'hw:cpu_policy=dedicated' and 
'hw:mempage_size=large'
  2. Configure Tempest to use this new flavor
  3. Run Tempest tests

  Expected result
  ===

  All tests will pass.

  Actual result
  =

  The shelve/unshelve Tempest tests always result in a timeout exception 
  being raised, looking similar to the following, from [1]:

  Traceback (most recent call last):
File "tempest/api/compute/base.py", line 166, in server_check_teardown
  cls.server_id, 'ACTIVE')
File "tempest/common/waiters.py", line 95, in wait_for_server_status
  raise exceptions.TimeoutException(message)2016-05-22 22:25:30.697 
13974 ERROR tempest.api.compute.base TimeoutException: Request timed out
  Details: (ServerActionsTestJSON:tearDown) Server 
cae6fd47-0968-4922-a03e-3f2872e4eb52 failed to reach ACTIVE status and task 
state "None" within the required time (196 s). Current status: 
SHELVED_OFFLOADED. Current task state: None.

  The following errors are raised in the compute logs:

  Traceback (most recent call last):
File "/opt/stack/new/nova/nova/compute/manager.py", line 4230, in 
_unshelve_instance
  with rt.instance_claim(context, instance, limits):
File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 
271, in inner
  return f(*args, **kwargs)
File "/opt/stack/new/nova/nova/compute/resource_tracker.py", line 151, 
in instance_claim
  self._update_usage_from_instance(context, instance_ref)
File "/opt/stack/new/nova/nova/compute/resource_tracker.py", line 827, 
in _update_usage_from_instance
  self._update_usage(instance, sign=sign)
File "/opt/stack/new/nova/nova/compute/resource_tracker.py", line 666, 
in _update_usage
  self.compute_node, usage, free)
File "/opt/stack/new/nova/nova/virt/hardware.py", line 1482, in 
get_host_numa_usage_from_instance
  host_numa_topology, instance_numa_topology, free=free))
File "/opt/stack/new/nova/nova/virt/hardware.py", line 1348, in 
numa_usage_from_instances
  newcell.unpin_cpus(pinned_cpus)
File "/opt/stack/new/nova/nova/objects/numa.py", line 94, in unpin_cpus
  pinned=list(self.pinned_cpus))
  CPUPinningInvalid: Cannot pin/unpin cpus [6] from the following pinned 
set [0, 2, 4]

  [1] http://intel-openstack-ci-logs.ovh/86/319686/1/check/tempest-dsvm-
  full-nfv/b463722/testr_results.html.gz

  Environment
  ===

  1. Exact version of OpenStack you are running. See the following

  Commit '25fdf64'.

To manage notifications about this bug go to:
https://bugs.laun

[Yahoo-eng-team] [Bug 1597000] [NEW] Directives <hz-dynamic-table> and <transfer-table> don't work nicely together

2016-06-28 Thread Timur Sufiev
Public bug reported:

The new <hz-dynamic-table> directive provides an elegant way to reduce
the amount of boilerplate HTML templates one until recently had to
provide to render tables. Unfortunately, <transfer-table> doesn't work
with it because of its internal structure, and it's necessary to use
the old verbose markup inside its <allocated> and <available> sibling
tags. This has to be fixed.

** Affects: horizon
 Importance: Wishlist
 Assignee: Timur Sufiev (tsufiev-x)
 Status: In Progress

** Changed in: horizon
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1597000

Title:
  Directives <hz-dynamic-table> and <transfer-table> don't work nicely
  together

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The new <hz-dynamic-table> directive provides an elegant way to reduce
  the amount of boilerplate HTML templates one until recently had to
  provide to render tables. Unfortunately, <transfer-table> doesn't work
  with it because of its internal structure, and it's necessary to use
  the old verbose markup inside its <allocated> and <available> sibling
  tags. This has to be fixed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1597000/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597024] [NEW] List role_assignments?include_names=False returns Entities' Names

2016-06-28 Thread Sam Leong
Public bug reported:

When include_names=True, it shows the expected results, i.e. all the
names in addition to the IDs. However, it makes no difference when
include_names=False, as in the following:


GET 
https://localhost:35357/v3/role_assignments?user.id=44dfc2bd8f254a068d43e102a89fc355&include_names=False
 HTTP/1.1
Accept-Encoding: gzip,deflate
X-Auth-Token: bee62e952fba4bafb09ad73cdc968eca
Accept: application/json
Host: localhost:35357
Connection: Keep-Alive
User-Agent: Apache-HttpClient/4.1.1 (java 1.5)

HTTP/1.1 200 OK
Date: Thu, 02 Jun 2016 19:53:17 GMT
Server: Apache/2.4.10 (Debian)
Vary: X-Auth-Token
x-openstack-request-id: req-37962d1f-5d31-4d91-b8c3-1c598be4c82c
Content-Length: 765
Content-Type: application/json

{"role_assignments": [{"scope": {"domain": {"id":
"b4421df800714f33ae9cdcd37704a98e", "name":
"domainname_91455-201606021353160180"}}, "role": {"id":
"2d3797efdb684f57b35e63eb7f66b30c", "name":
"update_Role_91455-201606021353160180"}, "user": {"domain": {"id":
"b4421df800714f33ae9cdcd37704a98e", "name":
"domainname_91455-201606021353160180"}, "id":
"44dfc2bd8f254a068d43e102a89fc355", "name":
"DAname_91455-201606021353160180"}, "links": {"assignment":
"https://localhost:35357/v3/domains/b4421df800714f33ae9cdcd37704a98e/users/44dfc2bd8f254a068d43e102a89fc355/roles/2d3797efdb684f57b35e63eb7f66b30c"}}],
"links": {"self":
"https://localhost:35357/v3/role_assignments?user.id=44dfc2bd8f254a068d43e102a89fc355&include_names=False";,
"previous": null, "next": null}}



The workaround is to not specify include_names or to set include_names=0.

The bug is in the parser.

There is a parser 'def query_filter_is_true(cls, filter_value):' defined in
common/controller.py that returns False when filter_value is '0' and True
for anything else:

if (isinstance(filter_value, six.string_types) and filter_value == '0'):
    val = False
else:
    val = True

So basically it never treats the value 'False' as False; only '0'.
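
A possible fix, as a sketch (not necessarily the patch that will land),
is to also treat the string 'false' as False:

import six

def query_filter_is_true(filter_value):
    # Treat '0' and 'false' (any casing) as False; anything else is True.
    if isinstance(filter_value, six.string_types):
        return filter_value.lower() not in ('0', 'false')
    return bool(filter_value)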

** Affects: keystone
 Importance: Undecided
 Assignee: Sam Leong (chio-fai-sam-leong)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Sam Leong (chio-fai-sam-leong)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1597024

Title:
  List role_assignments?include_names=False returns Entities' Names

Status in OpenStack Identity (keystone):
  New

Bug description:
  When include_names=True, it shows the expected results, i.e. all the
  names in addition to the IDs. However, it makes no difference when
  include_names=False, as in the following:

  

  GET 
https://localhost:35357/v3/role_assignments?user.id=44dfc2bd8f254a068d43e102a89fc355&include_names=False
 HTTP/1.1
  Accept-Encoding: gzip,deflate
  X-Auth-Token: bee62e952fba4bafb09ad73cdc968eca
  Accept: application/json
  Host: localhost:35357
  Connection: Keep-Alive
  User-Agent: Apache-HttpClient/4.1.1 (java 1.5)

  HTTP/1.1 200 OK
  Date: Thu, 02 Jun 2016 19:53:17 GMT
  Server: Apache/2.4.10 (Debian)
  Vary: X-Auth-Token
  x-openstack-request-id: req-37962d1f-5d31-4d91-b8c3-1c598be4c82c
  Content-Length: 765
  Content-Type: application/json

  {"role_assignments": [{"scope": {"domain": {"id":
  "b4421df800714f33ae9cdcd37704a98e", "name":
  "domainname_91455-201606021353160180"}}, "role": {"id":
  "2d3797efdb684f57b35e63eb7f66b30c", "name":
  "update_Role_91455-201606021353160180"}, "user": {"domain": {"id":
  "b4421df800714f33ae9cdcd37704a98e", "name":
  "domainname_91455-201606021353160180"}, "id":
  "44dfc2bd8f254a068d43e102a89fc355", "name":
  "DAname_91455-201606021353160180"}, "links": {"assignment":
  
"https://localhost:35357/v3/domains/b4421df800714f33ae9cdcd37704a98e/users/44dfc2bd8f254a068d43e102a89fc355/roles/2d3797efdb684f57b35e63eb7f66b30c"}}],
  "links": {"self":
  
"https://localhost:35357/v3/role_assignments?user.id=44dfc2bd8f254a068d43e102a89fc355&include_names=False";,
  "previous": null, "next": null}}

  


  The workaround is to not specify include_names or to set include_names=0.

  The bug is in the parser.

  There is a parser 'def query_filter_is_true(cls, filter_value):' defined
  in common/controller.py that returns False when filter_value is '0' and
  True for anything else:

  if (isinstance(filter_value, six.string_types) and filter_value == '0'):
      val = False
  else:
      val = True

  So basically it never treats the value 'False' as False; only '0'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1597024/+subscriptions

-- 
Ma

[Yahoo-eng-team] [Bug 1076019] Re: Image perpetually in "SAVING" state if snapshot fails to create

2016-06-28 Thread Augustina Ragwitz
*** This bug is a duplicate of bug 1555065 ***
https://bugs.launchpad.net/bugs/1555065

** This bug is no longer a duplicate of bug 1064386
   Image stays in Saving State if any error occurs while snapshotting an 
instance
** This bug has been marked a duplicate of bug 1555065
   Image goes to saving state when we delete instance just after taking 
snapshot and remain the state forever

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1076019

Title:
  Image perpetually in "SAVING" state if snapshot fails to create

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  If a user tries to create an image from a snapshot (e.g., through
  Horizon), and the process of creating a snapshot fails (e.g., nova
  fails to extract the KVM snapshot to a disk because there isn't enough
  room in the temp directory), then the resulting entry in "nova image-
  list" will always be in the "SAVING" state.

  It would be easier for ops to identify that a problem has occurred
  with the snapshot process if, when a failure like this occurred, the
  state of the new image was changed to some kind of error state.

  I encountered this problem with essex, but looking at the code in
  trunk as of today, it doesn't look like there's any exception handling
  code in nova/libvirt/driver.py:snapshot when snapshot.extract fails.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1076019/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 991941] Re: Restarting the compute service while taking snapshot of vm makes the snapshot stuck in saving state

2016-06-28 Thread Augustina Ragwitz
*** This bug is a duplicate of bug 1555065 ***
https://bugs.launchpad.net/bugs/1555065

** This bug is no longer a duplicate of bug 1064386
   Image stays in Saving State if any error occurs while snapshotting an 
instance
** This bug has been marked a duplicate of bug 1555065
   Image goes to saving state when we delete instance just after taking 
snapshot and remain the state forever

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/991941

Title:
  Restarting the compute service while taking snapshot of vm makes the
  snapshot stuck in saving state

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  Branch: diablo Stable

  1. Create a VM and wait till it is active
  2. Take a snapshot of the VM
  3. While taking the snapshot, restart the compute service on the node
  where the VM is created

  Observation: the snapshot is stuck in the saving state

  Expected: the snapshot should go to the error state or come back to
  the active state

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/991941/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1582673] Re: Angular components without the href attribute have incorrect cursor

2016-06-28 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/316157
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=77d7e26135bd37bfe28b8f14c04c0d62dc6f9c4e
Submitter: Jenkins
Branch:master

commit 77d7e26135bd37bfe28b8f14c04c0d62dc6f9c4e
Author: Elizabeth Elwell 
Date:   Fri May 13 16:19:39 2016 +0100

Added correct cursor to components without the href attribute

Original Bootstrap CSS depends on empty href attributes to style
cursors for several components. Adding href attributes to link tags in
AngularJS will cause unwanted route changes. This styling fix ensures
the correct cursor styling is applied to link tags without the need
for the href attribute to be added.

Closes-Bug: 1582673
Co-Authored-By: Revon Mathews 
Change-Id: I9c65e54d07db7aaeef1585a6f21c31c354951609


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1582673

Title:
  Angular components without the href attribute have incorrect cursor

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Original Bootstrap's CSS depends on empty href attributes to style
  cursors for several components, but in AngularJS adding empty href
  attributes to link tags will cause unwanted route changes. As a result
  of the missing href attributes, styling is not applied correctly to
  some components.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1582673/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1594376] Re: Delete subnet fails with "ObjectDeletedError: Instance '' has been deleted, or its row is otherwise not present."

2016-06-28 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/333601
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=dfbe1fb7c510d556095a926fec6bfe6f4c153b47
Submitter: Jenkins
Branch:master

commit dfbe1fb7c510d556095a926fec6bfe6f4c153b47
Author: Carl Baldwin 
Date:   Thu Jun 23 15:39:47 2016 -0600

Fix code that's trying to read from a stale DB object

Change-Id: Ib601235b56c553c29618a4dfdf0608a7b2784c6f
Closes-Bug: #1594376


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1594376

Title:
  Delete subnet fails with "ObjectDeletedError: Instance '' has been deleted, or its row is otherwise not
  present."

Status in neutron:
  Fix Released

Bug description:
  Seen here:

  http://logs.openstack.org/91/327191/15/check/gate-tempest-dsvm-
  neutron-
  dvr/ed57c36/logs/screen-q-svc.txt.gz?level=TRACE#_2016-06-20_02_06_12_987

  2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource 
[req-3870a7f4-ae6c-4710-b55f-7f2179c3d48a tempest-NetworksTestDHCPv6-2051920777 
-] delete failed
  2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
  2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/resource.py", line 78, in resource
  2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
  2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/base.py", line 549, in delete
  2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource return 
self._delete(request, id, **kwargs)
  2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 148, in wrapper
  2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
  2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 221, in 
__exit__
  2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource 
self.force_reraise()
  2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 197, in 
force_reraise
  2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 138, in wrapper
  2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource return 
f(*args, **kwargs)
  2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/api/v2/base.py", line 571, in _delete
  2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource 
obj_deleter(request.context, id, **kwargs)
  2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/common/utils.py", line 600, in inner
  2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource return 
f(self, context, *args, **kwargs)
  2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py", line 1058, in 
delete_subnet
  2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource 
LOG.debug("Port %s deleted concurrently", a.port_id)
  2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/attributes.py", line 
237, in __get__
  2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource return 
self.impl.get(instance_state(instance), dict_)
  2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/attributes.py", line 
578, in get
  2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource value = 
state._load_expired(state, passive)
  2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/state.py", line 474, in 
_load_expired
  2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource 
self.manager.deferred_scalar_loader(self, toload)
  2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/loading.py", line 669, 
in load_scalar_attributes
  2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource raise 
orm_exc.ObjectDeletedError(state)
  2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource 
ObjectDeletedError: Instance '' has been 
deleted, or its row is otherwise not present.
  2016-06-20 02:06:12.987 18923 ERROR neutron.api.v2.resource 

  It looks like it's mostly a failure but not 100%; it might depend on the job.

  
http://logstash.opensta

[Yahoo-eng-team] [Bug 1597077] [NEW] Mitaka token 'expires' padding differs between POST and GET/HEAD

2016-06-28 Thread Kim Jensen
Public bug reported:

We are using fernet tokens and found that with Mitaka the 'expires'
values returned by the token POST and token GET/HEAD differ when one
would expect these to be the same.

POST /v2.0/tokens

Response:
{"access": {
   "token":{
  "issued_at": "2016-06-28T18:48:56.00Z",
  "expires": "2016-06-28T20:48:56Z",
  "id": 
"gABXcsaYGn-YFLOkLMfgq0JeBePL9s4WxiYbgSOyrAC83nUJhJh4c3xMTi_ZhaXkWH1S5BmvsvJwj90I_bKgiJlv5fQf7-wCdyPtTd7O_TcAleIBj7uOhcFhC1au7Fx9qnAkdg6DBIX_EiQLaC_ylB87nl05nQ",
  "audit_ids": ["OGGd2bYeTQOi-ZHZ5vYqVw"]
   },
   "serviceCatalog": [],
   "user":{
  "username": "account1",
  "roles_links": [],
  "id": "af4012992a154f158201f0590013bc32",
  "roles": [],
  "name": "account1"
   },
   "metadata":{
  "is_admin": 0,
  "roles": []
   }
}}

GET /v2.0/tokens/gABXcsaYGn-
YFLOkLMfgq0JeBePL9s4WxiYbgSOyrAC83nUJhJh4c3xMTi_ZhaXkWH1S5BmvsvJwj90I_bKgiJlv5fQf7-wCdyPtTd7O_TcAleIBj7uOhcFhC1au7Fx9qnAkdg6DBIX_EiQLaC_ylB87nl05nQ

Response:
{"access": {
   "token":{
  "issued_at": "2016-06-28T18:48:56.00Z",
  "expires": "2016-06-28T20:48:56.00Z",
  "id": 
"gABXcsaYGn-YFLOkLMfgq0JeBePL9s4WxiYbgSOyrAC83nUJhJh4c3xMTi_ZhaXkWH1S5BmvsvJwj90I_bKgiJlv5fQf7-wCdyPtTd7O_TcAleIBj7uOhcFhC1au7Fx9qnAkdg6DBIX_EiQLaC_ylB87nl05nQ",
  "audit_ids": ["OGGd2bYeTQOi-ZHZ5vYqVw"]
   },
   "serviceCatalog": [],
   "user":{
  "username": "account1",
  "roles_links": [],
  "id": "af4012992a154f158201f0590013bc32",
  "roles": [],
  "name": "account1"
   },
   "metadata":{
  "is_admin": 0,
  "roles": []
   }
}}

The POST response:"expires": "2016-06-28T20:48:56Z",
The GET response: "expires": "2016-06-28T20:48:56.00Z",
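
For illustration, the difference is consistent with the same expiry
being rendered with and without subsecond padding (a sketch of the
presumed cause, not keystone's exact formatting code):

import datetime

expires = datetime.datetime(2016, 6, 28, 20, 48, 56)
print(expires.strftime('%Y-%m-%dT%H:%M:%SZ'))     # no padding (POST)
print(expires.strftime('%Y-%m-%dT%H:%M:%S.%fZ'))  # padded (GET/HEAD)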

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1597077

Title:
  Mitaka token 'expires' padding differs between POST and GET/HEAD

Status in OpenStack Identity (keystone):
  New

Bug description:
  We are using fernet tokens and found that with Mitaka the 'expires'
  values returned by the token POST and token GET/HEAD differ when one
  would expect these to be the same.

  POST /v2.0/tokens

  Response:
  {"access": {
 "token":{
"issued_at": "2016-06-28T18:48:56.00Z",
"expires": "2016-06-28T20:48:56Z",
"id": 
"gABXcsaYGn-YFLOkLMfgq0JeBePL9s4WxiYbgSOyrAC83nUJhJh4c3xMTi_ZhaXkWH1S5BmvsvJwj90I_bKgiJlv5fQf7-wCdyPtTd7O_TcAleIBj7uOhcFhC1au7Fx9qnAkdg6DBIX_EiQLaC_ylB87nl05nQ",
"audit_ids": ["OGGd2bYeTQOi-ZHZ5vYqVw"]
 },
 "serviceCatalog": [],
 "user":{
"username": "account1",
"roles_links": [],
"id": "af4012992a154f158201f0590013bc32",
"roles": [],
"name": "account1"
 },
 "metadata":{
"is_admin": 0,
"roles": []
 }
  }}

  GET /v2.0/tokens/gABXcsaYGn-
  
YFLOkLMfgq0JeBePL9s4WxiYbgSOyrAC83nUJhJh4c3xMTi_ZhaXkWH1S5BmvsvJwj90I_bKgiJlv5fQf7-wCdyPtTd7O_TcAleIBj7uOhcFhC1au7Fx9qnAkdg6DBIX_EiQLaC_ylB87nl05nQ

  Response:
  {"access": {
 "token":{
"issued_at": "2016-06-28T18:48:56.00Z",
"expires": "2016-06-28T20:48:56.00Z",
"id": 
"gABXcsaYGn-YFLOkLMfgq0JeBePL9s4WxiYbgSOyrAC83nUJhJh4c3xMTi_ZhaXkWH1S5BmvsvJwj90I_bKgiJlv5fQf7-wCdyPtTd7O_TcAleIBj7uOhcFhC1au7Fx9qnAkdg6DBIX_EiQLaC_ylB87nl05nQ",
"audit_ids": ["OGGd2bYeTQOi-ZHZ5vYqVw"]
 },
 "serviceCatalog": [],
 "user":{
"username": "account1",
"roles_links": [],
"id": "af4012992a154f158201f0590013bc32",
"roles": [],
"name": "account1"
 },
 "metadata":{
"is_admin": 0,
"roles": []
 }
  }}

  The POST response:"expires": "2016-06-28T20:48:56Z",
  The GET response: "expires": "2016-06-28T20:48:56.00Z",

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1597077/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1596135] Re: Make raw_input py3 compatible

2016-06-28 Thread Travis McPeak
** Changed in: bandit
   Status: New => Invalid

** No longer affects: bandit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1596135

Title:
  Make raw_input py3 compatible

Status in anvil:
  New
Status in daisycloud-core:
  New
Status in Freezer:
  New
Status in Packstack:
  New
Status in Poppy:
  New
Status in python-solumclient:
  New
Status in vmware-nsx:
  New

Bug description:
In Python 3, raw_input was renamed to input, so the code needs to be
modified to make it compatible.


  https://github.com/openstack/python-
  solumclient/blob/ea37d226a6ba55d7ad4024233b9d8001aab92ca5/contrib
  /setup-tools/solum-app-setup.py#L76
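
A common compatibility shim, as a sketch:

try:
    input = raw_input  # Python 2: make input() read raw strings
except NameError:
    pass  # Python 3: raw_input is gone and input() already returns str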

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1596135/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597101] [NEW] WebSSO username shows as a UUID in the Horizon page

2016-06-28 Thread Roxana Gherle
Public bug reported:

When you login into Horizon using Web Single Sign On with saml2 or oidc 
federation protocols, the logged in user shows as a UUID (the user's ID) in the 
Horizon page. This was different before when the specific username from the 
external identity provider was showed by the Horizon dashboard.
This happens because both the unscoped and scoped federated tokens have both 
the user.id and user.name the ID of the user. The actual username does not show 
in the federated token.

This change in behavior seems to have happened after the introduction of
the shadow users functionality: in pre-Mitaka releases the token contained
the username for both user.id and user.name, but now both contain the UUID.
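
A sketch of the expected behavior (hypothetical helper, not keystone's
actual code):

def federated_token_user(shadow_user_id, assertion_username):
    # The token's user.name should keep the IdP-supplied username
    # rather than repeating the shadow user's UUID.
    return {'id': shadow_user_id, 'name': assertion_username}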

** Affects: keystone
 Importance: Undecided
 Assignee: Roxana Gherle (roxana-gherle)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Roxana Gherle (roxana-gherle)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1597101

Title:
  WebSSO username shows as a UUID in the Horizon page

Status in OpenStack Identity (keystone):
  New

Bug description:
  When you log in to Horizon using Web Single Sign-On with the saml2 or
  oidc federation protocols, the logged-in user shows as a UUID (the
  user's ID) in the Horizon page. This is different from before, when the
  specific username from the external identity provider was shown by the
  Horizon dashboard.
  This happens because both the unscoped and scoped federated tokens carry
  the user's ID in both user.id and user.name. The actual username does
  not appear in the federated token.

  This change in behavior seems to have happened after the introduction
  of the shadow users functionality: in pre-Mitaka releases the token
  contained the username for both user.id and user.name, but now both
  contain the UUID.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1597101/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597126] [NEW] problems with tasks, task schemas

2016-06-28 Thread Brian Rosmaita
Public bug reported:

The v2/schemas/tasks and v2/schemas/task responses aren't incorrect, but
they're not as accurate as they could be.

task schema:
- missing "links": 
[{"href":"{schema}","rel":"describedby"},{"href":"{self}","rel":"self"}]
- should have "additionalProperties": false
- some properties are already marked "readOnly": true, but there are other
properties (e.g., "id") that should also be marked as read-only
- could have a list of required properties added (I think that would be all
except expires_at; the schema says it can have a null value, but I think in
practice it doesn't appear in the response unless it's non-null)

tasks schema:
- same as the above readOnly situation
- same as the above required and expires_at situation
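
For illustration, a sketch of the tightened task schema (property list
abridged; the exact shape is an assumption):

{
  "name": "task",
  "additionalProperties": false,
  "links": [
    {"href": "{schema}", "rel": "describedby"},
    {"href": "{self}", "rel": "self"}
  ],
  "properties": {
    "id": {"type": "string", "readOnly": true}
  },
  "required": ["id", "type", "status"]
}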

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1597126

Title:
  problems with tasks, task schemas

Status in Glance:
  New

Bug description:
  The v2/schemas/tasks and v2/schemas/task responses aren't incorrect,
  but they're not as accurate as they could be.

  task schema:
  - missing "links": 
[{"href":"{schema}","rel":"describedby"},{"href":"{self}","rel":"self"}]
  - should have "additionalProperties": false
  - some properties are already marked "readOnly": true, but there are other
  properties (e.g., "id") that should also be marked as read-only
  - could have a list of required properties added (I think that would be all
  except expires_at; the schema says it can have a null value, but I think in
  practice it doesn't appear in the response unless it's non-null)

  tasks schema:
  - same as the above readOnly situation
  - same as the above required and expires_at situation

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1597126/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597132] [NEW] FWaaS: Create Firewall fails with "NoSuchOptError: no such option: router_distributed"

2016-06-28 Thread Sumit Naiksatam
Public bug reported:

This is seen in a setup where the stock L3 plugin
(neutron.services.l3_router.l3_router_plugin:L3RouterPlugin) is not
configured, but instead, a different L3 plugin is used. The create
firewall operation fails with the following exception:

2016-06-28 15:24:46.940 12176 DEBUG routes.middleware 
[req-d159be6a-85cd-44a7-a44e-d50168022948 80852c691f3448a0b536c7f573a53d02 
917cc98b9116461b9c36ba7aa3a7cdc7 - - -] Matched POST /fw/firewalls.json 
__call__ /usr/lib/python2.7/site-packages/routes/middleware.py:100
2016-06-28 15:24:46.941 12176 DEBUG routes.middleware 
[req-d159be6a-85cd-44a7-a44e-d50168022948 80852c691f3448a0b536c7f573a53d02 
917cc98b9116461b9c36ba7aa3a7cdc7 - - -] Route path: '/fw/firewalls.:(format)', 
defaults: {'action': u'create', 'controller': >} __call__ 
/usr/lib/python2.7/site-packages/routes/middleware.py:102
2016-06-28 15:24:46.941 12176 DEBUG routes.middleware 
[req-d159be6a-85cd-44a7-a44e-d50168022948 80852c691f3448a0b536c7f573a53d02 
917cc98b9116461b9c36ba7aa3a7cdc7 - - -] Match dict: {'action': u'create', 
'controller': >, 
'format': u'json'} __call__ 
/usr/lib/python2.7/site-packages/routes/middleware.py:103
2016-06-28 15:24:46.956 12176 DEBUG neutron.api.v2.base 
[req-d159be6a-85cd-44a7-a44e-d50168022948 80852c691f3448a0b536c7f573a53d02 
917cc98b9116461b9c36ba7aa3a7cdc7 - - -] Request body: {u'firewall': {u'shared': 
False, u'description': u"{'network_function_id': 
'b875efff-8fd5-4a9a-92e6-19a74c528f7f'}", u'firewall_policy_id': 
u'35d8b1f9-c0aa-478d-806b-7904e80f13fc', u'name': u'FWaaS-provider', 
u'admin_state_up': True}} prepare_request_body 
/usr/lib/python2.7/site-packages/neutron/api/v2/base.py:656
2016-06-28 15:24:46.957 12176 DEBUG neutron.api.v2.base 
[req-d159be6a-85cd-44a7-a44e-d50168022948 80852c691f3448a0b536c7f573a53d02 
917cc98b9116461b9c36ba7aa3a7cdc7 - - -] Unknown quota resources ['firewall']. 
_create /usr/lib/python2.7/site-packages/neutron/api/v2/base.py:458
2016-06-28 15:24:46.957 12176 DEBUG 
neutron_fwaas.services.firewall.fwaas_plugin 
[req-d159be6a-85cd-44a7-a44e-d50168022948 80852c691f3448a0b536c7f573a53d02 
917cc98b9116461b9c36ba7aa3a7cdc7 - - -] create_firewall() called 
create_firewall 
/usr/lib/python2.7/site-packages/neutron_fwaas/services/firewall/fwaas_plugin.py:230
2016-06-28 15:24:46.958 12176 DEBUG neutron_fwaas.db.firewall.firewall_db 
[req-d159be6a-85cd-44a7-a44e-d50168022948 80852c691f3448a0b536c7f573a53d02 
917cc98b9116461b9c36ba7aa3a7cdc7 - - -] create_firewall() called 
create_firewall 
/usr/lib/python2.7/site-packages/neutron_fwaas/db/firewall/firewall_db.py:302
2016-06-28 15:24:46.959 12176 ERROR neutron.api.v2.resource 
[req-d159be6a-85cd-44a7-a44e-d50168022948 80852c691f3448a0b536c7f573a53d02 
917cc98b9116461b9c36ba7aa3a7cdc7 - - -] create failed
2016-06-28 15:24:46.959 12176 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
2016-06-28 15:24:46.959 12176 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/api/v2/resource.py", line 83, in 
resource
2016-06-28 15:24:46.959 12176 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
2016-06-28 15:24:46.959 12176 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/api/v2/base.py", line 410, in create
2016-06-28 15:24:46.959 12176 ERROR neutron.api.v2.resource return 
self._create(request, body, **kwargs)
2016-06-28 15:24:46.959 12176 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 146, in wrapper
2016-06-28 15:24:46.959 12176 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
2016-06-28 15:24:46.959 12176 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 204, in __exit__
2016-06-28 15:24:46.959 12176 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2016-06-28 15:24:46.959 12176 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 136, in wrapper
2016-06-28 15:24:46.959 12176 ERROR neutron.api.v2.resource return f(*args, 
**kwargs)
2016-06-28 15:24:46.959 12176 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/api/v2/base.py", line 521, in _create
2016-06-28 15:24:46.959 12176 ERROR neutron.api.v2.resource obj = 
do_create(body)
2016-06-28 15:24:46.959 12176 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/api/v2/base.py", line 503, in 
do_create
2016-06-28 15:24:46.959 12176 ERROR neutron.api.v2.resource 
request.context, reservation.reservation_id)
2016-06-28 15:24:46.959 12176 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 204, in __exit__
2016-06-28 15:24:46.959 12176 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2016-06-28 15:24:46.959 12176 ERROR neutron.api.v2.resource   File 
"/usr/lib/python2.7/site-packages/neutron/api/v2/bas

[Yahoo-eng-team] [Bug 1597140] [NEW] Data Table Actions need context

2016-06-28 Thread Diana Whitten
Public bug reported:

Data Table action buttons/anchors need a context class that makes it
possible to customize button experiences on a global scale.

** Affects: horizon
 Importance: Wishlist
 Assignee: Diana Whitten (hurgleburgler)
 Status: In Progress


** Tags: branding

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1597140

Title:
  Data Table Actions need context

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Data Table action buttons/anchors need a context class that makes it
  possible to customize button experiences on a global scale.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1597140/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597145] [NEW] contrail-alarm-gen provisioning works only once

2016-06-28 Thread Anish Mehta
Public bug reported:

contrail-alarm-gen provisioning works only once.
fixup_contrail_alarm_gen assumes that the conf file is being changed for the 
first time.
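
A plausible fix is to make the fixup idempotent, so a second provisioning
run leaves an already-edited conf file alone. Below is a minimal Python
sketch of that idea; the file path, section, and option names are
illustrative assumptions, not the actual provisioning code:

    # Hypothetical sketch: an idempotent conf-file fixup, so a second
    # provisioning run is a no-op instead of assuming a pristine file.
    from six.moves import configparser

    def fixup_alarm_gen_conf(path='/etc/contrail/contrail-alarm-gen.conf',
                             brokers='127.0.0.1:9092'):
        cfg = configparser.RawConfigParser()
        cfg.read(path)  # tolerates a missing file
        if not cfg.has_section('DEFAULTS'):
            cfg.add_section('DEFAULTS')
        if (cfg.has_option('DEFAULTS', 'kafka_broker_list') and
                cfg.get('DEFAULTS', 'kafka_broker_list') == brokers):
            return False  # already fixed up; nothing to do
        cfg.set('DEFAULTS', 'kafka_broker_list', brokers)
        with open(path, 'w') as f:
            cfg.write(f)
        return True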

** Affects: horizon
 Importance: Undecided
 Status: New

** Affects: juniperopenstack
 Importance: Undecided
 Status: New

** Affects: juniperopenstack/r3.0
 Importance: Undecided
 Status: In Progress


** Tags: analytics

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1597145

Title:
  contrail-alarm-gen provisioning works only once

Status in OpenStack Dashboard (Horizon):
  New
Status in Juniper Openstack:
  New
Status in Juniper Openstack r3.0 series:
  In Progress

Bug description:
  contrail-alarm-gen provisioning works only once.
  fixup_contrail_alarm_gen assumes that the conf file is being changed for the 
first time.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1597145/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597145] [Review update] R3.0

2016-06-28 Thread OpenContrail Admin
Review in progress for https://review.opencontrail.org/21511 
Submitter: Anish Mehta (ameht...@gmail.com) 


** Also affects: juniperopenstack/r3.0
   Importance: Undecided
   Status: New

** Also affects: juniperopenstack/r3.0
   Importance: Undecided
   Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1597145

Title:
  contrail-alarm-gen provisioning works only once

Status in OpenStack Dashboard (Horizon):
  New
Status in Juniper Openstack:
  In Progress
Status in Juniper Openstack r3.0 series:
  In Progress
Status in Juniper Openstack trunk series:
  In Progress

Bug description:
  contrail-alarm-gen provisioning works only once.
  fixup_contrail_alarm_gen assumes that the conf file is being changed for the 
first time.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1597145/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597145] [Review update] master

2016-06-28 Thread OpenContrail Admin
Review in progress for https://review.opencontrail.org/21512 
Submitter: Anish Mehta (ameht...@gmail.com) 


** Also affects: juniperopenstack/trunk
   Importance: Undecided
   Status: New

** Also affects: juniperopenstack/trunk
   Importance: Undecided
   Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1597145

Title:
  contrail-alarm-gen provisioning works only once

Status in OpenStack Dashboard (Horizon):
  New
Status in Juniper Openstack:
  In Progress
Status in Juniper Openstack r3.0 series:
  In Progress
Status in Juniper Openstack trunk series:
  In Progress

Bug description:
  contrail-alarm-gen provisioning works only once.
  fixup_contrail_alarm_gen assumes that the conf file is being changed for the 
first time.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1597145/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597160] [NEW] linuxbridge agent-id is not the linuxbridge agent uuid

2016-06-28 Thread linbing
Public bug reported:

The linuxbridge agent makes an RPC call to the API server; the method
called is 'get_devices_details_list', and it passes an agent_id
parameter to the RPC server. That parameter is not actually used by the
API server; it only appears in debug output messages, so its sole
purpose is to identify the calling agent. If that is all it is for, why
not use the agent UUID instead of 'lb' + the physical interface MAC
(like lb00505636ff2d)? A UUID makes it quicker to see which linuxbridge
host is calling than a MAC address does. And since the host parameter
already identifies the calling linuxbridge host, why not delete the
redundant parameter?

./server.log:48649:2016-06-27 16:36:56.403 108124 DEBUG
neutron.plugins.ml2.rpc [req-d8a17733-6e4e-4012-8157-e7b9403c127d - - -
- -] Device tapb57d75e8-f4 details requested by agent lb00505636ff2d
with host com-net get_device_details /usr/lib/python2.7/site-
packages/neutron/plugins/ml2/rpc.py:70
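
For illustration only (not the real agent code), here is a sketch of the
two identifier styles being compared; the helper names are hypothetical:

    # Illustrative sketch of the two agent_id styles from the report.
    import uuid

    def mac_based_agent_id(mac):
        # Current style: 'lb' + physical interface MAC without colons,
        # e.g. '00:50:56:36:ff:2d' -> 'lb00505636ff2d'.
        return 'lb' + mac.replace(':', '')

    def uuid_based_agent_id():
        # Suggested alternative: a UUID generated once and persisted
        # by the agent (generated fresh here for simplicity).
        return str(uuid.uuid4())

    print(mac_based_agent_id('00:50:56:36:ff:2d'))  # lb00505636ff2d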

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1597160

Title:
  linuxbridge agent-id is not the linuxbridge agent uuid

Status in neutron:
  New

Bug description:
  The linuxbridge agent makes an RPC call to the API server; the method
  called is 'get_devices_details_list', and it passes an agent_id
  parameter to the RPC server. That parameter is not actually used by
  the API server; it only appears in debug output messages, so its sole
  purpose is to identify the calling agent. If that is all it is for,
  why not use the agent UUID instead of 'lb' + the physical interface
  MAC (like lb00505636ff2d)? A UUID makes it quicker to see which
  linuxbridge host is calling than a MAC address does. And since the
  host parameter already identifies the calling linuxbridge host, why
  not delete the redundant parameter?


  ./server.log:48649:2016-06-27 16:36:56.403 108124 DEBUG
  neutron.plugins.ml2.rpc [req-d8a17733-6e4e-4012-8157-e7b9403c127d - -
  - - -] Device tapb57d75e8-f4 details requested by agent lb00505636ff2d
  with host com-net get_device_details /usr/lib/python2.7/site-
  packages/neutron/plugins/ml2/rpc.py:70

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1597160/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1593846] Re: Fix designate dns driver for SSL based endpoints

2016-06-28 Thread Naoaki MAEDA
** Also affects: ubuntu
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1593846

Title:
  Fix designate dns driver for SSL based endpoints

Status in neutron:
  Fix Committed
Status in openstack-manuals:
  New
Status in Ubuntu:
  New

Bug description:
  https://review.openstack.org/330817
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/neutron" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit c705e2f9f6c7b4a9db4a80a764268e490ea41f01
  Author: imran malik 
  Date:   Wed Jun 8 02:45:32 2016 -0700

  Fix designate dns driver for SSL based endpoints
  
  Allow setting options in designate section to specify if want
  to skip SSL cert check. This makes it possible to work with HTTPS
  based endpoints, the default behavior of keystoneclient is to always
  set verify=True however in current code, one cannot either provide
  a valid CA cert or skip the verification.
  
  DocImpact: Introduce two additional options for `[designate]` section
  in neutron.conf
  CONF.designate.insecure to allow insecure connections over SSL.
  CONF.designate.ca_cert for a valid cert when connecting over SSL
  
  Change-Id: Ic371cc11d783618c38ee40a18206b0c2a197bb3e
  Closes-Bug: #1588067
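
  Based on the option names above, the resulting [designate] section in
  neutron.conf could look like this (values illustrative):

      [designate]
      # Skip SSL certificate verification (testing only):
      insecure = True
      # Or supply a CA bundle for a valid HTTPS endpoint instead:
      # ca_cert = /etc/ssl/certs/designate-ca.pem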

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1593846/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1517839] Re: Make CONF.set_override with parameter enforce_type=True by default

2016-06-28 Thread ChangBo Guo(gcb)
** Also affects: glance
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1517839

Title:
  Make CONF.set_override with parameter enforce_type=True by default

Status in Cinder:
  In Progress
Status in cloudkitty:
  Fix Released
Status in Designate:
  Fix Released
Status in Glance:
  New
Status in heat:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in Manila:
  Fix Released
Status in Murano:
  Fix Released
Status in neutron:
  Won't Fix
Status in OpenStack Compute (nova):
  In Progress
Status in oslo.config:
  In Progress
Status in oslo.messaging:
  Fix Released
Status in Rally:
  Fix Released
Status in watcher:
  Fix Released

Bug description:
  1. Problems:
     oslo_config provides the method CONF.set_override [1], which developers
  usually use to change a config option's value in tests. That is convenient.
     By default the parameter enforce_type=False, so the override's type and
  value are not checked at all. With enforce_type=True, the override's type
  and value are checked. In production (runtime) code, oslo_config always
  checks a config option's value.
     In short, we test and run code in different ways, so there is a gap: a
  config option with the wrong type or an invalid value can pass tests while
  enforce_type=False in consuming projects. That means some invalid or wrong
  tests are in our code base.
     There is a nova POC result from enabling "enforce_type=true" [2], and
  the resulting violations had to be fixed in [3].

     [1] https://github.com/openstack/oslo.config/blob/master/oslo_config/cfg.py#L2173
     [2] http://logs.openstack.org/16/242416/1/check/gate-nova-python27/97b5eff/testr_results.html.gz
     [3] https://review.openstack.org/#/c/242416/
         https://review.openstack.org/#/c/242717/
         https://review.openstack.org/#/c/243061/

  2. Proposal
     1) Make the method CONF.set_override use enforce_type=True in consuming
  projects, and fix the violations that enforce_type=True exposes in each
  project.

     2) Make the method CONF.set_override use enforce_type=True by default
  in oslo_config.

     I hope someone from each consuming project can help enable
  enforce_type=True there and fix the violations.

     You can find more details and comments in
  https://etherpad.openstack.org/p/enforce_type_true_by_default
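
  A minimal sketch of the gap, assuming a registered integer option;
  with enforce_type=True the bad override below is rejected up front
  instead of slipping into tests:

      # Sketch: enforce_type=False stores the raw override, so a str
      # can masquerade as an int in tests; enforce_type=True validates.
      from oslo_config import cfg

      CONF = cfg.ConfigOpts()
      CONF.register_opt(cfg.IntOpt('workers', default=1))

      CONF.set_override('workers', '4')  # accepted, stored as a str
      print(type(CONF.workers))          # str, not int

      try:
          CONF.set_override('workers', 'oops', enforce_type=True)
      except ValueError:
          print('rejected: not a valid integer')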

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1517839/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597168] [NEW] There's no way to update '--shared' argument of qos-policy to False by using qos-policy-update command.

2016-06-28 Thread xiewj
Public bug reported:

In Mitaka, there is no way to update the '--shared' attribute of a QoS
policy to False using the qos-policy-update command.
We may genuinely need to do this: in the QoS policy RBAC scenario, for
example, updating '--shared' to False is how we change the RBAC policy
and stop sharing the QoS policy with all tenants.

[root@localhost devstack]# neutron qos-policy-create qos-policy-01
Created a new policy:
+-+--+
| Field   | Value|
+-+--+
| description |  |
| id  | 53e278d6-ad97-4875-9eb3-374e46d5fe9c |
| name| qos-policy-01|
| rules   |  |
| shared  | False|
| tenant_id   | aced7a29bb134dec82307a880d1cc542 |
+-+--+
[root@localhost devstack]# neutron qos-policy-update qos-policy-01 --shared 
Updated policy: qos-policy-01
[root@localhost devstack]# neutron qos-policy-show qos-policy-01
+-+--+
| Field   | Value|
+-+--+
| description |  |
| id  | 53e278d6-ad97-4875-9eb3-374e46d5fe9c |
| name| qos-policy-01|
| rules   |  |
| shared  | True |
| tenant_id   | aced7a29bb134dec82307a880d1cc542 |
+-+--+
[root@localhost devstack]# neutron qos-policy-update qos-policy-01 
--shared=False
usage: neutron qos-policy-update [-h] [--request-format {json}] [--name NAME]
 [--description DESCRIPTION] [--shared]
 POLICY
neutron qos-policy-update: error: argument --shared: ignored explicit argument 
u'False'
Try 'neutron help qos-policy-update' for more information.
[root@localhost devstack]# 

[root@localhost devstack]# neutron help qos-policy-update
usage: neutron qos-policy-update [-h] [--request-format {json}] [--name NAME]
 [--description DESCRIPTION] [--shared]
 POLICY

Update a given qos policy.

positional arguments:
  POLICYID or name of policy to update.

optional arguments:
  -h, --helpshow this help message and exit
  --request-format {json}
DEPRECATED! Only JSON request format is supported.
  --name NAME   Name of QoS policy.
  --description DESCRIPTION
Description of the QoS policy.
  --shared  Accessible by other tenants. Set shared to True
(default is False).
[root@localhost devstack]#
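
The error comes from '--shared' being defined as a flag that takes no
value, so any explicit argument is rejected. Below is a short sketch of
the behaviour, with a paired negative flag ('--no-shared', hypothetical
here, not an existing neutronclient option) as one common way out:

    # Sketch: a store_true flag rejects '--shared=False'; a paired
    # '--no-shared' flag (hypothetical) allows unsetting the attribute.
    import argparse

    parser = argparse.ArgumentParser(prog='qos-policy-update')
    parser.add_argument('--shared', action='store_true', default=None)
    parser.add_argument('--no-shared', dest='shared',
                        action='store_false', default=None)

    print(parser.parse_args(['--shared']).shared)     # True
    print(parser.parse_args(['--no-shared']).shared)  # False
    # parse_args(['--shared=False']) exits with "error: argument
    # --shared: ignored explicit argument 'False'"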

** Affects: neutron
 Importance: Undecided
 Assignee: QunyingRan (ran-qunying)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1597168

Title:
  There's no way to update '--shared' argument of qos-policy to False by
  using qos-policy-update command.

Status in neutron:
  New

Bug description:
  In Mitaka, there is no way to update the '--shared' attribute of a
  QoS policy to False using the qos-policy-update command.
  We may genuinely need to do this: in the QoS policy RBAC scenario,
  for example, updating '--shared' to False is how we change the RBAC
  policy and stop sharing the QoS policy with all tenants.

  [root@localhost devstack]# neutron qos-policy-create qos-policy-01
  Created a new policy:
  +-+--+
  | Field   | Value|
  +-+--+
  | description |  |
  | id  | 53e278d6-ad97-4875-9eb3-374e46d5fe9c |
  | name| qos-policy-01|
  | rules   |  |
  | shared  | False|
  | tenant_id   | aced7a29bb134dec82307a880d1cc542 |
  +-+--+
  [root@localhost devstack]# neutron qos-policy-update qos-policy-01 --shared   
  
  Updated policy: qos-policy-01
  [root@localhost devstack]# neutron qos-policy-show qos-policy-01
  +-+--+
  | Field   | Value|
  +-+--+
  | description |  |
  | id  | 53e278d6-ad97-4875-9eb3-374e46d5fe9c |
  | name| qos

[Yahoo-eng-team] [Bug 1495463] Re: While creating a firewall for another tenant which does not have a router or firewall policy, the firewall gets created and comes into active state.

2016-06-28 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1495463

Title:
  While creating a firewall for another tenant which does not have a
  router or firewall policy, the firewall gets created and comes into
  active state.

Status in neutron:
  Expired

Bug description:
  When a firewall is created for another tenant that has no router or
  firewall policy of its own, the firewall is still created and comes
  into the active state.

  Steps followed:

  i.    There are two tenants, admin and demo. Create a router and a
  network, and add a router interface for admin (nothing for the demo
  user).
  ii.   Source the rc file for the admin user and create a firewall
  rule and a firewall policy.
  iii.  Try to create a firewall for the demo user by passing
  --tenant-id 

  Observation: the firewall gets created and its status becomes ACTIVE
  even though the tenant does not have any router, network, firewall
  rule, or firewall policy. See the attached file "firewall created for
  dem user.txt" for reference. Firewall creation should not succeed
  when the given tenant has no firewall policy or rules.
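
  A hedged sketch of the kind of pre-create check being requested; the
  names are hypothetical, not the actual neutron_fwaas plugin code. The
  transcript below shows the current behaviour:

      # Hypothetical validation sketch: refuse firewall creation when
      # the target tenant owns no firewall policy.
      class PolicyMissing(Exception):
          pass

      def validate_firewall_create(firewall, policies_by_tenant):
          # firewall: dict from the request body;
          # policies_by_tenant: tenant_id -> list of owned policies.
          tenant_id = firewall['tenant_id']
          if not policies_by_tenant.get(tenant_id):
              raise PolicyMissing('tenant %s has no firewall policy; '
                                  'refusing create' % tenant_id)

      # Mirroring the transcript: the demo tenant owns nothing, so the
      # request should be rejected instead of going ACTIVE.
      policies = {'bbb6670febf14bbfbc498e4e04038dd9': ['rayra-policy']}
      req = {'tenant_id': 'b207eb16956040fd94a2469b56656f9d'}
      try:
          validate_firewall_create(req, policies)
      except PolicyMissing as e:
          print(e)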

  
  stack@hdp-001:~$ neutron firewall-create 0daf67b4-ccc1-41bd-b191-a1ddacd9af63 
--name fwrayrarouter2 --router 6d1e84d2-6d3a-4f03-94c3-d3e2caa7bf41 --tenant-id 
b207eb16956040fd94a2469b56656f9d
  Created a new firewall:
  ++--+
  | Field  | Value|
  ++--+
  | admin_state_up | True |
  | description|  |
  | firewall_policy_id | 0daf67b4-ccc1-41bd-b191-a1ddacd9af63 |
  | id | ad29ac47-57fb-4c58-8fef-bf4e74816afc |
  | name   | fwrayrarouter2   |
  | router_ids | 6d1e84d2-6d3a-4f03-94c3-d3e2caa7bf41 |
  | status | CREATED  |
  | tenant_id  | b207eb16956040fd94a2469b56656f9d |
  ++--+
  stack@hdp-001:~$ neutron firwall-list
  Unknown command [u'firwall-list']
  stack@hdp-001:~$ neutron firewall-list
  
+--++--+
  | id   | name   | firewall_policy_id  
 |
  
+--++--+
  | 8c8cc582-a2ad-41c7-8532-ac462c914a3e | fwtest1| 
0daf67b4-ccc1-41bd-b191-a1ddacd9af63 |
  | ad29ac47-57fb-4c58-8fef-bf4e74816afc | fwrayrarouter2 | 
0daf67b4-ccc1-41bd-b191-a1ddacd9af63 |
  
+--++--+
  stack@hdp-001:~$ neutron firewall-show ad29ac47-57fb-4c58-8fef-bf4e74816afc
  ++--+
  | Field  | Value|
  ++--+
  | admin_state_up | True |
  | description|  |
  | firewall_policy_id | 0daf67b4-ccc1-41bd-b191-a1ddacd9af63 |
  | id | ad29ac47-57fb-4c58-8fef-bf4e74816afc |
  | name   | fwrayrarouter2   |
  | router_ids | 6d1e84d2-6d3a-4f03-94c3-d3e2caa7bf41 |
  | status | ACTIVE   |
  | tenant_id  | b207eb16956040fd94a2469b56656f9d |
  ++--+
  stack@hdp-001:~$ neutron firewall-policy-show 
0daf67b4-ccc1-41bd-b191-a1ddacd9af63
  ++--+
  | Field  | Value|
  ++--+
  | audited| False|
  | description|  |
  | firewall_rules | 5916773d-ada5-436e-9055-cb115e3a5220 |
  | id | 0daf67b4-ccc1-41bd-b191-a1ddacd9af63 |
  | name   | rayra-policy |
  | shared | False|
  | tenant_id  | bbb6670febf14bbfbc498e4e04038dd9 |
  ++--+
  stack@hdp-001:~$
  stack@hdp-001:~$  neutron firewall-policy-list --tenant-id 
bbb6670febf14bbfbc498e4e04038dd9
  
+--+--++
  | id   | name | firewall_rules
 |
  
+--+--+--

[Yahoo-eng-team] [Bug 1584858] Re: ryu 4.2.1 breaks python 2.7 environments

2016-06-28 Thread huan
** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1584858

Title:
  ryu 4.2.1 breaks python 2.7 environments

Status in neutron:
  Fix Released

Bug description:
  ryu >= 4.1 is broken for Python 2.7 environments - the following commit 
requires ovs>=2.6.0.dev0 for all environments:
  https://github.com/osrg/ryu/commit/b8cb2d22aab1b761e9e2de2df1a93cc60ed60dfa

  Contrastingly, upper-constraints prevents ovs>2.5.0 in Python 2.7 
environments:
  
https://git.openstack.org/cgit/openstack/requirements/tree/upper-constraints.txt#n229

  This is showing up as a breakage in the XenServer Neutron CI
  (currently on internal testing before commenting externally):
  pkg_resources.ContextualVersionConflict: (ovs 2.5.0
  (/usr/local/lib/python2.7/dist-packages),
  Requirement.parse('ovs>=2.6.0.dev0'), set(['ryu']))
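
  The conflict can be reproduced locally with pkg_resources, which is
  what raises the ContextualVersionConflict above; a small sketch that
  only inspects installed package metadata:

      # Sketch: check whether the installed packages satisfy ryu's
      # declared requirements (e.g. ovs>=2.6.0.dev0 vs installed 2.5.0).
      import pkg_resources

      try:
          pkg_resources.require('ryu')
      except pkg_resources.VersionConflict as exc:
          print('dependency conflict: %s' % (exc,))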

  Example failure logs at:
  
http://dd6b71949550285df7dc-dda4e480e005aaa13ec303551d2d8155.r49.cf1.rackcdn.com/05/243105/11/2000/logs/screen-q-agt.txt.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1584858/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1596869] Re: APIv3 compatibility broken in Mitaka and Liberty

2016-06-28 Thread A.Ojea
I see my mistake now: I assumed that "default" in Liberty was the Name
field, but it is actually the ID; in Mitaka, following the docs,
"default" is the domain's name while the ID is a randomly generated
UUID.

Apologies for any inconvenience, and thank you for your help @stevemar
and @guang-yee.
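
Given that resolution, here is a hedged sketch of looking up the
default domain's real ID by name before authenticating, so the same
client code works on both releases; the endpoint, credentials, and
admin token are illustrative assumptions:

    # Sketch: resolve domain name -> ID, then authenticate by ID.
    import json
    import requests

    KEYSTONE = 'http://localhost:5000/v3'
    ADMIN_TOKEN = 'gAAAA...'  # assumed pre-obtained authorized token

    resp = requests.get(KEYSTONE + '/domains',
                        params={'name': 'default'},
                        headers={'X-Auth-Token': ADMIN_TOKEN})
    domain_id = resp.json()['domains'][0]['id']

    auth = {'auth': {'identity': {'methods': ['password'], 'password': {
        'user': {'name': 'admin',
                 'domain': {'id': domain_id},
                 'password': 'openstack'}}}}}
    r = requests.post(KEYSTONE + '/auth/tokens', data=json.dumps(auth),
                      headers={'Content-Type': 'application/json'})
    print(r.headers.get('X-Subject-Token'))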



** Changed in: keystone
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1596869

Title:
  APIv3 compatibility broken in Mitaka and Liberty

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  Current API documentation [1] uses the fields   "domain": { "id":
  "default" }, to select a domain.

  This call works in Liberty as you can see in the following snippet:

  curl -i   -H "Content-Type: application/json"   -d '
  { "auth": {
  "identity": {
    "methods": ["password"],
    "password": {
  "user": {
    "name": "admin",
    "domain": { "id": "default" },
    "password": "admin"
  }
    }
  },
  "scope": {
    "project": {
  "name": "admin",
  "domain": { "id": "default" }
    }
  }
    }
  }'   http://172.17.0.3:5000/v3/auth/tokens ; echo
  HTTP/1.1 201 Created
  X-Subject-Token: 8e861d59fb1847a388b27ab7150f2d15
  Vary: X-Auth-Token
  X-Distribution: Ubuntu
  Content-Type: application/json
  Content-Length: 2794
  X-Openstack-Request-Id: req-2fcb81ac-4adf-4d0d-85f9-41d355c0606d
  Date: Tue, 28 Jun 2016 08:59:42 GMT

  {"token": {"methods": ["password"], "roles": [{"id":
  "b1abb292e4af4ead9a1b62b4a6e39ba4", "name": "__member__"}, {"id":
  "f071d23c5131434e8823101f3b8e33db", "name": "admin"}], "expires_at":
  "2016-06-28T09:59:42.646127Z", "project": {"domain": {"id": "default",
  "name": "Default"}, "id": "890fc0394fe34024b62aab12fb335960", "name":
  "admin"}, "catalog": [{"..."}], "extras": {}, "user": {"domain":
  {"id": "default", "name": "Default"}, "id":
  "d1b7876ff28e4db29296797296daecfe", "name": "admin"}, "audit_ids":
  ["7p_bhw8tTvqAOjKRpkHE2Q"], "issued_at":
  "2016-06-28T08:59:42.646167Z"}}

  but it turns out that in Mitaka it fails if you use the id field
  with the name of the domain:

  curl -i   -H "Content-Type: application/json"   -d '
  { "auth": {
  "identity": {
    "methods": ["password"],
    "password": {
  "user": {
    "name": "admin",
    "domain": { "id": "default" },
    "password": "openstack"
  }
    }
  },
  "scope": {
    "project": {
  "name": "admin",
  "domain": { "id": "default" }
    }
  }
    }
  }'   http://localhost:5000/v3/auth/tokens ; echo
  HTTP/1.1 401 Unauthorized
  Date: Tue, 28 Jun 2016 09:01:04 GMT
  Server: Apache/2.4.7 (Ubuntu)
  Vary: X-Auth-Token
  X-Distribution: Ubuntu
  x-openstack-request-id: req-4898044b-25d4-4b9d-96c4-d823c0107cb0
  WWW-Authenticate: Keystone uri="http://localhost:5000";
  Content-Length: 114
  Content-Type: application/json

  {"error": {"message": "The request you have made requires
  authentication.", "code": 401, "title": "Unauthorized"}}

  in order to make it work, you need to use name instead of id:

  curl -i   -H "Content-Type: application/json"   -d '
  { "auth": {
  "identity": {
    "methods": ["password"],
    "password": {
  "user": {
    "name": "admin",
    "domain": { "name": "default" },
    "password": "openstack"
  }
    }
  },
  "scope": {
    "project": {
  "name": "admin",
  "domain": { "name": "default" }
    }
  }
    }
  }'   http://localhost:5000/v3/auth/tokens ; echo
  HTTP/1.1 201 Created
  Date: Tue, 28 Jun 2016 09:01:53 GMT
  Server: Apache/2.4.7 (Ubuntu)
  X-Subject-Token: 0c293d9ceeba4a9f8c1a9edba99a1b11
  Vary: X-Auth-Token
  X-Distribution: Ubuntu
  x-openstack-request-id: req-1a414584-472f-4b87-9981-a838e3df6f4a
  Content-Length: 4155
  Content-Type: application/json

  {"token": {"methods": ["password"], "roles": [{"id":
  "444fc66b35834eafb3936dca445b56de", "name": "admin"}], "expires_at":
  "2016-06-28T10:01:53.680623Z", "project": {"domain": {"id":
  "0a686f9a064c46eda176a8670d2af12e", "name": "default"}, "id":
  "7c34e27bfb53415daef0b1696886fec5", "name": "admin"}, "catalog":
  [{"...}], "user": {"domain": {"id":
  "0a686f9a064c46eda176a8670d2af12e", "name": "default"}, "id":
  "bcc79501b12948d1b48540bea231b89c", "name": "admin"}, "audit_ids":
  ["U-uBxUKqStWW557xSCmgKA"], "issued_at":
  "2016-06-28T09:01:53.680711Z"}}

  breaking compatibility

  [1]
  http://docs.openstack.org/developer/keystone/api_curl_examples.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1596869/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe