[Yahoo-eng-team] [Bug 1368661] Re: Unit tests sometimes fail because of stale pyc files

2017-05-17 Thread wangxiyuan
** Changed in: python-zaqarclient
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368661

Title:
  Unit tests sometimes fail because of stale pyc files

Status in congress:
  Fix Released
Status in Gnocchi:
  Invalid
Status in Ironic:
  Fix Released
Status in Magnum:
  Fix Released
Status in Mistral:
  Fix Released
Status in Monasca:
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in oslo.cache:
  Invalid
Status in oslo.concurrency:
  Invalid
Status in oslo.service:
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in python-congressclient:
  Fix Released
Status in python-cueclient:
  Fix Released
Status in Glance Client:
  In Progress
Status in python-heatclient:
  Fix Committed
Status in python-keystoneclient:
  Fix Released
Status in python-magnumclient:
  Fix Released
Status in python-mistralclient:
  Fix Released
Status in python-neutronclient:
  Fix Released
Status in Python client library for Sahara:
  Fix Released
Status in python-solumclient:
  Fix Released
Status in python-swiftclient:
  Fix Released
Status in python-troveclient:
  Fix Committed
Status in Python client library for Zaqar:
  Fix Released
Status in Solum:
  Fix Released
Status in OpenStack Object Storage (swift):
  Fix Released
Status in OpenStack DBaaS (Trove):
  Fix Released
Status in zaqar:
  Fix Released

Bug description:
  Because Python writes .pyc files during tox runs, certain changes in
  the tree, such as deleting files or switching branches, can cause
  spurious test errors from stale bytecode. This can be suppressed by
  setting PYTHONDONTWRITEBYTECODE=1 in tox.ini.
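
  As a reference, a minimal sketch of the tox.ini change (section
  layout varies per project):

    [testenv]
    # Stop Python from writing .pyc bytecode during test runs, so
    # deleted or renamed modules cannot leave stale bytecode behind.
    setenv =
        PYTHONDONTWRITEBYTECODE=1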

To manage notifications about this bug go to:
https://bugs.launchpad.net/congress/+bug/1368661/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1691340] Re: create default network show wrong

2017-05-17 Thread OpenStack Infra
** Changed in: neutron
   Status: Invalid => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1691340

Title:
  create default network show wrong

Status in neutron:
  In Progress

Bug description:
  When creating a network with "--default --internal", the response
  shows "is_default | True". When the same network is shown
  afterwards, is_default is in fact None.

  [root@localhost auto_allocate]# openstack network create ysm_test --default 
--internal
  +---+--+
  | Field | Value|
  +---+--+
  | admin_state_up| UP   |
  | availability_zone_hints   |  |
  | availability_zones|  |
  | created_at| 2017-05-17T05:43:55Z |
  | description   |  |
  | dns_domain| None |
  | id| d508fafa-25c7-4bd8-bfc4-25903f79aa53 |
  | ipv4_address_scope| None |
  | ipv6_address_scope| None |
  | is_default| True |
  | mtu   | 1450 |
  | name  | ysm_test |
  | port_security_enabled | True |
  | project_id| bca504c769234d4db32e05142428fd64 |
  | provider:network_type | vxlan|
  | provider:physical_network | None |
  | provider:segmentation_id  | 37   |
  | qos_policy_id | None |
  | revision_number   | 3|
  | router:external   | Internal |
  | segments  | None |
  | shared| False|
  | status| ACTIVE   |
  | subnets   |  |
  | updated_at| 2017-05-17T05:43:55Z |
  +---+--+
  [root@localhost auto_allocate]# openstack network show 
d508fafa-25c7-4bd8-bfc4-25903f79aa53
  +---+--+
  | Field | Value|
  +---+--+
  | admin_state_up| UP   |
  | availability_zone_hints   |  |
  | availability_zones|  |
  | created_at| 2017-05-17T05:43:55Z |
  | description   |  |
  | dns_domain| None |
  | id| d508fafa-25c7-4bd8-bfc4-25903f79aa53 |
  | ipv4_address_scope| None |
  | ipv6_address_scope| None |
  | is_default| None |
  | mtu   | 1450 |
  | name  | ysm_test |
  | port_security_enabled | True |
  | project_id| bca504c769234d4db32e05142428fd64 |
  | provider:network_type | vxlan|
  | provider:physical_network | None |
  | provider:segmentation_id  | 37   |
  | qos_policy_id | None |
  | revision_number   | 3|
  | router:external   | Internal |
  | segments  | None |
  | shared| False|
  | status| ACTIVE   |
  | subnets   |  |
  | updated_at| 2017-05-17T05:43:55Z |
  +---+--+
  [root@localhost auto_allocate]#

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1691340/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1577488] Re: [RFE]"Fast exit" for compute node egress flows when using DVR

2017-05-17 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/355062
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=fb2093c3655ecd15f48e841c0fc6f9ccb7697a34
Submitter: Jenkins
Branch: master

commit fb2093c3655ecd15f48e841c0fc6f9ccb7697a34
Author: Swaminathan Vasudevan 
Date:   Fri Aug 12 11:05:46 2016 -0700

DVR: Add forwarding routes based on address_scopes

When we create the agent gateway port on all the nodes irrespective
of the floating IPs, we can use that agent gateway port to forward
traffic in and out of the nodes when the address scopes match, since
SNAT functionality is not needed in that case.

If a gateway is configured and the router has internal ports that
belong to the same address scope, there is no need to add the
redirect rules. At the same time, we should also add a static route
in the fip namespace for every interface connected to the router
that belongs to the same address scope.

Change-Id: Iaf6d3b38b1fb45772cf0b88706586c057ddb0230
Closes-Bug: #1577488


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1577488

Title:
  [RFE]"Fast exit" for compute node egress flows when using DVR

Status in neutron:
  Fix Released

Bug description:
  In its current state, distributed north-south flows with DVR can only
  be achieved when a floating IP is bound to a fixed IP. Without a
  floating IP associated, the north-south flows are steered through the
  centralized SNAT node, even if you are directly routing the tenant
  network without any SNAT. When DVR is combined with either BGP or IPv6
  proxy neighbor discovery, it becomes possible to route traffic
  directly to a fixed IP by advertising the FIP gateway port on a
  compute as the next-hop.  For packets egressing the compute node, we
  need the ability to bypass re-direction of packets to the central SNAT
  node in cases where no floating IP is associated with a fixed IP. By
  enabling this data flow on egress from a compute node, it leaves the
  operator with the option of not running any SNAT nodes. Distributed
  SNAT is not a consideration as the targeted use cases involve
  scenarios where the operator does not want to use any SNAT.

  It is important to note that the use cases this would support are use
  cases where the operator has no need for SNAT. In the scenarios that
  would be supported by this RFE, the operator intends to run a routing
  protocol or IPv6 proxy neighbor discovery to directly route the fixed
  IP's of their tenants. It is also important to note that this RFE does
  not specify what technology the operator would use for routing their
  north-south DVR flows. The intent is simply to enable operators who
  have the infrastructure in place to handle north-south flows in a
  distributed fashion for their tenants.

  To enable this functionality, we have the following options:

  1. The semantics surrounding the "enable_snat" flag when set to
  "False" on a distributed router could use some refinement. We could
  use this flag to enable SNAT node bypass (fast-exit). This approach
  has the benefit of cleaning up some semantics that seem loosely
  defined, and allows us to piggyback on an existing attribute without
  extending the model. The drawback is that this field is exposed to
  tenants who most likely are not aware of how their network traffic is
  routed by the provider network. Tenants probably don't need to be made
  aware that they are receiving "fast exit" treatment through the API, and it may
  not make sense to place the burden on them to set this flag
  appropriately.

  2. Add a new L3 agent mode called "dvr_fast_exit". When the L3 agent
  is run in this mode, all router instances hosted on an L3 agent will
  send egress traffic directly out through the FIP namespace and out to
  the gateway, completely disabling SNAT support on all routers hosted
  on the agent. This approach involves a simple change to skip
  programming the "steal" rule that sends traffic to the SNAT node
  when run in this mode (see the illustrative config sketch after this
  list). This is likely the least invasive change, but it has some
  drawbacks: switching to this flag requires an agent restart, and all
  agents should be run in this mode. This approach is well suited to
  green-field deployments, but doesn't work well with brown-field
  deployments.

  3. There could be a third option I haven't considered yet. It could be
  hashed out in a spec.
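
  If option 2 were adopted, the L3 agent configuration might look like
  this (purely illustrative; "dvr_fast_exit" is only a proposed value,
  not an existing mode):

    # /etc/neutron/l3_agent.ini (illustrative sketch)
    [DEFAULT]
    # Proposed mode from option 2: skip the SNAT "steal" rule so
    # egress traffic exits directly via the FIP namespace.
    agent_mode = dvr_fast_exit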

  In addition to the work discussed above, we need to be able to
  instantiate the FIP namespace and gateway port immediately when a
  router gateway is created instead of waiting for the first floating IP
  association on a node.

  Related WIP patches
  - https://review.openstack.org/#/c/297468/
  - https://review.openstack.org/#/c/283757/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1577488/+subscriptions

[Yahoo-eng-team] [Bug 1691615] [NEW] Spurious "Received unexpected event network-vif-unplugged" warnings in n-cpu logs

2017-05-17 Thread Matt Riedemann
Public bug reported:

In a normal tempest dsvm full CI job run we see at least 19 occurrences
of this warning in the n-cpu logs:

http://logs.openstack.org/64/458564/2/check/gate-tempest-dsvm-neutron-
full-ubuntu-
xenial/e5f9b92/logs/screen-n-cpu.txt.gz?level=TRACE#_May_07_20_08_38_082712

May 07 20:08:38.082712 ubuntu-xenial-osic-cloud1-s3500-8755125 nova-
compute[23037]: WARNING nova.compute.manager [req-22e142af-e20a-
4cd5-b9c8-6f757330f225 service nova] [instance: 705953fc-02f1-4c51-a6eb-
627adff91d1b] Received unexpected event network-vif-unplugged-653e2f64
-82ec-45fa-bfc6-c77293220be3 for instance

If the instance is being deleted and we're just racing with neutron
sending the network-vif-unplugged event before the instance is actually
deleted, then it's not really unexpected and we shouldn't log a warning
for this. We can probably check the instance task_state to see if it's
being deleted and adjust the log level accordingly.
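
A minimal sketch of the kind of check suggested above (hypothetical
placement; the real handler lives in nova.compute.manager):

    from oslo_log import log as logging
    from nova.compute import task_states

    LOG = logging.getLogger(__name__)

    def _log_vif_unplugged_event(instance, event):
        # Sketch: a delete-triggered unplug is expected, so log it
        # at debug level instead of warning.
        if instance.task_state == task_states.DELETING:
            LOG.debug('Received event %s for instance being deleted',
                      event.key, instance=instance)
        else:
            LOG.warning('Received unexpected event %s for instance',
                        event.key, instance=instance)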

** Affects: nova
 Importance: Low
 Assignee: Matt Riedemann (mriedem)
 Status: Triaged

** Changed in: nova
   Importance: Undecided => Low

** Changed in: nova
 Assignee: (unassigned) => Matt Riedemann (mriedem)

** Changed in: nova
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1691615

Title:
  Spurious "Received unexpected event network-vif-unplugged" warnings in
  n-cpu logs

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  In a normal tempest dsvm full CI job run we see at least 19
  occurrences of this warning in the n-cpu logs:

  http://logs.openstack.org/64/458564/2/check/gate-tempest-dsvm-neutron-
  full-ubuntu-
  xenial/e5f9b92/logs/screen-n-cpu.txt.gz?level=TRACE#_May_07_20_08_38_082712

  May 07 20:08:38.082712 ubuntu-xenial-osic-cloud1-s3500-8755125 nova-
  compute[23037]: WARNING nova.compute.manager [req-22e142af-e20a-
  4cd5-b9c8-6f757330f225 service nova] [instance: 705953fc-02f1-4c51
  -a6eb-627adff91d1b] Received unexpected event network-vif-unplugged-
  653e2f64-82ec-45fa-bfc6-c77293220be3 for instance

  If the instance is being deleted and we're just racing with neutron
  sending the network-vif-unplugged event before the instance is
  actually deleted, then it's not really unexpected and we shouldn't log
  a warning for this. We can probably check the instance task_state to
  see if it's being deleted and adjust the log level accordingly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1691615/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1685761] Re: upload a 7.1G images TypeError: Cannot read property 'data' of undefined create failed

2017-05-17 Thread jeck
** Project changed: kolla-ansible => horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1685761

Title:
  upload a 7.1G images TypeError: Cannot read property 'data' of
  undefined create failed

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  kolla stable ocata
  Uploading a cirros image works, but uploading a big image.qcow2 (7.1G winserver2008) fails.

  830e26e34b64.js:652 JQMIGRATE: Logging is active
  2830e26e34b64.js:1336 The initScope() method is deprecated. Invocation of it 
will stop in Queens.
  (anonymous) @ 830e26e34b64.js:1336
  setActionScope @ cac59396880b.js:324
  forEach @ 830e26e34b64.js:703
  initActions @ cac59396880b.js:324
  onResourceTypeNameChange @ cac59396880b.js:583
  $digest @ 830e26e34b64.js:1512
  $apply @ 830e26e34b64.js:1517
  bootstrapApply @ 830e26e34b64.js:792
  invoke @ 830e26e34b64.js:938
  doBootstrap @ 830e26e34b64.js:792
  bootstrap @ 830e26e34b64.js:793
  angularInit @ 830e26e34b64.js:789
  (anonymous) @ 830e26e34b64.js:1846
  fire @ 830e26e34b64.js:208
  fireWith @ 830e26e34b64.js:213
  ready @ 830e26e34b64.js:32
  completed @ 830e26e34b64.js:14
  830e26e34b64.js:1336 The initScope() method is deprecated. Invocation of it 
will stop in Queens.
  (anonymous) @ 830e26e34b64.js:1336
  setActionScope @ cac59396880b.js:324
  forEach @ 830e26e34b64.js:703
  initActions @ cac59396880b.js:324
  onResourceTypeNameChange @ cac59396880b.js:583
  $digest @ 830e26e34b64.js:1512
  $apply @ 830e26e34b64.js:1517
  bootstrapApply @ 830e26e34b64.js:792
  invoke @ 830e26e34b64.js:938
  doBootstrap @ 830e26e34b64.js:792
  bootstrap @ 830e26e34b64.js:793
  angularInit @ 830e26e34b64.js:789
  (anonymous) @ 830e26e34b64.js:1846
  fire @ 830e26e34b64.js:208
  fireWith @ 830e26e34b64.js:213
  ready @ 830e26e34b64.js:32
  completed @ 830e26e34b64.js:14
  830e26e34b64.js:1336 The initScope() method is deprecated. Invocation of it 
will stop in Queens.
  (anonymous) @ 830e26e34b64.js:1336
  setActionScope @ cac59396880b.js:324
  forEach @ 830e26e34b64.js:703
  initActions @ cac59396880b.js:324
  onResourceTypeNameChange @ cac59396880b.js:583
  $digest @ 830e26e34b64.js:1512
  $apply @ 830e26e34b64.js:1517
  bootstrapApply @ 830e26e34b64.js:792
  invoke @ 830e26e34b64.js:938
  doBootstrap @ 830e26e34b64.js:792
  bootstrap @ 830e26e34b64.js:793
  angularInit @ 830e26e34b64.js:789
  (anonymous) @ 830e26e34b64.js:1846
  fire @ 830e26e34b64.js:208
  fireWith @ 830e26e34b64.js:213
  ready @ 830e26e34b64.js:32
  completed @ 830e26e34b64.js:14
  830e26e34b64.js:1336 The "scope" param to modal() is deprecated.Handling of 
it will stop in Queens.
  (anonymous) @ 830e26e34b64.js:1336
  modal @ cac59396880b.js:566
  perform @ cac59396880b.js:662
  genPassThroughCallback @ cac59396880b.js:410
  fn @ VM877:4
  expensiveCheckFn @ 830e26e34b64.js:1447
  callback @ 830e26e34b64.js:1747
  $eval @ 830e26e34b64.js:1516
  $apply @ 830e26e34b64.js:1517
  (anonymous) @ 830e26e34b64.js:1747
  dispatch @ 830e26e34b64.js:332
  elemData.handle @ 830e26e34b64.js:305
  830e26e34b64.js:654 JQMIGRATE: jQuery.fn.attr('selected') may use property 
instead of attribute
  migrateWarn @ 830e26e34b64.js:654
  jQuery.attr @ 830e26e34b64.js:663
  access @ 830e26e34b64.js:59
  attr @ 830e26e34b64.js:262
  renderUnknownOption @ 830e26e34b64.js:1798
  writeNgOptionsValue @ 830e26e34b64.js:1799
  ngModelCtrl.$render @ 830e26e34b64.js:1838
  ngModelWatch @ 830e26e34b64.js:1773
  $digest @ 830e26e34b64.js:1512
  $apply @ 830e26e34b64.js:1517
  (anonymous) @ 830e26e34b64.js:1747
  dispatch @ 830e26e34b64.js:332
  elemData.handle @ 830e26e34b64.js:305
  830e26e34b64.js:654 console.trace
  migrateWarn @ 830e26e34b64.js:654
  jQuery.attr @ 830e26e34b64.js:663
  access @ 830e26e34b64.js:59
  attr @ 830e26e34b64.js:262
  renderUnknownOption @ 830e26e34b64.js:1798
  writeNgOptionsValue @ 830e26e34b64.js:1799
  ngModelCtrl.$render @ 830e26e34b64.js:1838
  ngModelWatch @ 830e26e34b64.js:1773
  $digest @ 830e26e34b64.js:1512
  $apply @ 830e26e34b64.js:1517
  (anonymous) @ 830e26e34b64.js:1747
  dispatch @ 830e26e34b64.js:332
  elemData.handle @ 830e26e34b64.js:305
  830e26e34b64.js:654 JQMIGRATE: jQuery.fn.attr('value', val) no longer sets 
properties
  migrateWarn @ 830e26e34b64.js:654
  set @ 830e26e34b64.js:667
  attr @ 830e26e34b64.js:285
  jQuery.attr @ 830e26e34b64.js:664
  access @ 830e26e34b64.js:59
  attr @ 830e26e34b64.js:262
  $set @ 830e26e34b64.js:1033
  (anonymous) @ 830e26e34b64.js:1130
  forEach @ 830e26e34b64.js:703
  mergeTemplateAttributes @ 830e26e34b64.js:1129
  (anonymous) @ 830e26e34b64.js:1134
  processQueue @ 830e26e34b64.js:1469
  (anonymous) @ 830e26e34b64.js:1470
  $eval @ 830e26e34b64.js:1516
  $digest @ 830e26e34b64.js:1510
  $apply @ 830e26e34b64.js:1517
  done @ 830e26e34b64.js:1244
  completeRequest @ 830e26e34b64.js:1259
  requestLoaded @ 830e26e34b64.js:1253
  

[Yahoo-eng-team] [Bug 1685761] [NEW] upload a 7.1G images TypeError: Cannot read property 'data' of undefined create failed

2017-05-17 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

kolla stable ocata
Uploading a cirros image works, but uploading a big image.qcow2 (7.1G winserver2008) fails.

830e26e34b64.js:652 JQMIGRATE: Logging is active
2830e26e34b64.js:1336 The initScope() method is deprecated. Invocation of it 
will stop in Queens.
(anonymous) @ 830e26e34b64.js:1336
setActionScope @ cac59396880b.js:324
forEach @ 830e26e34b64.js:703
initActions @ cac59396880b.js:324
onResourceTypeNameChange @ cac59396880b.js:583
$digest @ 830e26e34b64.js:1512
$apply @ 830e26e34b64.js:1517
bootstrapApply @ 830e26e34b64.js:792
invoke @ 830e26e34b64.js:938
doBootstrap @ 830e26e34b64.js:792
bootstrap @ 830e26e34b64.js:793
angularInit @ 830e26e34b64.js:789
(anonymous) @ 830e26e34b64.js:1846
fire @ 830e26e34b64.js:208
fireWith @ 830e26e34b64.js:213
ready @ 830e26e34b64.js:32
completed @ 830e26e34b64.js:14
830e26e34b64.js:1336 The initScope() method is deprecated. Invocation of it 
will stop in Queens.
(anonymous) @ 830e26e34b64.js:1336
setActionScope @ cac59396880b.js:324
forEach @ 830e26e34b64.js:703
initActions @ cac59396880b.js:324
onResourceTypeNameChange @ cac59396880b.js:583
$digest @ 830e26e34b64.js:1512
$apply @ 830e26e34b64.js:1517
bootstrapApply @ 830e26e34b64.js:792
invoke @ 830e26e34b64.js:938
doBootstrap @ 830e26e34b64.js:792
bootstrap @ 830e26e34b64.js:793
angularInit @ 830e26e34b64.js:789
(anonymous) @ 830e26e34b64.js:1846
fire @ 830e26e34b64.js:208
fireWith @ 830e26e34b64.js:213
ready @ 830e26e34b64.js:32
completed @ 830e26e34b64.js:14
830e26e34b64.js:1336 The initScope() method is deprecated. Invocation of it 
will stop in Queens.
(anonymous) @ 830e26e34b64.js:1336
setActionScope @ cac59396880b.js:324
forEach @ 830e26e34b64.js:703
initActions @ cac59396880b.js:324
onResourceTypeNameChange @ cac59396880b.js:583
$digest @ 830e26e34b64.js:1512
$apply @ 830e26e34b64.js:1517
bootstrapApply @ 830e26e34b64.js:792
invoke @ 830e26e34b64.js:938
doBootstrap @ 830e26e34b64.js:792
bootstrap @ 830e26e34b64.js:793
angularInit @ 830e26e34b64.js:789
(anonymous) @ 830e26e34b64.js:1846
fire @ 830e26e34b64.js:208
fireWith @ 830e26e34b64.js:213
ready @ 830e26e34b64.js:32
completed @ 830e26e34b64.js:14
830e26e34b64.js:1336 The "scope" param to modal() is deprecated.Handling of it 
will stop in Queens.
(anonymous) @ 830e26e34b64.js:1336
modal @ cac59396880b.js:566
perform @ cac59396880b.js:662
genPassThroughCallback @ cac59396880b.js:410
fn @ VM877:4
expensiveCheckFn @ 830e26e34b64.js:1447
callback @ 830e26e34b64.js:1747
$eval @ 830e26e34b64.js:1516
$apply @ 830e26e34b64.js:1517
(anonymous) @ 830e26e34b64.js:1747
dispatch @ 830e26e34b64.js:332
elemData.handle @ 830e26e34b64.js:305
830e26e34b64.js:654 JQMIGRATE: jQuery.fn.attr('selected') may use property 
instead of attribute
migrateWarn @ 830e26e34b64.js:654
jQuery.attr @ 830e26e34b64.js:663
access @ 830e26e34b64.js:59
attr @ 830e26e34b64.js:262
renderUnknownOption @ 830e26e34b64.js:1798
writeNgOptionsValue @ 830e26e34b64.js:1799
ngModelCtrl.$render @ 830e26e34b64.js:1838
ngModelWatch @ 830e26e34b64.js:1773
$digest @ 830e26e34b64.js:1512
$apply @ 830e26e34b64.js:1517
(anonymous) @ 830e26e34b64.js:1747
dispatch @ 830e26e34b64.js:332
elemData.handle @ 830e26e34b64.js:305
830e26e34b64.js:654 console.trace
migrateWarn @ 830e26e34b64.js:654
jQuery.attr @ 830e26e34b64.js:663
access @ 830e26e34b64.js:59
attr @ 830e26e34b64.js:262
renderUnknownOption @ 830e26e34b64.js:1798
writeNgOptionsValue @ 830e26e34b64.js:1799
ngModelCtrl.$render @ 830e26e34b64.js:1838
ngModelWatch @ 830e26e34b64.js:1773
$digest @ 830e26e34b64.js:1512
$apply @ 830e26e34b64.js:1517
(anonymous) @ 830e26e34b64.js:1747
dispatch @ 830e26e34b64.js:332
elemData.handle @ 830e26e34b64.js:305
830e26e34b64.js:654 JQMIGRATE: jQuery.fn.attr('value', val) no longer sets 
properties
migrateWarn @ 830e26e34b64.js:654
set @ 830e26e34b64.js:667
attr @ 830e26e34b64.js:285
jQuery.attr @ 830e26e34b64.js:664
access @ 830e26e34b64.js:59
attr @ 830e26e34b64.js:262
$set @ 830e26e34b64.js:1033
(anonymous) @ 830e26e34b64.js:1130
forEach @ 830e26e34b64.js:703
mergeTemplateAttributes @ 830e26e34b64.js:1129
(anonymous) @ 830e26e34b64.js:1134
processQueue @ 830e26e34b64.js:1469
(anonymous) @ 830e26e34b64.js:1470
$eval @ 830e26e34b64.js:1516
$digest @ 830e26e34b64.js:1510
$apply @ 830e26e34b64.js:1517
done @ 830e26e34b64.js:1244
completeRequest @ 830e26e34b64.js:1259
requestLoaded @ 830e26e34b64.js:1253
830e26e34b64.js:654 console.trace
migrateWarn @ 830e26e34b64.js:654
set @ 830e26e34b64.js:667
attr @ 830e26e34b64.js:285
jQuery.attr @ 830e26e34b64.js:664
access @ 830e26e34b64.js:59
attr @ 830e26e34b64.js:262
$set @ 830e26e34b64.js:1033
(anonymous) @ 830e26e34b64.js:1130
forEach @ 830e26e34b64.js:703
mergeTemplateAttributes @ 830e26e34b64.js:1129
(anonymous) @ 830e26e34b64.js:1134
processQueue @ 830e26e34b64.js:1469
(anonymous) @ 830e26e34b64.js:1470
$eval @ 830e26e34b64.js:1516
$digest @ 830e26e34b64.js:1510
$apply @ 830e26e34b64.js:1517
done @ 830e26e34b64.js:1244
completeRequest @ 

[Yahoo-eng-team] [Bug 1691602] Re: live migration generates several network-changed events which lock up refreshing the nw info cache

2017-05-17 Thread Matt Riedemann
** Tags added: neutron

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => Matt Riedemann (mriedem)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1691602

Title:
  live migration generates several network-changed events which lock up
  refreshing the nw info cache

Status in neutron:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  Chris Friesen has reported that in Newton with a live migration that
  has ~16 ports per instance, the "network-changed" events generated
  from neutron when the vifs are unplugged from the source host can
  effectively block the network info cache refresh that's called at the
  end of the live migration operation. Details are in the IRC logs:

  http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-
  nova.2017-05-17.log.html#t2017-05-17T22:50:31

  But this stands out:

  cfriesen: mriedem: so it looks like _build_network_info_model()
  costs about 200ms plus about 125ms per port since we query each port
  separately from neutron, and the refresh_cache lock is held the whole
  time

  In Nova the 'network-changed' event is handled generically because
  there is no port id in the event, so nova just refreshes the entire nw
  info cache on the instance - which can be expensive and redundant
  since it's doing a lot of queries to Neutron to build up information
  about ports, fixed IPs, floating IPs, subnets and networks, and
  Neutron doesn't have bulk query APIs or allow OR filters in the API
  for bulk queries on things like floating IPs.

  
https://github.com/openstack/nova/blob/8d492c76d53f3fcfacdd945a277446bdfe6797b0/nova/compute/manager.py#L6854

  Looking in neutron's code that sends the network-changed event, there
  is a port in scope, it's just not sent like for network-vif-deleted
  events.

  We should be able to scope the network-changed event to a specific
  port on the neutron side and check for that on the nova side so we
  don't have to refresh the entire network info cache, but just the vif
  that was updated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1691602/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1691602] [NEW] live migration generates several network-changed events which lock up refreshing the nw info cache

2017-05-17 Thread Matt Riedemann
Public bug reported:

Chris Friesen has reported that in Newton with a live migration that has
~16 ports per instance, the "network-changed" events generated from
neutron when the vifs are unplugged from the source host can effectively
block the network info cache refresh that's called at the end of the
live migration operation. Details are in the IRC logs:

http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-
nova.2017-05-17.log.html#t2017-05-17T22:50:31

But this stands out:

cfriesen: mriedem: so it looks like _build_network_info_model()
costs about 200ms plus about 125ms per port since we query each port
separately from neutron, and the refresh_cache lock is held the whole
time

In Nova the 'network-changed' event is handled generically because there
is no port id in the event, so nova just refreshes the entire nw info
cache on the instance - which can be expensive and redundant since it's
doing a lot of queries to Neutron to build up information about ports,
fixed IPs, floating IPs, subnets and networks, and Neutron doesn't have
bulk query APIs or allow OR filters in the API for bulk queries on
things like floating IPs.

https://github.com/openstack/nova/blob/8d492c76d53f3fcfacdd945a277446bdfe6797b0/nova/compute/manager.py#L6854

Looking in neutron's code that sends the network-changed event, there is
a port in scope, it's just not sent like for network-vif-deleted events.

We should be able to scope the network-changed event to a specific port
on the neutron side and check for that on the nova side so we don't have
to refresh the entire network info cache, but just the vif that was
updated.
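
A rough sketch of how the nova side could narrow the refresh if
neutron included the port id in the event (hypothetical names;
refresh_cache_for_port does not exist today):

    # Sketch: refresh only the affected VIF when the event carries a
    # port id, instead of rebuilding the entire info cache.
    def _handle_network_changed(self, context, instance, event):
        if event.tag:  # assumed: neutron sends the port id as the tag
            self.network_api.refresh_cache_for_port(
                context, instance, port_id=event.tag)
        else:
            # Old-style event with no port id: fall back to the
            # existing full cache refresh.
            self.network_api.get_instance_nw_info(context, instance)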

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1691602

Title:
  live migration generates several network-changed events which lock up
  refreshing the nw info cache

Status in OpenStack Compute (nova):
  New

Bug description:
  Chris Friesen has reported that in Newton with a live migration that
  has ~16 ports per instance, the "network-changed" events generated
  from neutron when the vifs are unplugged from the source host can
  effectively block the network info cache refresh that's called at the
  end of the live migration operation. Details are in the IRC logs:

  http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-
  nova.2017-05-17.log.html#t2017-05-17T22:50:31

  But this stands out:

  cfriesen: mriedem: so it looks like _build_network_info_model()
  costs about 200ms plus about 125ms per port since we query each port
  separately from neutron, and the refresh_cache lock is held the whole
  time

  In Nova the 'network-changed' event is handled generically because
  there is no port id in the event, so nova just refreshes the entire nw
  info cache on the instance - which can be expensive and redundant
  since it's doing a lot of queries to Neutron to build up information
  about ports, fixed IPs, floating IPs, subnets and networks, and
  Neutron doesn't have bulk query APIs or allow OR filters in the API
  for bulk queries on things like floating IPs.

  
https://github.com/openstack/nova/blob/8d492c76d53f3fcfacdd945a277446bdfe6797b0/nova/compute/manager.py#L6854

  Looking in neutron's code that sends the network-changed event, there
  is a port in scope, it's just not sent like for network-vif-deleted
  events.

  We should be able to scope the network-changed event to a specific
  port on the neutron side and check for that on the nova side so we
  don't have to refresh the entire network info cache, but just the vif
  that was updated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1691602/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1691545] Re: Significant increase in DB connections with cells

2017-05-17 Thread Matt Riedemann
** Also affects: nova/newton
   Importance: Undecided
   Status: New

** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Changed in: nova/newton
   Status: New => Confirmed

** Changed in: nova/ocata
   Importance: Undecided => High

** Changed in: nova/ocata
   Status: New => Confirmed

** Changed in: nova/newton
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1691545

Title:
  Significant increase in DB connections with cells

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) newton series:
  Confirmed
Status in OpenStack Compute (nova) ocata series:
  Confirmed

Bug description:
  Recently in the gate we have seen a trace [1] on some work-in-progress 
patches:
  
OperationalError: (pymysql.err.OperationalError)
  (1040, u'Too many connections')
  
  and at least one operator has reported that the number of database 
connections increased significantly going from Mitaka to Newton.
  
  It was suspected that the increase was caused by creating new oslo.db 
transaction context managers on-the-fly when switching database connections for 
cells. Comparing the dstat --tcp output of runs of the 
gate-tempest-dsvm-neutron-full-ubuntu-xenial job with and without caching of 
the database connections showed a difference of 445 active TCP connections and 
1495 active TCP connections, respectively [2].

  [1] 
http://logs.openstack.org/37/458537/19/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/e290ec2/logs/screen-n-api.txt.gz?level=TRACE#_May_11_20_08_20_211256
  [2] 
https://docs.google.com/spreadsheets/d/1DIfFfX3kaA_SRoCM-aO7BN4IBEShChXLztOBFeKryt4/edit?usp=sharing

  Full trace:

  May 11 20:08:20.190181 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions [req-04cbf0fb-d31a-48fb-bc1a-8572b4fe1dfb 
tempest-AttachVolumeShelveTestJSON-2114880401 
tempest-AttachVolumeShelveTestJSON-2114880401] Unexpected exception in API 
method
  May 11 20:08:20.194490 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions Traceback (most recent call last):
  May 11 20:08:20.194634 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/openstack/extensions.py", line 336, in wrapped
  May 11 20:08:20.194768 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions return f(*args, **kwargs)
  May 11 20:08:20.194899 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/openstack/compute/servers.py", line 439, in show
  May 11 20:08:20.195035 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions instance = self._get_server(context, req, id, 
is_detail=True)
  May 11 20:08:20.195171 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/openstack/compute/servers.py", line 344, in 
_get_server
  May 11 20:08:20.195313 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions expected_attrs=expected_attrs)
  May 11 20:08:20.195443 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/openstack/common.py", line 479, in get_instance
  May 11 20:08:20.195600 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions expected_attrs=expected_attrs)
  May 11 20:08:20.195806 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions   File "/opt/stack/new/nova/nova/compute/api.py", 
line 2468, in get
  May 11 20:08:20.196044 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions expected_attrs)
  May 11 20:08:20.196207 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions   File "/opt/stack/new/nova/nova/compute/api.py", 
line 2428, in _get_instance
  May 11 20:08:20.196350 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions context, instance_uuid, 
expected_attrs=expected_attrs)
  May 11 20:08:20.196488 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 
184, in wrapper
  May 11 20:08:20.196629 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions result = fn(cls, context, *args, **kwargs)
  May 11 20:08:20.196760 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/objects/instance.py", line 463, in get_by_uuid
  May 11 20:08:20.196889 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 

[Yahoo-eng-team] [Bug 1626343] Re: Status dropdown list label in "Available" is unlocalized

2017-05-17 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/461453
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=74cfb5d9afc56b2086a532e8737872e30d40453f
Submitter: Jenkins
Branch: master

commit 74cfb5d9afc56b2086a532e8737872e30d40453f
Author: Julie Gravel 
Date:   Mon May 1 09:51:23 2017 -0700

Fix Status dropdown initial value

The value of Status dropdown on Admin > Volume > Volumes > Update Volume
Status and Admin > Volume > Snapshots > Update Status forms were initially
set with the raw data instead of the localized version of the value. This
fix changes the initial value to the localized version and also updates
localized statuses to use values from
project.volumes.tables.VolumesTableBase.STATUS_DISPLAY_CHOICES.

Closes-Bug: #1626343
Change-Id: I1ac58e2eb7b4a8d4894280285a824d280fafd531
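
For context, the localized statuses referenced above follow this
pattern (a simplified sketch in the style of the horizon code, not
the exact committed change):

    from django.utils.translation import pgettext_lazy

    # Sketch: map raw API status values to translatable labels.
    STATUS_DISPLAY_CHOICES = (
        ("available", pgettext_lazy("Current status of a Volume",
                                    u"Available")),
        ("in-use", pgettext_lazy("Current status of a Volume",
                                 u"In-use")),
        ("error", pgettext_lazy("Current status of a Volume",
                                u"Error")),
    )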


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1626343

Title:
  Status dropdown list label in  "Available" is unlocalized

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Admin > Volume > Change Volume Status

  Status dropdown list label "Available" is unlocalized.
  I am not sure whether this is referring to "Available status options" or the 
status "Available" but either way, it should be localized.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1626343/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1687139] Re: dsid_missing_source of datasource OpenStack

2017-05-17 Thread Scott Moser
Hi, I suspect that new instances launched with cloud-init 
0.7.9-90-g61eb03fe-0ubuntu1~16.04.1 will have the fix you're after.
Unfortunately, the warnings don't get cleaned up unless you silence them as 
suggested (or rm -Rf /var/lib/cloud or /var/lib/cloud/instance/warnings/).

If you do not think that the bug is actually fixed, please re-open.


** Changed in: cloud-init
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1687139

Title:
  dsid_missing_source of datasource OpenStack

Status in cloud-init:
  Fix Released

Bug description:
  A new feature in cloud-init identified possible datasources for
  this system as:
    ['Ec2', 'None']
  However, the datasource used was: OpenStack

  In the future, cloud-init will only attempt to use datasources that
  are identified or specifically configured.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1687139/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1686514] Re: Azure: cloud-init does not handle reformatting GPT partition ephemeral disks

2017-05-17 Thread Scott Moser
** Also affects: cloud-init (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Yakkety)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Artful)
   Importance: High
 Assignee: Scott Moser (smoser)
   Status: In Progress

** Also affects: cloud-init (Ubuntu Zesty)
   Importance: Undecided
   Status: New

** Changed in: cloud-init (Ubuntu Xenial)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu Yakkety)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu Zesty)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu Xenial)
   Importance: Undecided => Medium

** Changed in: cloud-init (Ubuntu Yakkety)
   Importance: Undecided => Medium

** Changed in: cloud-init (Ubuntu Zesty)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1686514

Title:
  Azure: cloud-init does not handle reformatting GPT partition ephemeral
  disks

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  In Progress
Status in cloud-init source package in Xenial:
  Confirmed
Status in cloud-init source package in Yakkety:
  Confirmed
Status in cloud-init source package in Zesty:
  Confirmed
Status in cloud-init source package in Artful:
  In Progress

Bug description:
  Some Azure instances such as L32 or G5 have very large ephemeral disks
  which are partitioned via GPT vs. smaller ephemeral disks that have
  dos disklabels.

  At first boot of an instance the ephemeral disk is prepared and
  formatted properly. But if the instance is deallocated and then
  reallocated (thus receiving a new ephemeral disk) then cloud-init does
  not handle reformatting GPT partition ephemeral disks properly.
  Therefore /mnt is never mounted again.

  Test cases:
   1. Deploy an L32(s) VM on Azure
   2. Log in and ensure that the ephemeral disk is formatted and mounted to /mnt
   3. Via the portal you can "Redeploy" the VM to a new Azure Host (or 
alternatively stop and deallocate the VM for some time, and then 
restart/reallocate the VM).

  Expected Results:
   - After reallocation we expect the ephemeral disk to be formatted and 
mounted to /mnt.

  Actual Results:
   - After reallocation /mnt is not mounted and there are errors in the 
cloud-init log.

  *This was tested on Ubuntu 16.04 - but may affect other releases.

  Note: This bug a regression from previous cloud-init releases. GPT
  support for Azure ephemeral disk handling was added to cloud-init via
  this bug: https://bugs.launchpad.net/ubuntu/+source/cloud-
  init/+bug/1422919.

  Related bugs:
   * bug 1691489: fstab entries written by cloud-config may not be mounted

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1686514/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1691551] [NEW] warnings are still printed after user touches file

2017-05-17 Thread Scott Moser
Public bug reported:

When cloud-init shows a warning with Z99-cloudinit-warnings.sh (such as
on login after a ds-identify error), it suggests that you can:

Disable the warnings above by:
  touch /home/ubuntu/.cloud-warnings.skip
or
  touch /var/lib/cloud/instance/warnings/.skip

The second file (/var/lib/cloud/) is not honored.
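
For reference, a sketch of the check the login script needs in order
to honor both paths (illustrative shell, not the exact contents of
Z99-cloudinit-warnings.sh):

    # Illustrative: suppress warnings if either skip file exists.
    [ -e "$HOME/.cloud-warnings.skip" ] && return 0
    [ -e /var/lib/cloud/instance/warnings/.skip ] && return 0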

The easiest recreate is:
$ name="x1"
$ lxc launch ubuntu-daily:xenial $name
$ sleep 10
$ lxc exec $name -- sh -c 'd=/var/lib/cloud/instance/warnings/; mkdir -p $d; 
echo "WARNING WARNING FOO" > "$d/warn-foo"'

## see the warning is there.
$ lxc exec $name -- bash --login

** Changed in: cloud-init
   Status: New => In Progress

** Changed in: cloud-init
   Importance: Undecided => Medium

** Changed in: cloud-init
 Assignee: (unassigned) => Chris Brinker (chris-brinker)

** Merge proposal linked:
   
https://code.launchpad.net/~chris-brinker/cloud-init/+git/cloud-init/+merge/323406

** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Zesty)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Artful)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Yakkety)
   Importance: Undecided
   Status: New

** Changed in: cloud-init (Ubuntu Xenial)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu Yakkety)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu Zesty)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu Artful)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu Xenial)
   Importance: Undecided => Medium

** Changed in: cloud-init (Ubuntu Yakkety)
   Importance: Undecided => Medium

** Changed in: cloud-init (Ubuntu Zesty)
   Importance: Undecided => Medium

** Changed in: cloud-init (Ubuntu Artful)
   Importance: Undecided => Medium

** Changed in: cloud-init
   Status: In Progress => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1691551

Title:
  warnings are still printed after user touches file

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Confirmed
Status in cloud-init source package in Xenial:
  Confirmed
Status in cloud-init source package in Yakkety:
  Confirmed
Status in cloud-init source package in Zesty:
  Confirmed
Status in cloud-init source package in Artful:
  Confirmed

Bug description:
  When cloud-init shows a warning with Z99-cloudinit-warnings.sh (such
  as on login after a ds-identify error), it suggests that you can:

  Disable the warnings above by:
touch /home/ubuntu/.cloud-warnings.skip
  or
touch /var/lib/cloud/instance/warnings/.skip

  The second file (/var/lib/cloud/) is not honored.

  The easiest recreate is:
  $ name="x1"
  $ lxc launch ubuntu-daily:xenial $name
  $ sleep 10
  $ lxc exec $name -- sh -c 'd=/var/lib/cloud/instance/warnings/; mkdir -p $d; 
echo "WARNING WARNING FOO" > "$d/warn-foo"'

  ## see the warning is there.
  $ lxc exec $name -- bash --login

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1691551/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1688645] Re: Networks panel action names not consistent with form names

2017-05-17 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/463854
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=24c8cfb31d19ae1abeb208edf0323e4f1c819105
Submitter: Jenkins
Branch: master

commit 24c8cfb31d19ae1abeb208edf0323e4f1c819105
Author: Julie Gravel 
Date:   Wed May 10 12:27:12 2017 -0700

Change Network form names from Update to Edit

Change Networks, Routers, Ports update form names from 'Update xxx'
to 'Edit xxx' to be consistent with the corresponding row actions.

Change-Id: I7d42830e640c75bf2bafd10bcf3ec96de592dc3d
Closes-Bug: #1688645


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1688645

Title:
  Networks panel action names not consistent  with form names

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  In both Project > Networks and Admin > Networks panels, the table row
  action uses "Edit xxx" but the corresponding form name is "Update
  xxx". The action name should be consistent with the form name. For
  example, in the Networks table, the action is "Edit Network" but the
  form name is "Update network" and the Ports table, the action is "Edit
  Port" while the form name is "Update Port".

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1688645/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1691545] [NEW] Significant increase in DB connections with cells

2017-05-17 Thread melanie witt
Public bug reported:

Recently in the gate we have seen a trace [1] on some work-in-progress patches:

  OperationalError: (pymysql.err.OperationalError)
(1040, u'Too many connections')

and at least one operator has reported that the number of database connections 
increased significantly going from Mitaka to Newton.

It was suspected that the increase was caused by creating new oslo.db 
transaction context managers on-the-fly when switching database connections for 
cells. Comparing the dstat --tcp output of runs of the 
gate-tempest-dsvm-neutron-full-ubuntu-xenial job with and without caching of 
the database connections showed a difference of 445 active TCP connections and 
1495 active TCP connections, respectively [2].
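
A simplified sketch of the caching approach (illustrative; the actual
patch is against nova's cell context switching code):

    from oslo_db.sqlalchemy import enginefacade

    # Sketch: keep one transaction context manager per cell database
    # connection instead of building a new one on every switch.
    _CELL_CACHE = {}

    def get_context_manager(connection_url):
        if connection_url not in _CELL_CACHE:
            ctxt_mgr = enginefacade.transaction_context()
            ctxt_mgr.configure(connection=connection_url)
            _CELL_CACHE[connection_url] = ctxt_mgr
        return _CELL_CACHE[connection_url]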

[1] 
http://logs.openstack.org/37/458537/19/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/e290ec2/logs/screen-n-api.txt.gz?level=TRACE#_May_11_20_08_20_211256
[2] 
https://docs.google.com/spreadsheets/d/1DIfFfX3kaA_SRoCM-aO7BN4IBEShChXLztOBFeKryt4/edit?usp=sharing

Full trace:

May 11 20:08:20.190181 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions [req-04cbf0fb-d31a-48fb-bc1a-8572b4fe1dfb 
tempest-AttachVolumeShelveTestJSON-2114880401 
tempest-AttachVolumeShelveTestJSON-2114880401] Unexpected exception in API 
method
May 11 20:08:20.194490 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions Traceback (most recent call last):
May 11 20:08:20.194634 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/openstack/extensions.py", line 336, in wrapped
May 11 20:08:20.194768 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions return f(*args, **kwargs)
May 11 20:08:20.194899 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/openstack/compute/servers.py", line 439, in show
May 11 20:08:20.195035 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions instance = self._get_server(context, req, id, 
is_detail=True)
May 11 20:08:20.195171 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/openstack/compute/servers.py", line 344, in 
_get_server
May 11 20:08:20.195313 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions expected_attrs=expected_attrs)
May 11 20:08:20.195443 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/api/openstack/common.py", line 479, in get_instance
May 11 20:08:20.195600 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions expected_attrs=expected_attrs)
May 11 20:08:20.195806 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions   File "/opt/stack/new/nova/nova/compute/api.py", 
line 2468, in get
May 11 20:08:20.196044 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions expected_attrs)
May 11 20:08:20.196207 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions   File "/opt/stack/new/nova/nova/compute/api.py", 
line 2428, in _get_instance
May 11 20:08:20.196350 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions context, instance_uuid, 
expected_attrs=expected_attrs)
May 11 20:08:20.196488 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions   File 
"/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 
184, in wrapper
May 11 20:08:20.196629 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions result = fn(cls, context, *args, **kwargs)
May 11 20:08:20.196760 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/objects/instance.py", line 463, in get_by_uuid
May 11 20:08:20.196889 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions use_slave=use_slave)
May 11 20:08:20.197011 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions   File 
"/opt/stack/new/nova/nova/db/sqlalchemy/api.py", line 235, in wrapper
May 11 20:08:20.197173 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions with reader_mode.using(context):
May 11 20:08:20.197361 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions   File "/usr/lib/python2.7/contextlib.py", line 
17, in __enter__
May 11 20:08:20.197513 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions return self.gen.next()
May 11 20:08:20.197635 ubuntu-xenial-rax-ord-8797540 nova-api[18343]: ERROR 
nova.api.openstack.extensions   File 

[Yahoo-eng-team] [Bug 1691546] [NEW] libvirt: original exception is lost of vif unplug fails during attach_interface error handling

2017-05-17 Thread Matt Riedemann
Public bug reported:

Because we're not using excutils.save_and_reraise_exception here:

https://github.com/openstack/nova/blob/8d492c76d53f3fcfacdd945a277446bdfe6797b0/nova/virt/libvirt/driver.py#L1405

If the vif unplug call raises a new exception, we'll lose the original
exception context from when guest.attach_device failed.
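
A minimal sketch of the suggested fix (paraphrased, not the exact
driver source):

    from oslo_utils import excutils

    try:
        guest.attach_device(cfg, persistent=True, live=live)
    except Exception:
        with excutils.save_and_reraise_exception():
            # unplug may itself raise; the context manager preserves
            # and re-raises the original attach_device exception.
            self.vif_driver.unplug(instance, vif)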

** Affects: nova
 Importance: Low
 Assignee: Matt Riedemann (mriedem)
 Status: Triaged


** Tags: libvirt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1691546

Title:
  libvirt: original exception is lost of vif unplug fails during
  attach_interface error handling

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  Because we're not using excutils.save_and_reraise_exception here:

  
https://github.com/openstack/nova/blob/8d492c76d53f3fcfacdd945a277446bdfe6797b0/nova/virt/libvirt/driver.py#L1405

  If the vif unplug call raises a new exception, we'll lose the original
  exception context from when guest.attach_device failed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1691546/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1658070] Re: Failed SR_IOV evacuation with host

2017-05-17 Thread Matt Riedemann
Ah gyee pointed out in IRC that if you're using microversion<2.29 then
force is passed to the compute API code as None:

https://github.com/openstack/nova/blob/stable/newton/nova/api/openstack/compute/evacuate.py#L92

And then this fails because force is not False, it's None:

https://github.com/openstack/nova/blob/stable/newton/nova/compute/api.py#L3784
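
One possible normalization on the API side might look like this (a
sketch, not the actual fix):

    # Sketch: requests older than microversion 2.29 have no 'force'
    # field, so it arrives as None; treat that the same as an
    # explicit False before calling the compute API.
    force = body["evacuate"].get("force") or False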

** Changed in: nova
   Importance: Undecided => High

** Also affects: nova/newton
   Importance: Undecided
   Status: New

** Also affects: nova/ocata
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1658070

Title:
  Failed SR_IOV evacuation with host

Status in Mirantis OpenStack:
  Confirmed
Status in OpenStack Compute (nova):
  Triaged
Status in OpenStack Compute (nova) newton series:
  Confirmed
Status in OpenStack Compute (nova) ocata series:
  Confirmed

Bug description:
  When we try to evacuate an SR-IOV VM to a specific host, the VM ends
  up in ERROR state

  Steps to reproduce:
  1) Download trusty image
  2) Create image
  3) Create vf port:
  neutron port-create  --binding:vnic-type direct --device_owner 
nova-compute --name sriov
  4) Boot vm on this port:
  nova boot vm --flavor m1.small --image 1ff0759c-ea85-4909-a711-70fd6b71ad23 
--nic port-id=cfc947be-1975-42f3-95bd-f261a2eccec0 --key-name vm_key
  5) Shut down the node with the VM
  6) Evacuate vm:
  nova evacuate vm node-5.test.domain.local
  Expected result:
   VM evacuates on the 5th node
  Actual result:
   VM in error state

  Workaround:
  We can evacuate without pointing the host just nova evacuate vm

  Environment:
  #785 snap
  2 controllers, 2 computes with SR-IOV

To manage notifications about this bug go to:
https://bugs.launchpad.net/mos/+bug/1658070/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1658070] Re: Failed SR_IOV evacuation with host

2017-05-17 Thread Matt Riedemann
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1658070

Title:
  Failed SR_IOV evacuation with host

Status in Mirantis OpenStack:
  Confirmed
Status in OpenStack Compute (nova):
  New

Bug description:
  When we try to evacuate an SR-IOV VM to a specific host, the VM ends
  up in ERROR state

  Steps to reproduce:
  1) Download trusty image
  2) Create image
  3) Create vf port:
  neutron port-create  --binding:vnic-type direct --device_owner 
nova-compute --name sriov
  4) Boot vm on this port:
  nova boot vm --flavor m1.small --image 1ff0759c-ea85-4909-a711-70fd6b71ad23 
--nic port-id=cfc947be-1975-42f3-95bd-f261a2eccec0 --key-name vm_key
  5) Shut down the node with the VM
  6) Evacuate vm:
  nova evacuate vm node-5.test.domain.local
  Expected result:
   VM evacuates on the 5th node
  Actual result:
   VM in error state

  Workaround:
  We can evacuate without pointing the host just nova evacuate vm

  Environment:
  #785 snap
  2 controllers? 2 compute with SR-IOV

To manage notifications about this bug go to:
https://bugs.launchpad.net/mos/+bug/1658070/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1691340] Re: create default network show wrong

2017-05-17 Thread Armando Migliaccio
I cannot reproduce the exact output as shown in this bug report.

That said, the is_default flag applies only to router:external networks,
as this is where it is actually used. Marking a regular network as
default has no effect. This is probably a note that was omitted in [1].
Bear in mind that RFE [2] would most likely expand the use of the
default flag to regular networks, but there's no plan at the moment.

[1] 
https://docs.openstack.org/mitaka/networking-guide/config-auto-allocation.html
[2] https://bugs.launchpad.net/neutron/+bug/1690439

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1691340

Title:
  create default network show wrong

Status in neutron:
  Invalid

Bug description:
  When creating a network with “--default --internal”, the response shows
  "is_default | True".
  When showing the network afterwards, the value is in fact None.

  [root@localhost auto_allocate]# openstack network create ysm_test --default --internal
  +---------------------------+--------------------------------------+
  | Field                     | Value                                |
  +---------------------------+--------------------------------------+
  | admin_state_up            | UP                                   |
  | availability_zone_hints   |                                      |
  | availability_zones        |                                      |
  | created_at                | 2017-05-17T05:43:55Z                 |
  | description               |                                      |
  | dns_domain                | None                                 |
  | id                        | d508fafa-25c7-4bd8-bfc4-25903f79aa53 |
  | ipv4_address_scope        | None                                 |
  | ipv6_address_scope        | None                                 |
  | is_default                | True                                 |
  | mtu                       | 1450                                 |
  | name                      | ysm_test                             |
  | port_security_enabled     | True                                 |
  | project_id                | bca504c769234d4db32e05142428fd64     |
  | provider:network_type     | vxlan                                |
  | provider:physical_network | None                                 |
  | provider:segmentation_id  | 37                                   |
  | qos_policy_id             | None                                 |
  | revision_number           | 3                                    |
  | router:external           | Internal                             |
  | segments                  | None                                 |
  | shared                    | False                                |
  | status                    | ACTIVE                               |
  | subnets                   |                                      |
  | updated_at                | 2017-05-17T05:43:55Z                 |
  +---------------------------+--------------------------------------+
  [root@localhost auto_allocate]# openstack network show d508fafa-25c7-4bd8-bfc4-25903f79aa53
  +---------------------------+--------------------------------------+
  | Field                     | Value                                |
  +---------------------------+--------------------------------------+
  | admin_state_up            | UP                                   |
  | availability_zone_hints   |                                      |
  | availability_zones        |                                      |
  | created_at                | 2017-05-17T05:43:55Z                 |
  | description               |                                      |
  | dns_domain                | None                                 |
  | id                        | d508fafa-25c7-4bd8-bfc4-25903f79aa53 |
  | ipv4_address_scope        | None                                 |
  | ipv6_address_scope        | None                                 |
  | is_default                | None                                 |
  | mtu                       | 1450                                 |
  | name                      | ysm_test                             |
  | port_security_enabled     | True                                 |
  | project_id                | bca504c769234d4db32e05142428fd64     |
  | provider:network_type     | vxlan                                |
  | provider:physical_network | None                                 |
  | provider:segmentation_id  | 37                                   |
  | qos_policy_id             | None                                 |
  | revision_number           | 3                                    |
  | router:external           | Internal                             |
  | segments                  | None                                 |
  | shared                    | False

[Yahoo-eng-team] [Bug 1691517] [NEW] centos7 unit tests fail due to hard coded mkfs.ext4

2017-05-17 Thread Joshua Powers
Public bug reported:

A recent merge that added mkfs.ext4 tests hard-codes the location of the
mkfs.ext4 binary. The result is a failed test on CentOS 7, which has the
command in a different location than Ubuntu:

https://paste.ubuntu.com/24589593/

Steps to reproduce:
lxc launch images:centos/7 c7
lxc exec c7 bash
yum install --assumeyes epel-release
yum install --assumeyes pyserial python-argparse python-cheetah python-configobj python-jinja2 python-jsonpatch python-oauthlib python-prettytable python-requests python-six python-pip PyYAML git file e2fsprogs
pip install contextlib2 httpretty mock nose pep8 unittest2
git clone https://git.launchpad.net/cloud-init
cd cloud-init
nosetests tests/unittests
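
One portable approach for the test (a sketch, not necessarily the merged
fix) is to resolve the binary from PATH instead of hard-coding a path;
Python 3 example:

    import shutil

    # Resolve mkfs.ext4 from PATH rather than assuming /sbin/mkfs.ext4
    # (Ubuntu) or /usr/sbin/mkfs.ext4 (CentOS 7).
    mkfs = shutil.which('mkfs.ext4')
    if mkfs is None:
        raise RuntimeError('mkfs.ext4 not found in PATH')
    print('using %s' % mkfs)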

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Description changed:

  A recent merge that added a mkfs.ext4 tests has a hard coded location
  for the binary of mkfs.ext4. The result is that on centos 7, which has
  the command in a different location than Ubuntu, is a failed test:
  
  https://paste.ubuntu.com/24589593/
- 
  
  Steps to reproduce:
  lxc launch images:centos/7 c7
  lxc exec c7 bash
  yum install --asumeyes python-pip
  yum install --assumeyes git python-pip file e2fsprogs
  pip install setuptools tox virtualenv contextlib2 httpretty mock nose pep8 unittest2
- git clone https://git.launchpad.net/cloud-init 
+ git clone https://git.launchpad.net/cloud-init
  cd cloud-init
- tox
+ nosetests tests/unittests

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1691517

Title:
  centos7 unit tests fail due to hard coded mkfs.ext4

Status in cloud-init:
  New

Bug description:
  A recent merge that added mkfs.ext4 tests hard-codes the location of
  the mkfs.ext4 binary. The result is a failed test on CentOS 7, which
  has the command in a different location than Ubuntu:

  https://paste.ubuntu.com/24589593/

  Steps to reproduce:
  lxc launch images:centos/7 c7
  lxc exec c7 bash
  yum install --assumeyes epel-release
  yum install --assumeyes pyserial python-argparse python-cheetah python-configobj python-jinja2 python-jsonpatch python-oauthlib python-prettytable python-requests python-six python-pip PyYAML git file e2fsprogs
  pip install contextlib2 httpretty mock nose pep8 unittest2
  git clone https://git.launchpad.net/cloud-init
  cd cloud-init
  nosetests tests/unittests

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1691517/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1691489] [NEW] fstab entries written by cloud-config may not be mounted

2017-05-17 Thread Scott Moser
Public bug reported:

As reported in bug 1686514, sometimes /mnt will not get mounted when
re-deploying or stopping-then-starting an Azure VM of size L32S. This is
probably a more generic issue; I suspect it shows up here due to the speed
of the disks on these systems.


Related bugs:
 * bug 1686514: Azure: cloud-init does not handle reformatting GPT partition 
ephemeral disks

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1691489

Title:
  fstab entries written by cloud-config may not be mounted

Status in cloud-init:
  New

Bug description:
  As reported in bug 1686514, sometimes /mnt will not get mounted when
  re-deploying or stopping-then-starting an Azure VM of size L32S. This
  is probably a more generic issue; I suspect it shows up here due to the
  speed of the disks on these systems.

  
  Related bugs:
   * bug 1686514: Azure: cloud-init does not handle reformatting GPT partition 
ephemeral disks

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1691489/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1677206] Re: Can not choose flavor in dashboard if glance.min_disk > flavor.disk

2017-05-17 Thread Beth Elwell
** Changed in: horizon
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1677206

Title:
  Can not choose flavor in dashboard if glance.min_disk > flavor.disk

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Description of problem:

  Can not choose flavor in dashboard if glance.min_disk > flavor.disk

  Version-Release number of selected component (if applicable):

  python-django-horizon-8.0.1-6.el7ost.noarch

  How reproducible:

  100%

  Steps to Reproduce:
  1. Create a flavor with Root Disk (GB) = 0
  2. Create an image with Minimum Disk (GB) = 1
  3. Launch instance and select the image we created in step2.
  4. We can not select the flavor we created in step 1 and it is showing grey.

  Actual results:

  Horizon is making an unnecessary check.

  
https://github.com/openstack/horizon/blob/stable/liberty/horizon/static/horizon/js/horizon.quota.js#L150

  
https://github.com/openstack/horizon/blob/stable/liberty/horizon/static/horizon/js/horizon.quota.js#L86

  
https://github.com/openstack/horizon/blob/stable/liberty/horizon/static/horizon/js/horizon.quota.js#L78

  Expected results:

  We should be able to launch instance even though glance.min_disk >
  flavor.disk.

  Additional info:

  There is no problem if we launch instance from command line.

  In /usr/lib/python2.7/site-packages/nova/compute/api.py

   # NOTE(johannes): root_gb is allowed to be 0 for legacy reasons
   # since libvirt interpreted the value differently than other
   # drivers. A value of 0 means don't check size.
   ...
   if image_min_disk > dest_size:
       raise exception.FlavorDiskSmallerThanMinDisk(
           flavor_size=dest_size, image_min_disk=image_min_disk)

  We can see that if root_gb is 0, then nova will not check the disk
  size. But Horizon still checks even when root_gb is 0. Horizon should
  handle the root_gb == 0 case the same way.
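
  A minimal sketch of the intended client-side rule (the helper is
  hypothetical, mirroring the nova semantics quoted above):

    def flavor_disk_ok(flavor_root_gb, image_min_disk_gb):
        # root_gb == 0 means "do not check size" (see the nova excerpt),
        # so only enforce the image's min_disk for non-zero root disks.
        if flavor_root_gb == 0:
            return True
        return flavor_root_gb >= image_min_disk_gb

    print(flavor_disk_ok(0, 1))  # True: flavor with root disk 0 is allowed
    print(flavor_disk_ok(1, 2))  # False: disk smaller than image min_disk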

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1677206/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1689279] Re: tempest scenario trunk test "test_subport_connectivity" fails with non root rhel user

2017-05-17 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/463309
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=b92a37f3efb45ff2a503ed81863816b1a316cd8c
Submitter: Jenkins
Branch:master

commit b92a37f3efb45ff2a503ed81863816b1a316cd8c
Author: AlexSTafeyev 
Date:   Mon May 8 10:57:37 2017 +

Change PATH for "ip addr list" command so it could work with cloud-user

It's needed for some custom images like RHEL7 where /usr/sbin/ is not
enabled by default for users under test.

Change-Id: Ib75c468cc15f9912e110af4748734bf0b48bcf5d
Closes-bug: 1689279


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1689279

Title:
  tempest  scenario trunk test "test_subport_connectivity" fails with
  non root rhel user

Status in neutron:
  Fix Released

Bug description:
  When I run the test (python -m testtools.run)
  neutron.tests.tempest.scenario.test_trunk.TrunkTest.test_subport_connectivity
  with a rhel image and the "cloud_user" user, the test fails with the
  following:


  
  Traceback (most recent call last):
    File "/home/centos/tempest-upstream/neutron/neutron/tests/tempest/scenario/test_trunk.py", line 242, in test_subport_connectivity
      out = server['ssh_client'].exec_command('ip addr list')
    File "tempest/lib/common/ssh.py", line 202, in exec_command
      stderr=err_data, stdout=out_data)
  tempest.lib.exceptions.SSHExecCommandFailed: Command 'ip addr list', exit status: 127, stderr:
  bash: ip: command not found

  stdout:

  Ran 1 test in 154.543s


  Adding sudo to the command in the code also solves the issue.
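
  The merged fix (per the commit message above) extends PATH instead of
  using sudo; a minimal sketch of that approach, with the ssh client
  shape assumed from the traceback:

    def exec_with_sbin_path(ssh_client, cmd):
        # Prepend sbin directories so 'ip' resolves for non-root users
        # on images such as RHEL 7, where /usr/sbin is not in the
        # default PATH.
        return ssh_client.exec_command('PATH=$PATH:/sbin:/usr/sbin ' + cmd)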

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1689279/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1691449] [NEW] Field "Admin State" shows wrong value in network editing form.

2017-05-17 Thread Debo Zhang
Public bug reported:

Version is Ocata.

In the network editing form, the field "Admin State" shows True/False; it
should show UP/DOWN.

** Affects: horizon
 Importance: Undecided
 Assignee: Debo Zhang (laun-zhangdebo)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Debo Zhang (laun-zhangdebo)

** Changed in: horizon
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1691449

Title:
  Field "Admin State" show wrong value in network editing form.

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Version is Ocata.

  In the network editing form, the field "Admin State" shows True/False;
  it should show UP/DOWN.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1691449/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1691446] [NEW] Label changes to True/False from UP/DOWN when an error occurred in firewall form.

2017-05-17 Thread Debo Zhang
Public bug reported:

In the firewall creation form, I forgot to choose a policy and clicked the
"Add" button; the form page reappeared with an error message, which is
normal. But the field "Admin State" had changed to True/False, when it
should be UP/DOWN.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1691446

Title:
  Label changes to True/False from UP/DOWN when an error occurred in
  firewall form.

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the firewall creation form, I forgot to choose a policy and clicked
  the "Add" button; the form page reappeared with an error message, which
  is normal. But the field "Admin State" had changed to True/False, when
  it should be UP/DOWN.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1691446/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1690388] Re: wrong hwaddr on the vlan bond with nplan and cloud-init

2017-05-17 Thread Dimitri John Ledkov
** Also affects: cloud-init
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1690388

Title:
  wrong hwaddr on the vlan bond with nplan and cloud-init

Status in cloud-init:
  New
Status in cloud-init package in Ubuntu:
  In Progress
Status in nplan package in Ubuntu:
  New
Status in cloud-init source package in Xenial:
  New
Status in nplan source package in Xenial:
  New
Status in cloud-init source package in Yakkety:
  New
Status in nplan source package in Yakkety:
  New
Status in cloud-init source package in Zesty:
  New
Status in nplan source package in Zesty:
  New

Bug description:
  The expected hwaddresses are as follows:

  4: bond0:  mtu 1500 qdisc noqueue state UP group default
      link/ether a0:36:9f:2d:93:80 brd ff:ff:ff:ff:ff:ff
      inet6 fe80::a236:9fff:fe2d:9380/64 scope link
         valid_lft forever preferred_lft forever
  5: bond0.101@bond0:  mtu 1500 qdisc noqueue state UP group default
      link/ether a0:36:9f:2d:93:80 brd ff:ff:ff:ff:ff:ff
      inet 104.130.20.119/24 brd 104.130.20.255 scope global bond0.101
         valid_lft forever preferred_lft forever
      inet6 fe80::a236:9fff:fe2d:9380/64 scope link
         valid_lft forever preferred_lft forever
  6: bond0.401@bond0:  mtu 1500 qdisc noqueue state UP group default
      link/ether a0:36:9f:2d:93:81 brd ff:ff:ff:ff:ff:ff
      inet 10.184.7.120/20 brd 10.184.15.255 scope global bond0.401
         valid_lft forever preferred_lft forever
      inet6 fe80::a236:9fff:fe2d:9381/64 scope link
         valid_lft forever preferred_lft forever

  however cloud-init shows:
  ci-info: +++++++++++++++++++++++++++++ Net device info ++++++++++++++++++++++++++++++
  ci-info: +-----------+------+------------------------------+---------------+-------+-------------------+
  ci-info: |   Device  |  Up  |           Address            |      Mask     | Scope |     Hw-Address    |
  ci-info: +-----------+------+------------------------------+---------------+-------+-------------------+
  ci-info: |   bond0   | True |              .               |       .       |   .   | a0:36:9f:2d:93:81 |
  ci-info: |   bond0   | True | fe80::a236:9fff:fe2d:9381/64 |       .       |  link | a0:36:9f:2d:93:81 |
  ci-info: | bond0.101 | True |        104.130.20.119        | 255.255.255.0 |   .   | a0:36:9f:2d:93:81 |
  ci-info: | bond0.101 | True | fe80::a236:9fff:fe2d:9381/64 |       .       |  link | a0:36:9f:2d:93:81 |
  ci-info: |     lo    | True |          127.0.0.1           |   255.0.0.0   |   .   |         .         |
  ci-info: |     lo    | True |           ::1/128            |       .       |  host |         .         |
  ci-info: | bond0.401 | True |         10.184.7.120         | 255.255.240.0 |   .   | a0:36:9f:2d:93:81 |
  ci-info: | bond0.401 | True | fe80::a236:9fff:fe2d:9381/64 |       .       |  link | a0:36:9f:2d:93:81 |
  ci-info: |   ens9f1  | True |              .               |       .       |   .   | a0:36:9f:2d:93:81 |
  ci-info: |   ens9f0  | True |              .               |       .       |   .   | a0:36:9f:2d:93:81 |
  ci-info: +-----------+------+------------------------------+---------------+-------+-------------------+

  
  Specifically:
    bond0     | True | fe80::a236:9fff:fe2d:9381/64 |       .       |  link | a0:36:9f:2d:93:81
    bond0.101 | True |        104.130.20.119        | 255.255.255.0 |   .   | a0:36:9f:2d:93:81

  Instead of the expected a0:36:9f:2d:93:80.

  The generated netplan.yaml does not set macaddress on the vlans at
  all.

  Whereas the network_data.json explicitly specifies the MAC address to
  use for those vlans:

  "vlan_mac_address" : "a0:36:9f:2d:93:80"

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1690388/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1689346] Re: cloud-init and nplan do not parse and use OpenStack networking correctly with netmask

2017-05-17 Thread Dimitri John Ledkov
** Also affects: cloud-init
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1689346

Title:
  cloud-init and nplan do not parse and use OpenStack networking
  correctly with netmask

Status in cloud-init:
  New
Status in cloud-init package in Ubuntu:
  In Progress
Status in nplan package in Ubuntu:
  Invalid
Status in cloud-init source package in Xenial:
  Confirmed
Status in nplan source package in Xenial:
  Invalid
Status in cloud-init source package in Yakkety:
  Confirmed
Status in nplan source package in Yakkety:
  Invalid
Status in cloud-init source package in Zesty:
  Confirmed
Status in nplan source package in Zesty:
  Invalid

Bug description:
  The networking data JSON has:

  "ip_address" : "104.130.20.155",
  "netmask" : "255.255.255.0"

  "ip_address" : "10.184.3.234",
  "netmask" : "255.255.240.0",

  that got rendered into nplan as:
   - 104.130.20.155/255.255.255.0
   - 10.184.3.234/255.255.240.0

  which nplan failed to parse:

  Stderr: Error in network definition //etc/netplan/50-cloud-init.yaml
  line 32 column 12: invalid prefix length in address
  '104.130.20.155/255.255.255.0'

  
  I believe nplan is expecting CIDR notation such as /24. I believe the
  current plan is to fix cloud-init to generate /24-style CIDR notation
  in the nplan renderer.

  This needs SRU into xenial.
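
  The conversion itself is a stdlib one-liner; a minimal sketch of what
  the renderer fix amounts to (the helper name is hypothetical):

    import ipaddress

    def to_cidr(address, netmask):
        # ipaddress normalizes an "address/netmask" pair to prefix form
        return str(ipaddress.ip_interface(u'%s/%s' % (address, netmask)))

    print(to_cidr('104.130.20.155', '255.255.255.0'))  # 104.130.20.155/24
    print(to_cidr('10.184.3.234', '255.255.240.0'))    # 10.184.3.234/20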

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1689346/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1690480] Re: cloud-init / nplan - missing bond mode miimon xmit_hash_policy

2017-05-17 Thread Dimitri John Ledkov
** Also affects: cloud-init
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1690480

Title:
  cloud-init / nplan - missing bond mode miimon xmit_hash_policy

Status in cloud-init:
  New
Status in cloud-init package in Ubuntu:
  In Progress
Status in cloud-init source package in Xenial:
  New
Status in cloud-init source package in Yakkety:
  New
Status in cloud-init source package in Zesty:
  New

Bug description:
  Given network-data.json http://paste.ubuntu.com/24561026/
  cloud-init generates http://paste.ubuntu.com/24564006/

  which is missing

   "bond_mode" : "802.3ad",
   "bond_miimon" : 100,
   "bond_xmit_hash_policy" : "layer3+4"

  For the bond specification

  As per the nplan docs, these should be defined in a 'parameters'
  dictionary:
  https://git.launchpad.net/netplan/tree/doc/netplan.md#n302

  mode: 802.3ad
  mii-monitor-interval: 100
  transmit-hash-policy: layer3+4
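
  The key translation itself is mechanical; a sketch (mapping assumed
  from this report and the nplan docs, not cloud-init's actual code):

    BOND_PARAM_MAP = {
        'bond_mode': 'mode',
        'bond_miimon': 'mii-monitor-interval',
        'bond_xmit_hash_policy': 'transmit-hash-policy',
    }

    def translate_bond_params(bond_cfg):
        # Map network_data.json bond options onto netplan parameter names.
        return {new: bond_cfg[old] for old, new in BOND_PARAM_MAP.items()
                if old in bond_cfg}

    print(translate_bond_params({'bond_mode': '802.3ad',
                                 'bond_miimon': 100,
                                 'bond_xmit_hash_policy': 'layer3+4'}))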

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1690480/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1667138] Re: Minumum bandwidth can be higher than maximum bandwidth limit in same QoS policy

2017-05-17 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/442375
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=a7e6d3b175b740b8286d2030fccba538e875e6df
Submitter: Jenkins
Branch:master

commit a7e6d3b175b740b8286d2030fccba538e875e6df
Author: Reedip 
Date:   Tue Mar 7 05:36:02 2017 -0500

Add check for Bandwidth Limit Rules

Currently it is possible to create 2 rules in the same policy
where the Max Bandwidth ( from the Bandwidth Limit Rule ) can
be less than Minimum Bandwidth defined in ( Minimum Bandwidth
Rule) , which can be pretty confusing.
This patch raises an exception if such an issue occurs.

Change-Id: Ib748947bcd85253aa22e370a56870afbfbafa19b
Closes-Bug: #1667138


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1667138

Title:
  Minumum bandwidth can be higher than maximum bandwidth limit in same
  QoS policy

Status in neutron:
  Fix Released

Bug description:
  Currently at least the SR-IOV driver supports both QoS rules: bandwidth
  limit and minimum bandwidth. A user can set both rules in one policy and
  set a higher minimum (best effort) bandwidth than the maximum bandwidth
  available for the port.
  IMO such behaviour is undefined on the backend and should be forbidden
  at the API level.
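
  A minimal sketch of such an API-level check (simplified; the actual
  patch validates rule objects within a policy):

    class QosRuleConflict(Exception):
        pass

    def validate_bandwidth_rules(max_kbps, min_kbps):
        # Reject a minimum-bandwidth value that exceeds the policy's
        # bandwidth-limit maximum for the same port.
        if (min_kbps is not None and max_kbps is not None
                and min_kbps > max_kbps):
            raise QosRuleConflict('min_kbps %s exceeds max_kbps %s'
                                  % (min_kbps, max_kbps))

    validate_bandwidth_rules(max_kbps=1000, min_kbps=500)  # OK
    # validate_bandwidth_rules(max_kbps=500, min_kbps=1000) raises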

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1667138/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1684065] Re: No tests available for l3-ha extension under neutron tempest tests

2017-05-17 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/462013
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=b971ac799f632e1f2652adb2191c76fc29f271dc
Submitter: Jenkins
Branch:master

commit b971ac799f632e1f2652adb2191c76fc29f271dc
Author: Dongcan Ye 
Date:   Wed May 3 15:07:36 2017 +0800

Add tempest test for l3-ha extension

Add missing l3-ha extension under neutron tempest tests.

Change-Id: Ia608d3f5d63a88eefa4e61da6df2f3656c8446a0
Closes-Bug: #1684065


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1684065

Title:
  No tests available for l3-ha extension under neutron tempest tests

Status in neutron:
  Fix Released

Bug description:
  After doing a grep on the neutron test repo for the extension, I am not
  able to find any tests related to this extension. I believe coverage
  should be increased in this case.

  I am adding below a snippet of the discussion I had with Ihar regarding
  this.

  """
  Indeed it seems there are no tests that explicitly target the
  extension (meaning, they don't utilize the 'ha' attribute added by
  
https://github.com/openstack/neutron/blob/master/neutron/extensions/l3_ext_ha_mode.py#L23)

  That doesn't mean that there are no tests that cover the
  implementation. Instead, existing tests utilizing neutron routers will
  use keepalived implementation if neutron.conf is configured to use HA
  routers for router creation:

  
https://github.com/openstack/neutron/blob/master/neutron/db/l3_hamode_db.py#L62

  I agree that it's not ideal, and we should have some tests that
  actually check that 'ha' attribute works as expected. You may want to
  report a bug for that matter in upstream Launchpad if you feel like.
  """

  Please let me know if more information is needed from my end.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1684065/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1691427] [NEW] AttributeError: 'DvrLocalRouter' object has no attribute 'ha_state'

2017-05-17 Thread sean redmond
Public bug reported:

I have noticed the below in the neutron l3 agent log of compute nodes
running in DVR mode. It seems to log the same message every 60 seconds or
so. Restarting the agent stops the logging until logrotate runs the next
day, and then the errors return in the same fashion.

2017-05-17 10:37:10.432 27502 ERROR oslo_service.periodic_task [req-5c48816b-0705-438a-b823-eca41cfe6c35 - - - - -] Error during L3NATAgentWithStateReport.periodic_sync_routers_task
2017-05-17 10:37:10.432 27502 ERROR oslo_service.periodic_task Traceback (most recent call last):
2017-05-17 10:37:10.432 27502 ERROR oslo_service.periodic_task   File "/usr/lib/python2.7/dist-packages/oslo_service/periodic_task.py", line 220, in run_periodic_tasks
2017-05-17 10:37:10.432 27502 ERROR oslo_service.periodic_task     task(self, context)
2017-05-17 10:37:10.432 27502 ERROR oslo_service.periodic_task   File "/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py", line 552, in periodic_sync_routers_task
2017-05-17 10:37:10.432 27502 ERROR oslo_service.periodic_task     self.fetch_and_sync_all_routers(context, ns_manager)
2017-05-17 10:37:10.432 27502 ERROR oslo_service.periodic_task   File "/usr/lib/python2.7/dist-packages/osprofiler/profiler.py", line 154, in wrapper
2017-05-17 10:37:10.432 27502 ERROR oslo_service.periodic_task     return f(*args, **kwargs)
2017-05-17 10:37:10.432 27502 ERROR oslo_service.periodic_task   File "/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py", line 586, in fetch_and_sync_all_routers
2017-05-17 10:37:10.432 27502 ERROR oslo_service.periodic_task     r['id'], r.get(l3_constants.HA_ROUTER_STATE_KEY))
2017-05-17 10:37:10.432 27502 ERROR oslo_service.periodic_task   File "/usr/lib/python2.7/dist-packages/neutron/agent/l3/ha.py", line 120, in check_ha_state_for_router
2017-05-17 10:37:10.432 27502 ERROR oslo_service.periodic_task     if ri and current_state != TRANSLATION_MAP[ri.ha_state]:
2017-05-17 10:37:10.432 27502 ERROR oslo_service.periodic_task AttributeError: 'DvrLocalRouter' object has no attribute 'ha_state'
2017-05-17 10:37:10.432 27502 ERROR oslo_service.periodic_task
2017-05-17 10:37:50.102 27502 ERROR oslo_service.periodic_task [req-5c48816b-0705-438a-b823-eca41cfe6c35 - - - - -] Error during L3NATAgentWithStateReport.periodic_sync_routers_task
2017-05-17 10:37:50.102 27502 ERROR oslo_service.periodic_task Traceback (most recent call last):
2017-05-17 10:37:50.102 27502 ERROR oslo_service.periodic_task   File "/usr/lib/python2.7/dist-packages/oslo_service/periodic_task.py", line 220, in run_periodic_tasks
2017-05-17 10:37:50.102 27502 ERROR oslo_service.periodic_task     task(self, context)
2017-05-17 10:37:50.102 27502 ERROR oslo_service.periodic_task   File "/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py", line 552, in periodic_sync_routers_task
2017-05-17 10:37:50.102 27502 ERROR oslo_service.periodic_task     self.fetch_and_sync_all_routers(context, ns_manager)
2017-05-17 10:37:50.102 27502 ERROR oslo_service.periodic_task   File "/usr/lib/python2.7/dist-packages/osprofiler/profiler.py", line 154, in wrapper
2017-05-17 10:37:50.102 27502 ERROR oslo_service.periodic_task     return f(*args, **kwargs)
2017-05-17 10:37:50.102 27502 ERROR oslo_service.periodic_task   File "/usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py", line 586, in fetch_and_sync_all_routers
2017-05-17 10:37:50.102 27502 ERROR oslo_service.periodic_task     r['id'], r.get(l3_constants.HA_ROUTER_STATE_KEY))
2017-05-17 10:37:50.102 27502 ERROR oslo_service.periodic_task   File "/usr/lib/python2.7/dist-packages/neutron/agent/l3/ha.py", line 120, in check_ha_state_for_router
2017-05-17 10:37:50.102 27502 ERROR oslo_service.periodic_task     if ri and current_state != TRANSLATION_MAP[ri.ha_state]:
2017-05-17 10:37:50.102 27502 ERROR oslo_service.periodic_task AttributeError: 'DvrLocalRouter' object has no attribute 'ha_state'
2017-05-17 10:37:50.102 27502 ERROR oslo_service.periodic_task
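
A sketch of one possible guard (names are taken from the traceback above,
not from the actual neutron fix): DVR local routers are not HA routers and
carry no ha_state, so skip them.

    TRANSLATION_MAP = {'master': 'active', 'backup': 'standby'}  # assumed

    def check_ha_state_for_router(router_info, router_id, current_state):
        ri = router_info.get(router_id)
        # Skip router types without an ha_state (e.g. DvrLocalRouter).
        if ri is None or not hasattr(ri, 'ha_state'):
            return
        if current_state != TRANSLATION_MAP[ri.ha_state]:
            pass  # report the corrected state back to the server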

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1691427

Title:
  AttributeError: 'DvrLocalRouter' object has no attribute 'ha_state'

Status in neutron:
  New

Bug description:
  I have noticed the below in the neutron l3 agent log of compute nodes
  running in DVR mode. It seems to log the same message every 60 seconds
  or so. Restarting the agent stops the logging until logrotate runs the
  next day, and then the errors return in the same fashion.

  2017-05-17 10:37:10.432 27502 ERROR oslo_service.periodic_task [req-5c48816b-0705-438a-b823-eca41cfe6c35 - - - - -] Error during L3NATAgentWithStateReport.periodic_sync_routers_task
  2017-05-17 10:37:10.432 27502 ERROR oslo_service.periodic_task Traceback (most recent call last):
  2017-05-17 10:37:10.432 27502 ERROR

[Yahoo-eng-team] [Bug 1673411] Re: config-drive support is broken

2017-05-17 Thread James Page
Same issue on xenial-newton; I'll raise a separate bug to track the
marshalling problem.

** Changed in: nova-lxd/newton
   Status: Fix Committed => Fix Released

** Changed in: nova-lxd/ocata
   Status: Fix Committed => Fix Released

** Tags removed: verification-newton-needed
** Tags added: verification-newton-failed

** Tags removed: verification-needed
** Tags added: verification-failed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1673411

Title:
  config-drive support is broken

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive newton series:
  Fix Committed
Status in Ubuntu Cloud Archive ocata series:
  Fix Released
Status in cloud-init:
  Fix Committed
Status in nova-lxd:
  Fix Released
Status in nova-lxd newton series:
  Fix Released
Status in nova-lxd ocata series:
  Fix Released
Status in nova-lxd trunk series:
  Fix Released
Status in cloud-init package in Ubuntu:
  Fix Released
Status in nova-lxd package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  Fix Released
Status in nova-lxd source package in Xenial:
  Invalid
Status in cloud-init source package in Yakkety:
  Fix Released
Status in nova-lxd source package in Yakkety:
  Fix Committed
Status in cloud-init source package in Zesty:
  Fix Released
Status in nova-lxd source package in Zesty:
  Fix Released

Bug description:
  === Begin cloud-init SRU Template ===
  [Impact]
  nova-lxd can provide data to instances in 2 ways:
   a.) metadata service
   b.) config drive

  The support for reading the config drive in cloud-init was never
  functional.  Nova-lxd has changed the way they're presenting the config
  drive to the guest.  Now they are doing so by populating a directory in
  the container /config-drive with the information.
  The change added to cloud-init was to extend support to read config
  drive information from that directory.

  [Test Case]
  With a nova-lxd that contains the fix this can be fully tested
  by launching an instance with updated cloud-init and config drive
  attached.

  For cloud-init, the easiest way to demonstrate this is to
  create a lxc container and populate it with a '/config-drive'.

  lxc-proposed-snapshot is
    
https://git.launchpad.net/~smoser/cloud-init/+git/sru-info/tree/bin/lxc-proposed-snapshot
  It publishes an image to lxd with proposed enabled and cloud-init upgraded.

  $ release=xenial
  $ ref=xenial-proposed
  $ name=$release-lp1673411
  $ lxc-proposed-snapshot --proposed --publish $release $ref
  $ lxc init $ref $name

  # lxc will create the 'NoCloud' seed, and the normal search
  # path looks there first, so remove it.

  $ lxc file pull $name/etc/cloud/cloud.cfg.d/90_dpkg.cfg - |
  sed 's/NoCloud, //' |
  lxc file push - $name/etc/cloud/cloud.cfg.d/90_dpkg.cfg

  ## populate a /config-drive with attached 'make-config-drive-dir'
  ## and push it to the container

  $ d=$(mktemp -d)
  $ make-config-drive-dir "$d" "$name"
  $ rm -Rf "$d"

  ## start it and look around
  $ lxc start $name
  $ sleep 10
  $ lxc exec $name cat /run/cloud-init/result.json
  {
   "v1": {
    "datasource": "DataSourceConfigDrive [net,ver=2][source=/config-drive]",
    "errors": []
   }
  }

  [Regression Potential]
  There is a potential false positive where a user had data in
  /config-drive and that information is now read as config drive data.

  That would require a directory tree like:
    /config-drive/openstack/2???-??-??/meta_data.json
  or
    /config-drive/openstack/latest/meta_data.json

  Which seems to have a small likelihood of being hit in any non-contrived setup.

  [Other Info]
  Upstream commit:
   https://git.launchpad.net/cloud-init/commit/?id=443095f4d4b6fe

  === End cloud-init SRU Template ===

  After reviewing https://review.openstack.org/#/c/445579/ and doing
  some testing, it would appear that the config-drive support in the
  nova-lxd driver is not functional.

  cloud-init ignores the data presented in /var/lib/cloud/data and reads
  from the network accessible metadata-service.

  To test this effectively you have to have a fully offline instance
  (i.e. no metadata service access).

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1673411/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1690165] Re: Gratuitous ARP updates sent by Neutron L3 agent may be ignored by Linux peers

2017-05-17 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/464020
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=96c5dd6a2b9610d345c8e7df73f25820b5041360
Submitter: Jenkins
Branch:master

commit 96c5dd6a2b9610d345c8e7df73f25820b5041360
Author: Ihar Hrachyshka 
Date:   Thu May 11 08:08:42 2017 -0700

Wait 2 seconds between gratuitous ARP updates instead of 1 second

An unfortunate scenario in Linux kernel may end up with no gratuitous
ARP update being processed by network peers, resulting in connectivity
recovery slowdown when moving an IP address between devices.

Change-Id: Iefd0d01d12d06ce6398c4c5634c634991a78bbe9
Closes-Bug: #1690165


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1690165

Title:
  Gratuitous ARP updates sent by Neutron L3 agent may be ignored by
  Linux peers

Status in neutron:
  Fix Released

Bug description:
  An unfortunate scenario in the Linux kernel, explained in
  https://patchwork.ozlabs.org/patch/760372/, may result in no gARP being
  honoured by Linux network peers. To work around the kernel bug, we may
  want to spread the updates out more, so as not to hit the default
  kernel locktime, which is 1s.
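
  A minimal sketch of the spacing approach from the patch above (the
  sender callable is hypothetical):

    import time

    def send_gratuitous_arps(send_one, count=3, interval=2):
        # Space gratuitous ARPs more than the kernel's default arp
        # locktime (1s) apart so peers do not silently ignore the updates.
        for _ in range(count):
            send_one()
            time.sleep(interval)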

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1690165/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1666827] Re: Backport fixes for Rename Network return 403 Error

2017-05-17 Thread Edward Hope-Morley
The horizon 2:9.1.2-0ubuntu1~cloud0 point release has been uploaded to
the Trusty Mitaka UCA [1] and will be available shortly so closing this
bug.

[1]
https://bugs.launchpad.net/ubuntu/+source/keystone/+bug/1680098/comments/16

** Changed in: cloud-archive/mitaka
   Status: Triaged => Fix Released

** Changed in: horizon (Ubuntu Xenial)
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1666827

Title:
  Backport fixes for Rename Network return 403 Error

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in horizon package in Ubuntu:
  Fix Released
Status in horizon source package in Xenial:
  Fix Released
Status in horizon source package in Yakkety:
  Fix Released

Bug description:
  [Impact]
  Non-admin users are not allowed to change the name of a network using the 
OpenStack Dashboard GUI

  [Test Case]
  1. Deploy trusty-mitaka or xenial-mitaka OpenStack Cloud
  2. Create demo project
  3. Create demo user
  4. Log into OpenStack Dashboard using demo user
  5. Go to Project -> Network and create a network
  6. Go to Project -> Network and Edit the just created network
  7. Change the name and click Save
  8. Observe that your request is denied with an error message

  [Regression Potential]
  Minimal.

  We are adding a patch already merged into upstream stable/mitaka for
  the horizon call to policy_check before sending request to Neutron
  when updating networks.

  The addition of rule "update_network:shared" to horizon's copy of
  Neutron policy.json is our own due to upstream not willing to back-
  port this required change. This rule is not referenced anywhere else
  in the code base so it will not affect other policy_check calls.

  Upstream bug: https://bugs.launchpad.net/horizon/+bug/1609467

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1666827/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1671365] Re: Downloading image with --progress fails for python3

2017-05-17 Thread Cyril Roelandt
Sorry to "steal" this bug, but I proposed a fix since it's getting a bit
old: https://review.openstack.org/#/c/465469/

** Project changed: glance => python-glanceclient

** Changed in: python-glanceclient
 Assignee: Abhishek Kekane (abhishek-kekane) => Cyril Roelandt 
(cyril-roelandt)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1671365

Title:
  Downloading image with --progress fails for python3

Status in Glance Client:
  New

Bug description:
  Downloading an image with --progress fails under Python 3 with:
  TypeError: 'IterableWithLength' object is not an iterator

  Steps to reproduce:
  Ensure OpenStack is installed for python3

  $ glance -d image-download 2974158b-383d-4fe6-9671-5248b9a5d07d --file
  bmc-base.qcow2 --progress

  Output:
  $ TypeError: 'IterableWithLength' object is not an iterator
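
  For context, the Python 2-only iterator protocol is the usual cause of
  this error; a minimal sketch (assumed shape, not glanceclient's actual
  class):

    class IterableWithLength(object):
        def __init__(self, iterable, length):
            self.iterable = iter(iterable)
            self.length = length

        def __len__(self):
            return self.length

        def __iter__(self):
            return self

        def next(self):        # Python 2 iterator protocol
            return next(self.iterable)

        __next__ = next        # Python 3 looks for __next__

    wrapped = IterableWithLength([b'chunk1', b'chunk2'], 2)
    print(next(wrapped))       # works on both Python 2 and 3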

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-glanceclient/+bug/1671365/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1602057] Re: [SRU] (libvirt) KeyError updating resources for some node, guest.uuid is not in BDM list

2017-05-17 Thread James Page
This bug was fixed in the package nova - 2:13.1.3-0ubuntu2~cloud0
---

 nova (2:13.1.3-0ubuntu2~cloud0) trusty-mitaka; urgency=medium
 .
   * New update for the Ubuntu Cloud Archive.
 .
 nova (2:13.1.3-0ubuntu2) xenial; urgency=medium
 .
   * Fix exception due to BDM race in get_available_resource() (LP: #1602057)
 - d/p/fix-exception-due-to-bdm-race-in-get_available_resou.patch


** Changed in: cloud-archive/mitaka
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1602057

Title:
  [SRU] (libvirt) KeyError updating resources for some node, guest.uuid
  is not in BDM list

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in Ubuntu Cloud Archive newton series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) mitaka series:
  Won't Fix
Status in OpenStack Compute (nova) newton series:
  Fix Committed
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Xenial:
  Fix Released

Bug description:
  [Impact]

  There currently exists a race condition whereby the compute
  resource_tracker periodic task polls extant instances and checks their
  BDMs; this can occur before any mappings have been created, e.g. the
  root disk mapping for new instances. This patch ensures that instances
  without any BDMs are skipped.

  [Test Case]
    * deploy Openstack Mitaka with debug logging enabled (not essential but helps)

    * create an instance

    * delete its BDMs - pastebin.ubuntu.com/24287419/

    * watch /var/log/nova/nova-compute.log on hypervisor hosting
  instance and wait for next resource_tracker tick

    * ensure that exception mentioned in LP does not occur (happens
  after "Auditing locally available compute resources for node")

  [Regression Potential]

  The resource tracker information is used by the scheduler when
  deciding which compute hosts are able to have instances scheduled to
  them. In this case the resource tracker would be skipping instances
  that would contribute to disk overcommit ratios. As such it is
  possible that the scheduler will have momentarily skewed information
  about resource consumption on that compute host until the next
  resource_tracker tick. Since the likelihood of this race condition
  occurring is hopefully slim, and provided that users have a reasonable
  frequency for the resource_tracker, the likelihood of this becoming a
  long-term problem is low, since the issue will always be corrected by
  a subsequent tick (although if the compute host in question were
  saturated, that would not be fixed until an instance was deleted or
  migrated).

  [Other]
  Note that this patch did not make it into the upstream stable/mitaka
  branch due to the stable cutoff, so the proposal is to carry it in the
  archive (indefinitely).

  

  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager [req-d5d5d486-b488-4429-bbb5-24c9f19ff2c0 - - - - -] Error updating resources for node controller.
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager Traceback (most recent call last):
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6726, in update_available_resource
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager     rt.update_available_resource(context)
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 500, in update_available_resource
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager     resources = self.driver.get_available_resource(self.nodename)
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 5728, in get_available_resource
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager     disk_over_committed = self._get_disk_over_committed_size_total()
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 7397, in _get_disk_over_committed_size_total
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager     local_instances[guest.uuid], bdms[guest.uuid])
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager KeyError: '0a5c5743-9555-4dfd-b26e-198449ebeee5'
  2016-07-12 09:54:36.021 10056 ERROR nova.compute.manager
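
  A sketch of the guard the fix applies (simplified; compute_overcommit
  is a hypothetical helper standing in for the driver's per-instance
  calculation):

    def disk_over_committed_total(guests, local_instances, bdms):
        total = 0
        for guest in guests:
            # Skip guests whose instance or BDMs are not yet known rather
            # than assuming bdms[guest.uuid] exists; the next periodic
            # tick picks them up once the mappings are created.
            if guest.uuid not in local_instances or guest.uuid not in bdms:
                continue
            total += compute_overcommit(local_instances[guest.uuid],
                                        bdms[guest.uuid])  # hypothetical
        return total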

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1602057/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1648242] Re: [SRU] Failure to retry update_ha_routers_states

2017-05-17 Thread James Page
This bug was fixed in the package neutron - 2:8.4.0-0ubuntu2~cloud0
---

 neutron (2:8.4.0-0ubuntu2~cloud0) trusty-mitaka; urgency=medium
 .
   * New update for the Ubuntu Cloud Archive.
 .
 neutron (2:8.4.0-0ubuntu2) xenial; urgency=medium
 .
   [ Edward Hope-Morley ]
   * Backport fix for Failure to retry update_ha_routers_states (LP: #1648242)
 - d/p/add-check-for-ha-state.patch
 .
   [ Chuck Short ]
   * d/neutron-common.install, d/neutron-dhcp-agent.install:
 Remove cron jobs since they will cause a race when
 using an L3 agent. The L3 agent cleans up after itself now.
 (LP: #1623664)


** Changed in: cloud-archive/mitaka
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1648242

Title:
  [SRU] Failure to retry update_ha_routers_states

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive mitaka series:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Xenial:
  Fix Released

Bug description:
  [Impact]

Mitigates the risk of incorrect ha_state being reported by the l3-agent
for HA routers in the case where the rmq connection is lost during the
update window. The fix is already in Ubuntu for O and N, but the upstream
backport just missed the Mitaka PR, hence this SRU.

  [Test Case]

* deploy Openstack Mitaka (Xenial) with l3-ha enabled and min/max
  l3-agents-per-router set to 3

* configure network, router, boot instance with floating ip and start
  pinging

* check that status is 1 agent showing active and 2 showing standby

* trigger some router failovers while rabbit server stopped e.g.

  - go to l3-agent hosting your router and do:

ip netns exec qrouter-${router} ip link set dev  down

check other units to see if ha iface has been failed over

ip netns exec qrouter-${router} ip link set dev  up
 
* ensure ping still running

* eventually all agents will be xxx/standby

* start rabbit server

* wait for correct ha_state to be set (takes a few seconds)

  [Regression Potential]

   I do not envisage any regression from this patch. One potential
   side-effect is mildly increased rmq traffic, but it should be
   negligible.

  
  

  Version: Mitaka

  While performing failover testing of L3 HA routers, we've discovered
  an issue with regard to the failure of an agent to report its state.

  In this scenario, we have a router (7629f5d7-b205-4af5-8e0e-
  a3c4d15e7677) scheduled to (3) L3 agents:

  
  +--------------------------------------+--------------------------------------------------+----------------+-------+----------+
  | id                                   | host                                             | admin_state_up | alive | ha_state |
  +--------------------------------------+--------------------------------------------------+----------------+-------+----------+
  | 4434f999-51d0-4bbb-843c-5430255d5c64 | 726404-infra03-neutron-agents-container-a8bb0b1f | True           | :-)   | active   |
  | 710e7768-df47-4bfe-917f-ca35c138209a | 726402-infra01-neutron-agents-container-fc937477 | True           | :-)   | standby  |
  | 7f0888ba-1e8a-4a36-8394-6448b8c606fb | 726403-infra02-neutron-agents-container-0338af5a | True           | :-)   | standby  |
  +--------------------------------------+--------------------------------------------------+----------------+-------+----------+

  The infra03 node was shut down completely and abruptly. The router
  transitioned to master on infra02 as indicated in these log messages:

  2016-12-06 16:15:06.457 18450 INFO neutron.agent.linux.interface [-] Device qg-d48918fa-eb already exists
  2016-12-07 15:16:51.145 18450 INFO neutron.agent.l3.ha [-] Router c8b5d5b7-ab57-4f56-9838-0900dc304af6 transitioned to master
  2016-12-07 15:16:51.811 18450 INFO eventlet.wsgi.server [-]  - - [07/Dec/2016 15:16:51] "GET / HTTP/1.1" 200 115 0.666464
  2016-12-07 15:18:29.167 18450 INFO neutron.agent.l3.ha [-] Router c8b5d5b7-ab57-4f56-9838-0900dc304af6 transitioned to backup
  2016-12-07 15:18:29.229 18450 INFO eventlet.wsgi.server [-]  - - [07/Dec/2016 15:18:29] "GET / HTTP/1.1" 200 115 0.062110
  2016-12-07 15:21:48.870 18450 INFO neutron.agent.l3.ha [-] Router 7629f5d7-b205-4af5-8e0e-a3c4d15e7677 transitioned to master
  2016-12-07 15:21:49.537 18450 INFO eventlet.wsgi.server [-]  - - [07/Dec/2016 15:21:49] "GET / HTTP/1.1" 200 115 0.667920
  2016-12-07 15:22:08.796 18450 INFO neutron.agent.l3.ha [-] Router 4676e7a5-279c-4114-8674-209f7fd5ab1a transitioned to master
  2016-12-07 15:22:09.515 18450 INFO eventlet.wsgi.server [-]  - - [07/Dec/2016 15:22:09] "GET / HTTP/1.1" 200 115 0.719848

  Traffic to/from VMs through the new master router functioned as
  expected. However, the ha_state remained 'standby':

 

[Yahoo-eng-team] [Bug 1691291] Re: Documentation for the theme preview panel should be updated

2017-05-17 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/465196
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=3cb953e1c70897d00b7d175fcd2de59385da2fdd
Submitter: Jenkins
Branch:master

commit 3cb953e1c70897d00b7d175fcd2de59385da2fdd
Author: Ying Zuo 
Date:   Tue May 16 16:20:40 2017 -0700

Update documentation for Theme Preview panel

The enabled files for the Developer dashboard and Theme Preview
panel have been moved to contrib.

Added the step for copying enabled files to the local folder.

Change-Id: I3182aafd86e1ed1ba610d4b11bed81f8a65eb0e3
Closes-bug: #1691291


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1691291

Title:
  Documentation for the theme preview panel should be updated

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The enabled files for the developer dashboard and theme preview panel
  have been moved to contrib so the documentation should be updated.

  https://docs.openstack.org/developer/horizon/topics/styling.html
  #theme-preview-page

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1691291/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1476114] Re: Launch instance failed using instances' snapshot created volume

2017-05-17 Thread Zhenyu Zheng
** Also affects: cinder
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1476114

Title:
  Launch instance failed using instances' snapshot created volume

Status in Cinder:
  New
Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  Launching an instance fails when using a volume that was created from a
  snapshot of a volume-backed instance.

  How to reproduce:

  Step 1:
  Create a volume-backed instance.

  root@zheng-dev1:/var/log/nova# nova boot --flavor 1 --boot-volume daaddb77-4257-4ccd-86f2-220b31a0ce9b --nic net-id=8744ee96-7690-43bb-89b4-fcac805557bc test1

  root@zheng-dev1:/var/log/nova# nova list
  +--------------------------------------+-------+--------+------------+-------------+--------------------------------+
  | ID                                   | Name  | Status | Task State | Power State | Networks                       |
  +--------------------------------------+-------+--------+------------+-------------+--------------------------------+
  | ef3c6074-4d38-4d7b-8d93-d0ace58d3a6a | test1 | ACTIVE | -          | Running     | public=2001:db8::6, 172.24.4.5 |
  +--------------------------------------+-------+--------+------------+-------------+--------------------------------+

  Step 2:
  Create a snapshot of this instance using nova image-create; this will create an image in glance.

  root@zheng-dev1:/var/log/nova# nova image-create ef3c6074-4d38-4d7b-8d93-d0ace58d3a6a test-image
  root@zheng-dev1:/var/log/nova# glance image-list
  +--------------------------------------+---------------------------------+-------------+------------------+----------+--------+
  | ID                                   | Name                            | Disk Format | Container Format | Size     | Status |
  +--------------------------------------+---------------------------------+-------------+------------------+----------+--------+
  | 7bdff9a3-d051-4e75-bcd3-de69dbffe063 | cirros-0.3.4-x86_64-uec         | ami         | ami              | 25165824 | active |
  | 2af2dce2-f778-4d73-b827-5281741fc1cf | cirros-0.3.4-x86_64-uec-kernel  | aki         | aki              | 4979632  | active |
  | 60ea7020-fcc1-4535-af5e-0e894a01a44a | cirros-0.3.4-x86_64-uec-ramdisk | ari         | ari              | 3740163  | active |
  | ce7b2d17-196a-4871-bc1b-9dcb184863be | test-image                      |             |                  |          | active |
  +--------------------------------------+---------------------------------+-------------+------------------+----------+--------+

  Step 3:
  Create a new volume using the previously created image.

  root@zheng-dev1:/var/log/nova# cinder create --image-id ce7b2d17-196a-4871-bc1b-9dcb184863be --name test-volume 1
  +---------------------------------------+--------------------------------------+
  | Property                              | Value                                |
  +---------------------------------------+--------------------------------------+
  | attachments                           | []                                   |
  | availability_zone                     | nova                                 |
  | bootable                              | false                                |
  | consistencygroup_id                   | None                                 |
  | created_at                            | 2015-07-20T06:44:41.00               |
  | description                           | None                                 |
  | encrypted                             | False                                |
  | id                                    | cc21dc7d-aa4b-4e24-8f11-8b916c5d6347 |
  | metadata                              | {}                                   |
  | multiattach                           | False                                |
  | name                                  | test-volume                          |
  | os-vol-host-attr:host                 | None                                 |
  | os-vol-mig-status-attr:migstat        | None                                 |
  | os-vol-mig-status-attr:name_id        | None                                 |
  | os-vol-tenant-attr:tenant_id          | b8112a8d8227490eba99419b8a8c2555     |
  | os-volume-replication:driver_data     | None                                 |
  | os-volume-replication:extended_status | None                                 |
  | replication_status                    | disabled                             |
  | size                                  | 1                                    |
  | snapshot_id                           | None                                 |
  | source_volid                          | None                                 |
  | status                                | creating

[Yahoo-eng-team] [Bug 1495429] Re: Vmware: Failed to snapshot an instance with a big root disk.

2017-05-17 Thread Jay Pipes
Going to set this to Fix Released since the patch is in both Newton and
Mitaka.

** Changed in: nova
   Status: Confirmed => Fix Released

** Changed in: oslo.vmware
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1495429

Title:
  Vmware: Failed to snapshot an instance with a big root disk.

Status in OpenStack Compute (nova):
  Fix Released
Status in oslo.vmware:
  Fix Released

Bug description:
  python-nova-2015.1.1-1.el7.noarch
  openstack-nova-common-2015.1.1-1.el7.noarch
  python-novaclient-2.23.0-1.el7.noarch
  openstack-nova-compute-2015.1.1-1.el7.noarch
  python-oslo-vmware-0.11.1-1.el7.noarch

  I can't snapshot an instance if the root disk is too large (> 8 GB).

  The snapshot in vCenter works, the OVF export works, and downloading
  the image to the glance node works, but after the download the compute
  service logs a trace and deletes the glance image.

  In vCenter I can see the OVF export time out during the upload.

  So I guess that when nova makes an OVF request (deploy or export) and
  the transfer takes too long, vCenter deletes OSTACK_IMG or OSTACK_SNAP
  and nova fails.

  Trace in compute.log ==>

  2015-09-14 10:46:00.003 10248 DEBUG oslo_vmware.api [-] Fault list: [ManagedObjectNotFound] _invoke_api /usr/lib/python2.7/site-packages/oslo_vmware/api.py:326
  2015-09-14 10:46:00.004 10248 DEBUG oslo_vmware.exceptions [-] Fault ManagedObjectNotFound not matched. get_fault_class /usr/lib/python2.7/site-packages/oslo_vmware/exceptions.py:250
  2015-09-14 10:46:00.004 10248 DEBUG nova.virt.vmwareapi.vm_util [req-5ce7f3a7-5db7-4157-b4ae-212b585b586a 3e014852e6e642d4a11600f2d453324c eb151dcad08b434ab919a47392da4c95 - - -] [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] Destroying the VM destroy_vm /usr/lib/python2.7/site-packages/nova/virt/vmwareapi/vm_util.py:1304
  2015-09-14 10:46:00.004 10248 DEBUG oslo_vmware.api [req-5ce7f3a7-5db7-4157-b4ae-212b585b586a 3e014852e6e642d4a11600f2d453324c eb151dcad08b434ab919a47392da4c95 - - -] Waiting for function _invoke_api to return. func /usr/lib/python2.7/site-packages/oslo_vmware/api.py:121
  2015-09-14 10:46:00.028 10248 DEBUG oslo_vmware.api [req-5ce7f3a7-5db7-4157-b4ae-212b585b586a 3e014852e6e642d4a11600f2d453324c eb151dcad08b434ab919a47392da4c95 - - -] Waiting for the task: (returnval){
  2015-09-14 10:46:00.028 10248 DEBUG oslo_vmware.api [-] Invoking VIM API to read info of task: (returnval){
  2015-09-14 10:46:00.029 10248 DEBUG oslo_vmware.api [-] Waiting for function _invoke_api to return. func /usr/lib/python2.7/site-packages/oslo_vmware/api.py:121
  2015-09-14 10:46:05.029 10248 DEBUG oslo_vmware.api [-] Invoking VIM API to read info of task: (returnval){
  2015-09-14 10:46:05.030 10248 DEBUG oslo_vmware.api [-] Waiting for function _invoke_api to return. func /usr/lib/python2.7/site-packages/oslo_vmware/api.py:121
  2015-09-14 10:46:05.056 10248 DEBUG oslo_vmware.api [-] Task: (returnval){
  2015-09-14 10:46:05.056 10248 INFO nova.virt.vmwareapi.vm_util [req-5ce7f3a7-5db7-4157-b4ae-212b585b586a 3e014852e6e642d4a11600f2d453324c eb151dcad08b434ab919a47392da4c95 - - -] [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] Destroyed the VM
  2015-09-14 10:46:05.056 10248 DEBUG nova.virt.vmwareapi.vmops [req-5ce7f3a7-5db7-4157-b4ae-212b585b586a 3e014852e6e642d4a11600f2d453324c eb151dcad08b434ab919a47392da4c95 - - -] [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] Deleting Snapshot of the VM instance _delete_vm_snapshot /usr/lib/python2.7/site-packages/nova/virt/vmwareapi/vmops.py:759
  2015-09-14 10:46:05.057 10248 DEBUG oslo_vmware.api [req-5ce7f3a7-5db7-4157-b4ae-212b585b586a 3e014852e6e642d4a11600f2d453324c eb151dcad08b434ab919a47392da4c95 - - -] Waiting for function _invoke_api to return. func /usr/lib/python2.7/site-packages/oslo_vmware/api.py:121
  2015-09-14 10:46:05.084 10248 DEBUG oslo_vmware.api [req-5ce7f3a7-5db7-4157-b4ae-212b585b586a 3e014852e6e642d4a11600f2d453324c eb151dcad08b434ab919a47392da4c95 - - -] Waiting for the task: (returnval){
  2015-09-14 10:46:05.084 10248 DEBUG oslo_vmware.api [-] Invoking VIM API to read info of task: (returnval){
  2015-09-14 10:46:05.085 10248 DEBUG oslo_vmware.api [-] Waiting for function _invoke_api to return. func /usr/lib/python2.7/site-packages/oslo_vmware/api.py:121
  2015-09-14 10:46:10.085 10248 DEBUG oslo_vmware.api [-] Invoking VIM API to read info of task: (returnval){
  2015-09-14 10:46:10.086 10248 DEBUG oslo_vmware.api [-] Waiting for function _invoke_api to return. func /usr/lib/python2.7/site-packages/oslo_vmware/api.py:121
  2015-09-14 10:46:10.105 10248 DEBUG oslo_vmware.api [-] Task: (returnval){
  2015-09-14 10:46:10.106 10248 DEBUG nova.virt.vmwareapi.vmops [req-5ce7f3a7-5db7-4157-b4ae-212b585b586a 3e014852e6e642d4a11600f2d453324c eb151dcad08b434ab919a47392da4c95 - - -] [instance: 5da44d78-b0cf-44f6-9789-b0fd78906b4e] Deleted Snapshot of

[Yahoo-eng-team] [Bug 1690203] Re: keystoneauth1 v3 Token object ignores the token passed in

2017-05-17 Thread prashkre
** No longer affects: keystone

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1690203

Title:
  keystoneauth1 v3 Token object ignores the token passed in

Status in keystoneauth:
  In Progress

Bug description:
  The primary problem reported in the defect is that when a
  keystoneauth1 identity Token is set in the session and a REST call is
  made, the session does not use the same token for making the call.

  from keystoneauth1 import identity
  from keystoneauth1 import session

  auth = identity.v3.Token(auth_url, token)
  s = session.Session(auth=auth, verify=False)
  resp = s.get('http://localhost:9292/v2/images', headers={'Accept': 'application/json'})

  Even though the token has been explicitly set as part of the v3.Token
  object, it is not used to make the REST call. Instead, a new unscoped
  token is generated. This new token has no roles, project, or catalog
  information, as seen below:

  {"token": {"issued_at": "2017-05-11T12:07:13.00Z", "audit_ids":
  ["_0-Hir4UTS-ATQmbiOP0Wg", "Zh4SNR-jREugwuoxGXL4wg"], "user": {"id":
  "0688b01e6439ca32d698d20789d52169126fb41fb1a4ddafcebb97d854e836c9",
  "domain": {"id": "default", "name": "Default"}, "password_expires_at":
  null, "name": "root"}, "expires_at": "2017-05-11T18:05:50.00Z",
  "methods": ["token", "password"]}}


  The flow here is:

  1. A request is made through the keystoneauth1 session object with the auth v3.Token object set.
  2. When we make a session call, control comes here:
  >> https://github.com/openstack/keystoneauth/blob/stable/ocata/keystoneauth1/session.py#L491
  >> https://github.com/openstack/keystoneauth/blob/stable/ocata/keystoneauth1/session.py#L818
  >> https://github.com/openstack/keystoneauth/blob/stable/ocata/keystoneauth1/plugin.py#L90

  The keystoneauth1.identity.v3.Token object does not have an
  implementation of get_token, so control falls back to the keystoneauth1
  identity base implementation, which is probably not even applicable to
  keystone v3.

  >> https://github.com/openstack/keystoneauth/blob/stable/ocata/keystoneauth1/identity/base.py#L90
  >> https://github.com/openstack/keystoneauth/blob/stable/ocata/keystoneauth1/identity/base.py#L135
  >> https://github.com/openstack/keystoneauth/blob/stable/ocata/keystoneauth1/identity/base.py#L92

  The above re-authentication check always returns True because it does
  not consider the token that was passed into the v3.Token object, so in
  all cases it goes on to create a new token, which is subsequently used
  to make the REST call. That happens here:
  https://github.com/openstack/keystoneauth/blob/stable/ocata/keystoneauth1/identity/v3/base.py#L112
  https://github.com/openstack/keystoneauth/blob/stable/ocata/keystoneauth1/identity/v3/base.py#L166

  3. To resolve the above problem I overrode the get_token method inside
  v3.Token to return the token that was passed in, instead of
  re-authenticating, and everything worked fine. Of course, this is more
  of a hack to confirm the diagnosis: the code below has no logic to
  check whether the token is about to expire and re-authentication is
  required, etc.

  class Token(base.AuthConstructor):
      _auth_method_class = TokenMethod
      token_new = None

      def __init__(self, auth_url, token, **kwargs):
          super(Token, self).__init__(auth_url, token=token, **kwargs)
          # Keep a reference to the token that was passed in.
          self.token_new = token

      def get_token(self, session, **kwargs):
          # Return the stored token instead of re-authenticating.
          return self.token_new
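
  As a side note, for callers that just want the session to reuse an
  exact pre-existing token, keystoneauth1 also provides a plain
  endpoint/token plugin whose get_token simply returns the stored token,
  so no re-authentication is attempted. A minimal sketch, reusing the
  endpoint and token from the example above:

  from keystoneauth1 import session
  from keystoneauth1 import token_endpoint

  # token_endpoint.Token returns the stored token from get_token(),
  # so the session never tries to re-authenticate.
  auth = token_endpoint.Token('http://localhost:9292', token)
  s = session.Session(auth=auth, verify=False)
  resp = s.get('http://localhost:9292/v2/images', headers={'Accept': 'application/json'})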

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystoneauth/+bug/1690203/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1691356] [NEW] vendor data is not properly rendered if there are multiline strings

2017-05-17 Thread Ignasi Barrera
Public bug reported:

When using the vendor data to configure a set of cloud-init directives,
if the YAML document contains multiline strings, the rendered vendor-
data.txt file has an invalid format.

According to the OpenStack vendor data format, used in the OpenStack and
ConfigDrive data sources, the vendor data file is a JSON file that might
contain a "cloud-init" attribute that is parsed using the same handlers
as the user data:
http://cloudinit.readthedocs.io/en/latest/topics/datasources/openstack.html#vendor-data

When that configuration is a "#cloud-config" YAML document with
multiline strings (for example a validation certificate commonly used in
the Chef module), the resulting vendor-data.txt is generated with
invalid values for all multiline string values.

The issue is probably in the "safeyaml" utility module, which uses
default settings to process the YAML documents:
https://git.launchpad.net/cloud-init/tree/cloudinit/safeyaml.py?id=0.7.9

But PyYAML does not handle multiline strings properly by default:
http://stackoverflow.com/questions/6432605/any-yaml-libraries-in-python-that-support-dumping-of-long-strings-as-block-liter
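
A minimal sketch of the behavior with PyYAML (illustrative data; the
representer below is a common workaround, not necessarily the right fix
for cloud-init itself):

  import yaml

  doc = {'validation_cert': '-----BEGIN CERTIFICATE-----\nMIIC...\n-----END CERTIFICATE-----\n'}

  # Default dump: the multiline string comes out as one quoted scalar
  # full of "\n" escapes instead of a readable block literal.
  print(yaml.safe_dump(doc))

  # Workaround: emit block literals ('|') for strings with newlines.
  def str_presenter(dumper, data):
      style = '|' if '\n' in data else None
      return dumper.represent_scalar('tag:yaml.org,2002:str', data, style=style)

  yaml.add_representer(str, str_presenter, Dumper=yaml.SafeDumper)
  print(yaml.safe_dump(doc))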

** Affects: cloud-init
 Importance: Undecided
 Status: New


** Tags: configdrive openstack vendordata

** Tags added: configdrive

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1691356

Title:
  vendor data is not properly rendered if there are multiline strings

Status in cloud-init:
  New

Bug description:
  When using the vendor data to configure a set of cloud-init
  directives, if the YAML document contains multiline strings, the
  rendered vendor-data.txt file has an invalid format.

  According to the OpenStack vendor data format, used in the OpenStack
  and ConfigDrive data sources, the vendor data file is a JSON file that
  might contain a "cloud-init" attribute that is parsed using the same
  handlers as the user data:
  http://cloudinit.readthedocs.io/en/latest/topics/datasources/openstack.html#vendor-data

  When that configuration is a "#cloud-config" YAML document with
  multiline strings (for example a validation certificate commonly used
  in the Chef module), the resulting vendor-data.txt is generated with
  invalid values for all multiline string values.

  The issue is probably in the "safeyaml" utility module, which uses
  default settings to process the YAML documents:
  https://git.launchpad.net/cloud-init/tree/cloudinit/safeyaml.py?id=0.7.9

  But PyYAML does not handle multiline strings properly by default:
  http://stackoverflow.com/questions/6432605/any-yaml-libraries-in-python-that-support-dumping-of-long-strings-as-block-liter

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1691356/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1673027] Re: ovsfw: no vm connectivity after nova reboot

2017-05-17 Thread IWAMOTO Toshihiro
*** This bug is a duplicate of bug 1645655 ***
https://bugs.launchpad.net/bugs/1645655

** This bug has been marked a duplicate of bug 1645655
   ovs firewall cannot handle server reboot

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1673027

Title:
  ovsfw: no vm connectivity after nova reboot

Status in neutron:
  In Progress

Bug description:
  Seen on: multinode devstack (1 controller/1 compute)

  Steps to reproduce:

  1. boot a vm, verify that it can be reached
  2. reboot the vm with nova reboot
  3. check that the vm can't be reached anymore (on different deployments 
reproducibility varied from 50% to 100%)

  The reason for the connectivity loss is that the ofport number
  corresponding to the vm's tap interface doesn't match the in_port
  number in the ovs flows generated by the firewall. I suspect a race of
  some kind between the tap interface being plugged into br-int and the
  generation of ovs flows for a new vm.

  Port numbers will match again after issuing nova shelve/unshelve.
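
  A quick way to observe the mismatch on the compute node (hypothetical
  device name; substitute the instance's tap interface):

  # ovs-vsctl get Interface <tap-device> ofport
  # ovs-ofctl dump-flows br-int | grep in_port=

  If the flows generated by the firewall reference an in_port that
  differs from the ofport reported for the tap device, traffic to the
  instance is dropped.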

  no connectivity after reboot - http://paste.openstack.org/show/602726/
  connectivity regained after shelve/unshelve - 
http://paste.openstack.org/show/602729/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1673027/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1675343] Re: vif_type='tap' fails with permission error on /dev/net/tun

2017-05-17 Thread Kevin Benton
Adding openstack-manuals so we can document the qemu.conf permissions
required.
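
For reference, a sketch of the /etc/libvirt/qemu.conf settings that are
typically involved (an assumption about what the documentation will need
to cover, not a confirmed fix):

  # /etc/libvirt/qemu.conf
  # /dev/net/tun must be in the cgroup device ACL so qemu may open it.
  cgroup_device_acl = [
      "/dev/null", "/dev/full", "/dev/zero",
      "/dev/random", "/dev/urandom",
      "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
      "/dev/rtc", "/dev/hpet", "/dev/net/tun"
  ]
  # Keep CAP_NET_ADMIN so qemu can attach to a pre-created tap device.
  clear_emulator_capabilities = 0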

** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1675343

Title:
  vif_type='tap' fails with permission error on /dev/net/tun

Status in devstack:
  In Progress
Status in openstack-manuals:
  New

Bug description:
  *On master branch.*

  I'm working on switching the Linux Bridge plugin in Neutron to return
  vif_type='tap' to Nova so we can avoid the race condition where Nova
  and the Neutron agent both try to create the network bridge, and the
  conditional logic that guesses whether or not the agent should add a
  given port to a bridge.[1]

  
  However, Nova can't seem to boot instances with that vif_type; they
  fail with errors like the following:

  libvirtError: internal error: process exited while connecting to
  monitor: 2017-03-22T16:37:48.246587Z qemu-system-x86_64: -netdev
  tap,ifname=tap2b1add98-31,script=,id=hostnet0: could not open
  /dev/net/tun: Operation not permitted

  Here is the full gate run for that error above:
  http://logs.openstack.org/50/447150/5/check/gate-tempest-dsvm-neutron-linuxbridge-ubuntu-xenial/9153647/logs/screen-n-cpu.txt.gz?level=TRACE#_2017-03-22_16_37_48_708

  
  I see https://review.openstack.org/#/c/448203/ was supposed to fix it, but it 
didn't seem to work even though 'script=' is visible in the qemu call.


  1. https://review.openstack.org/#/c/447150/

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1675343/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1481715] Re: test_list_servers_filtered_by_ip_regex is racey in the gate

2017-05-17 Thread Jay Pipes
I've seen no evidence of this occurring within the last 30 days at
least. Setting to Invalid. Feel free to re-open if you see this
happening again in the future.

** Changed in: nova
 Assignee: Eli Qiao (taget-9) => (unassigned)

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1481715

Title:
  test_list_servers_filtered_by_ip_regex is racey in the gate

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  When running in the upstream gate,
  test_list_servers_filtered_by_ip_regex is failing from time to time,
  and there isn't enough info to determine why.

  Example fail: http://logs.openstack.org/10/185910/13/gate/gate-tempest-dsvm-nova-v21-full/eaf043c//console.html#_2015-08-05_10_18_54_522

  2015-08-05 10:18:54.523 | Captured traceback:
  2015-08-05 10:18:54.523 | ~~~
  2015-08-05 10:18:54.523 | Traceback (most recent call last):
  2015-08-05 10:18:54.523 |   File "tempest/api/compute/servers/test_list_server_filters.py", line 313, in test_list_servers_filtered_by_ip_regex
  2015-08-05 10:18:54.523 |     self.assertIn(self.s3_name, map(lambda x: x['name'], servers))
  2015-08-05 10:18:54.523 |   File "/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/testtools/testcase.py", line 356, in assertIn
  2015-08-05 10:18:54.524 |     self.assertThat(haystack, Contains(needle), message)
  2015-08-05 10:18:54.524 |   File "/opt/stack/new/tempest/.tox/full/local/lib/python2.7/site-packages/testtools/testcase.py", line 435, in assertThat
  2015-08-05 10:18:54.524 |     raise mismatch_error
  2015-08-05 10:18:54.524 | testtools.matchers._impl.MismatchError: 'tempest-ListServerFiltersTestJSON-instance-605437920' not in [u'tempest-ListServerFiltersTestJSON-instance-1212139707', u'tempest-ListServerFiltersTestJSON-instance-237240309']
  2015-08-05 10:18:54.524 | 
  2015-08-05 10:18:54.524 | 
  2015-08-05 10:18:54.524 | Captured pythonlogging:
  2015-08-05 10:18:54.525 | ~~~
  2015-08-05 10:18:54.525 | 2015-08-05 09:59:53,317 7643 INFO [tempest_lib.common.rest_client] Request (ListServerFiltersTestJSON:test_list_servers_filtered_by_ip_regex): 200 GET http://127.0.0.1:8774/v2/83088e4e5eff4fd9b6fce26b64b57a91/servers/bd6dde46-63cb-47f6-83b5-91ea659fc87c 0.189s
  2015-08-05 10:18:54.525 | 2015-08-05 09:59:53,317 7643 DEBUG [tempest_lib.common.rest_client] Request - Headers: {'Accept': 'application/json', 'X-Auth-Token': '', 'Content-Type': 'application/json'}
  2015-08-05 10:18:54.525 |         Body: None
  2015-08-05 10:18:54.525 |     Response - Headers: {'connection': 'close', 'content-length': '1572', 'vary': 'X-OpenStack-Nova-API-Version', 'content-location': 'http://127.0.0.1:8774/v2/83088e4e5eff4fd9b6fce26b64b57a91/servers/bd6dde46-63cb-47f6-83b5-91ea659fc87c', 'x-compute-request-id': 'req-e3126afe-da89-47d9-82e5-5f4d31604ae3', 'date': 'Wed, 05 Aug 2015 09:59:53 GMT', 'content-type': 'application/json', 'status': '200', 'x-openstack-nova-api-version': '2.1'}
  2015-08-05 10:18:54.525 |         Body: {"server": {"status": "ACTIVE", "updated": "2015-08-05T09:59:51Z", "hostId": "f037f08820372dae376b110fd010a1ce5fe9fbf7cb31383f4ab9cbb5", "addresses": {"private": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:78:87:13", "version": 4, "addr": "10.1.0.5", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://127.0.0.1:8774/v2/83088e4e5eff4fd9b6fce26b64b57a91/servers/bd6dde46-63cb-47f6-83b5-91ea659fc87c", "rel": "self"}, {"href": "http://127.0.0.1:8774/83088e4e5eff4fd9b6fce26b64b57a91/servers/bd6dde46-63cb-47f6-83b5-91ea659fc87c", "rel": "bookmark"}], "key_name": null, "image": {"id": "8e22ebff-2714-4d46-ae34-9a7e00c6cb15", "links": [{"href": "http://127.0.0.1:8774/83088e4e5eff4fd9b6fce26b64b57a91/images/8e22ebff-2714-4d46-ae34-9a7e00c6cb15", "rel": "bookmark"}]}, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-SRV-USG:launched_at": "2015-08-05T09:59:14.00", "flavor": {"id": "42", "links": [{"href": "http://127.0.0.1:8774/83088e4e5eff4fd9b6fce26b64b57a91/flavors/42", "rel": "bookmark"}]}, "id": "bd6dde46-63cb-47f6-83b5-91ea659fc87c", "security_groups": [{"name": "default"}], "OS-SRV-USG:terminated_at": null, "OS-EXT-AZ:availability_zone": "nova", "user_id": "d23d48cc19f94db19e602df2fe5bda5b", "name": "tempest-ListServerFiltersTestJSON-instance-237240309", "created": "2015-08-05T09:59:10Z", "tenant_id": "83088e4e5eff4fd9b6fce26b64b57a91", "OS-DCF:diskConfig": "MANUAL", "os-extended-volumes:volumes_attached": [], "accessIPv4": "", "accessIPv6": "", "progress": 0, "OS-EXT-STS:power_state": 1, "config_drive": "True", "metadata": {}}}
  2015-08-05 10:18:54.526 | 2015-08-05 09:59:53,423 7643 INFO [tempest_lib.common.rest_client] 

[Yahoo-eng-team] [Bug 1675343] Re: vif_type='tap' fails with permission error on /dev/net/tun

2017-05-17 Thread Kevin Benton
** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1675343

Title:
  vif_type='tap' fails with permission error on /dev/net/tun

Status in devstack:
  In Progress
Status in OpenStack Compute (nova):
  Invalid
Status in openstack-manuals:
  New

Bug description:
  *On master branch.*

  I'm working on switching the Linux Bridge plugin in Neutron to return
  vif_type='tap' to Nova so we can avoid the race condition where Nova
  and the Neutron agent both try to create the network bridge, and the
  conditional logic that guesses whether or not the agent should add a
  given port to a bridge.[1]

  
  However, Nova can't seem to boot instances with that vif_type; they
  fail with errors like the following:

  libvirtError: internal error: process exited while connecting to
  monitor: 2017-03-22T16:37:48.246587Z qemu-system-x86_64: -netdev
  tap,ifname=tap2b1add98-31,script=,id=hostnet0: could not open
  /dev/net/tun: Operation not permitted

  Here is the full gate run for that error above:
  http://logs.openstack.org/50/447150/5/check/gate-tempest-dsvm-neutron-linuxbridge-ubuntu-xenial/9153647/logs/screen-n-cpu.txt.gz?level=TRACE#_2017-03-22_16_37_48_708

  
  I see https://review.openstack.org/#/c/448203/ was supposed to fix it, but it 
didn't seem to work even though 'script=' is visible in the qemu call.


  1. https://review.openstack.org/#/c/447150/

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1675343/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1456073] Re: Connection to an instance with floating IP breaks during block migration when using DVR

2017-05-17 Thread Jay Pipes
The fix for this was merged in Newton. Marking as Fix Released.

** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1456073

Title:
  Connection to an instance with floating IP breaks during block
  migration when using DVR

Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) mitaka series:
  New

Bug description:
  During migration of an instance, using block migration with a floating
  IP when the router is DVR, the connection to the instance breaks (e.g.
  an open SSH connection to the instance).
  Reconnecting to the instance afterwards succeeds.

  Version
  ==
  RHEL 7.1
  python-nova-2015.1.0-3.el7ost.noarch
  python-neutron-2015.1.0-1.el7ost.noarch

  How to reproduce
  ==
  1. Create a distributed router and attach an internal and an external network to it.
  # neutron router-create --distributed True router1
  # neutron router-interface-add router1 <subnet-id>
  # neutron router-gateway-set router1 <external-network-id>

  2. Launch an instance and associate it with a floating IP.
  # nova boot --flavor m1.small --image fedora --nic net-id=<net-id> vm1

  3. SSH into the instance which will be migrated and run the command
  "while true; do echo "Hello"; sleep 1; done"

  4. Migrate the instance using block migration:
  # nova live-migration --block-migrate <vm-id>

  5. Verify that the connection to the instance is lost.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1456073/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1691340] [NEW] create default network show wrong

2017-05-17 Thread Yan Songming
Public bug reported:

When I create a network with --default --internal, the response shows
"is_default | True".
When I then show the network, is_default is in fact None.

[root@localhost auto_allocate]# openstack network create ysm_test --default --internal
+---+--+
| Field | Value|
+---+--+
| admin_state_up| UP   |
| availability_zone_hints   |  |
| availability_zones|  |
| created_at| 2017-05-17T05:43:55Z |
| description   |  |
| dns_domain| None |
| id| d508fafa-25c7-4bd8-bfc4-25903f79aa53 |
| ipv4_address_scope| None |
| ipv6_address_scope| None |
| is_default| True |
| mtu   | 1450 |
| name  | ysm_test |
| port_security_enabled | True |
| project_id| bca504c769234d4db32e05142428fd64 |
| provider:network_type | vxlan|
| provider:physical_network | None |
| provider:segmentation_id  | 37   |
| qos_policy_id | None |
| revision_number   | 3|
| router:external   | Internal |
| segments  | None |
| shared| False|
| status| ACTIVE   |
| subnets   |  |
| updated_at| 2017-05-17T05:43:55Z |
+---+--+
[root@localhost auto_allocate]# openstack network show d508fafa-25c7-4bd8-bfc4-25903f79aa53
+---+--+
| Field | Value|
+---+--+
| admin_state_up| UP   |
| availability_zone_hints   |  |
| availability_zones|  |
| created_at| 2017-05-17T05:43:55Z |
| description   |  |
| dns_domain| None |
| id| d508fafa-25c7-4bd8-bfc4-25903f79aa53 |
| ipv4_address_scope| None |
| ipv6_address_scope| None |
| is_default| None |
| mtu   | 1450 |
| name  | ysm_test |
| port_security_enabled | True |
| project_id| bca504c769234d4db32e05142428fd64 |
| provider:network_type | vxlan|
| provider:physical_network | None |
| provider:segmentation_id  | 37   |
| qos_policy_id | None |
| revision_number   | 3|
| router:external   | Internal |
| segments  | None |
| shared| False|
| status| ACTIVE   |
| subnets   |  |
| updated_at| 2017-05-17T05:43:55Z |
+---+--+
[root@localhost auto_allocate]#

** Affects: neutron
 Importance: Undecided
 Assignee: Yan Songming (songmingyan)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Yan Songming (songmingyan)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1691340

Title:
  create default network show wrong

Status in neutron:
  New

Bug description:
  When I create a network with --default --internal, the response shows
  "is_default | True".
  When I then show the network, is_default is in fact None.

  [root@localhost auto_allocate]#