[Yahoo-eng-team] [Bug 1708580] [NEW] ovsfw ignores port_ranges under some conditions

2017-08-03 Thread IWAMOTO Toshihiro
Public bug reported:

ovsfw ignores port_ranges when protocol is not literal udp or tcp.
sctp and numeric protocol values don't work and result in too permissive 
filtering.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovs-fw sg-fw

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1708580

Title:
  ovsfw ignores port_ranges under some conditions

Status in neutron:
  New

Bug description:
  ovsfw ignores port_ranges when protocol is not literal udp or tcp.
  sctp and numeric protocol values don't work and result in too permissive 
filtering.
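The failure mode can be sketched as follows (a minimal illustration with hypothetical names, not the actual neutron ovsfw code):

```python
# Hedged sketch of the bug pattern: only the literal strings
# 'tcp'/'udp' get a port match, so 'sctp' or a numeric protocol value
# like '6' silently loses its port range and matches every port.

def build_match(rule):
    """Build an OpenFlow-style match dict for a security group rule."""
    match = {'proto': rule['protocol']}
    if rule['protocol'] in ('tcp', 'udp'):
        match['tp_dst'] = (rule['port_range_min'], rule['port_range_max'])
    return match

# A tcp rule keeps its port range; an sctp rule drops it.
tcp_match = build_match({'protocol': 'tcp', 'port_range_min': 80,
                         'port_range_max': 80})
sctp_match = build_match({'protocol': 'sctp', 'port_range_min': 80,
                          'port_range_max': 80})
```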

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1708580/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1695861] Re: Invalid availability zone name with ':' is accepted

2017-08-03 Thread Matt Riedemann
** Changed in: nova
   Importance: Undecided => Medium

** Also affects: nova/ocata
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1695861

Title:
  Invalid availability zone name with ':' is accepted

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) ocata series:
  New

Bug description:
  According to the parse_availability_zone() of the API class [1], Nova
  has a legacy hack to allow admins to specify hosts via an availability
  zone using az:host:node. That means ':' cannot be included in the name
  of an availability zone itself. However, the create aggregate API
  accepts requests which have availability zone names including ':'.
  That causes the following bad scenario:

  1. An admin creates a host aggregate with availability_zone = bad:name:example
  2. An admin tries to create a server with availability_zone = bad:name:example
  3. The nova-api parses the request and splits the availability_zone value on 
':'
  4. Then it recognizes az=bad, host=name, node=example
  5. Nova returns 'No valid host found' because there is no availability zone 
whose name is 'bad'.
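The scenario above can be sketched with a minimal model of the legacy parsing (an illustration, not nova's actual implementation):

```python
# Hedged sketch of the legacy az[:host[:node]] hack: any ':' in the
# availability zone name itself is misread as a host/node separator.

def parse_availability_zone(az_hint):
    parts = az_hint.split(':')
    az = parts[0] or None
    host = parts[1] if len(parts) > 1 else None
    node = parts[2] if len(parts) > 2 else None
    return az, host, node

# An aggregate AZ named 'bad:name:example' is misread as az/host/node:
print(parse_availability_zone('bad:name:example'))  # ('bad', 'name', 'example')
```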

  To solve this problem, one of the following fixes is needed:

  Option A:
  * Do not allow admins to create a host aggregate whose availability_zone name 
includes ':'.
  * Document this specification.

  Option B:
  * Deprecate the legacy admin hack which uses az:host:node and allow ':' in 
AZ names.

  [1]
  
https://review.openstack.org/gitweb?p=openstack/nova.git;a=blob;f=nova/compute/api.py;h=46ed8e91fcc16f3755fd6a5e2e4a6d54f990cb8b;hb=HEAD#l561

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1695861/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1705351] Re: agent notifier getting amqp NotFound exceptions propagated up to it

2017-08-03 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/490305
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=0b5750ca518314a10266f25cb09997b7a859ab1e
Submitter: Jenkins
Branch: master

commit 0b5750ca518314a10266f25cb09997b7a859ab1e
Author: Kevin Benton 
Date:   Wed Aug 2 20:14:10 2017 -0700

Sinkhole workaround for old topics

This patch introduces a workaround on the Neutron side
for bug 1705351 that results in the Neutron server blocking
trying to send to topics the agents no longer subscribe to.

The workaround is to just subscribe to those topics and do
nothing with the messages that come in until oslo.messaging
can properly recover from loss of listeners.

Change-Id: I946a33dfd0c0da26bb47b524f75f53bf59d3fbd5
Closes-Bug: #1705351


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1705351

Title:
  agent notifier getting amqp NotFound exceptions propagated up to it

Status in neutron:
  Fix Released
Status in oslo.messaging:
  In Progress

Bug description:
  There appears to be a scenario where amqp NotFound exceptions are
  being propagated up through oslo messaging to the Neutron RPC client
  attempting to perform a cast.

  Since this appears in a grenade job, my guess is that it's due to the
  agents no longer listening on that particular exchange after they have
  been upgraded (the "port-delete" exchange is no longer used by agents
  in master).

  In neutron we probably need to ignore these exceptions on casts since
  it just means there are no longer agents interested in listening to
  the casts. However, this also seems like it may be an oslo messaging
  bug because in order for us to catch and ignore this in Neutron we
  would need to import AMQP directly for the exception. Directly
  depending on AMQP imports breaks the oslo.messaging abstraction.
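The sinkhole workaround can be modeled with a toy messaging layer (class and topic names are illustrative; the real fix uses oslo.messaging listeners):

```python
# Hedged sketch: keep a subscriber on the legacy topic that accepts
# any method and discards it, so casts to the old exchange no longer
# surface a NotFound error.

class Sinkhole:
    """Accepts any RPC method call and discards it."""
    def __getattr__(self, name):
        return lambda *args, **kwargs: None

class Bus:
    """Stand-in for the messaging layer."""
    def __init__(self):
        self.subscribers = {}

    def cast(self, topic, method, **kwargs):
        if topic not in self.subscribers:
            # Models the amqp NotFound: no exchange for the topic.
            raise LookupError('no exchange %r' % topic)
        getattr(self.subscribers[topic], method)(**kwargs)

bus = Bus()
bus.subscribers['q-agent-notifier-port-delete'] = Sinkhole()
# With the sinkhole subscribed, this cast completes silently.
bus.cast('q-agent-notifier-port-delete', 'port_delete', port_id='p1')
```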

  From http://logs.openstack.org/85/480185/4/check/gate-grenade-dsvm-
  ironic-inspector-ubuntu-
  
xenial/6ef1e75/logs/new/screen-q-svc.txt.gz?level=ERROR#_2017-07-19_12_53_37_858
  :

  
  2017-07-19 12:53:37.858 6397 ERROR neutron.pecan_wsgi.hooks.translation 
[req-8d272d85-8fc0-4aeb-9180-2bf92cfc67cc tempest-NetworksTest-1317585658 
tempest-NetworksTest-1317585658] DELETE failed.: NotFound: Basic.publish: (404) 
NOT_FOUND - no exchange 'q-agent-notifier-port-delete_fanout' in vhost '/'
  2017-07-19 12:53:37.858 6397 ERROR neutron.pecan_wsgi.hooks.translation 
Traceback (most recent call last):
  2017-07-19 12:53:37.858 6397 ERROR neutron.pecan_wsgi.hooks.translation   
File "/usr/local/lib/python2.7/dist-packages/pecan/core.py", line 683, in 
__call__
  2017-07-19 12:53:37.858 6397 ERROR neutron.pecan_wsgi.hooks.translation 
self.invoke_controller(controller, args, kwargs, state)
  2017-07-19 12:53:37.858 6397 ERROR neutron.pecan_wsgi.hooks.translation   
File "/usr/local/lib/python2.7/dist-packages/pecan/core.py", line 574, in 
invoke_controller
  2017-07-19 12:53:37.858 6397 ERROR neutron.pecan_wsgi.hooks.translation 
result = controller(*args, **kwargs)
  2017-07-19 12:53:37.858 6397 ERROR neutron.pecan_wsgi.hooks.translation   
File "/opt/stack/new/neutron/neutron/db/api.py", line 94, in wrapped
  2017-07-19 12:53:37.858 6397 ERROR neutron.pecan_wsgi.hooks.translation 
setattr(e, '_RETRY_EXCEEDED', True)
  2017-07-19 12:53:37.858 6397 ERROR neutron.pecan_wsgi.hooks.translation   
File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, 
in __exit__
  2017-07-19 12:53:37.858 6397 ERROR neutron.pecan_wsgi.hooks.translation 
self.force_reraise()
  2017-07-19 12:53:37.858 6397 ERROR neutron.pecan_wsgi.hooks.translation   
File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, 
in force_reraise
  2017-07-19 12:53:37.858 6397 ERROR neutron.pecan_wsgi.hooks.translation 
six.reraise(self.type_, self.value, self.tb)
  2017-07-19 12:53:37.858 6397 ERROR neutron.pecan_wsgi.hooks.translation   
File "/opt/stack/new/neutron/neutron/db/api.py", line 90, in wrapped
  2017-07-19 12:53:37.858 6397 ERROR neutron.pecan_wsgi.hooks.translation 
return f(*args, **kwargs)
  2017-07-19 12:53:37.858 6397 ERROR neutron.pecan_wsgi.hooks.translation   
File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 150, in 
wrapper
  2017-07-19 12:53:37.858 6397 ERROR neutron.pecan_wsgi.hooks.translation 
ectxt.value = e.inner_exc
  2017-07-19 12:53:37.858 6397 ERROR neutron.pecan_wsgi.hooks.translation   
File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, 
in __exit__
  2017-07-19 12:53:37.858 6397 ERROR neutron.pecan_wsgi.hooks.translation 
self.force_reraise()
  2017-07-19 12:53:37.858 6397 ERROR neutron.pecan_wsgi.hooks.translation   
File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", 

[Yahoo-eng-team] [Bug 1691772] Re: provide a way to seed NoCloud from network without image modification.

2017-08-03 Thread Brian Murray
** Changed in: cloud-init
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1691772

Title:
  provide a way to seed NoCloud from network without image modification.

Status in cloud-init:
  Fix Released
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  Confirmed
Status in cloud-init source package in Zesty:
  Confirmed

Bug description:
  === Begin SRU Template ===
  [Impact]
  In bug 1660385, we made imitating the EC2 datasource more difficult.
  By design, that broke some users or platforms who have done so in the past.

  The change here gives users who were using the Ubuntu images in a low-tech
  "No Cloud" fashion an easier way to regain that functionality.

  The solution was to read the 'system-serial-number' field in DMI data and
  consider it as input to the nocloud datasource in a similar way to
  what we had done in the past with the kernel command line.
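The DMI-serial parsing can be sketched roughly as below (a hedged illustration; the real cloud-init parser differs in details):

```python
# Hedged sketch of reading the NoCloud seed from the DMI
# system-serial-number field, e.g. 'ds=nocloud-net;seedfrom=URL'.

def parse_nocloud_serial(serial):
    """Return a config dict for matching serials, else None."""
    if not serial.startswith('ds=nocloud'):
        # Not intended for cloud-init; take no action.
        return None
    head, _, rest = serial.partition(';')
    cfg = {'dsname': head.split('=', 1)[1]}  # 'nocloud' or 'nocloud-net'
    for token in rest.split(';'):
        if '=' in token:
            key, _, value = token.partition('=')
            cfg[key] = value
    return cfg
```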

  [Test Case]
  a.) download a cloud image, update its cloud-init

 # see below for 'get-proposed-cloudimg'
 $ release=xenial
 $ get-proposed-cloudimg $release

  b.) boot that image with command line pointing at a 'seed'

 $ img=${release}-server-cloudimg-amd64-proposed.img
 # url has to provide '/user-data' and '/meta-data'
 $ 
url=https://git.launchpad.net/~smoser/cloud-init/+git/sru-info/plain/bugs/lp-1691772/
 $ qemu-system-x86_64 -snapshot -enable-kvm -m 512 \
-device virtio-net-pci,netdev=net00 -netdev type=user,id=net00 \
-drive "file=$img,if=virtio" \
-smbios "type=1,serial=ds=nocloud-net;seedfrom=$url" \
-nographic

 # note,  you can hit 'ctrl-a c' to toggle between the qemu monitor
 # and the serial console in '-nographic' mode.

  c.) Log in with 'ubuntu:passw0rd' and check hostname.
 If the above url was correctly used, then:
   * you can log in with 'ubuntu:passw0rd'
   * the hostname will be set to 'nocloud-guest'
   * /run/cloud-init/result.json will show that the url has been used.

 ubuntu@nocloud-guest:~$ hostname
 nocloud-guest
 ubuntu@nocloud-guest$ cat /run/cloud-init/result.json
 {
  "v1": {
   "datasource": "DataSourceNoCloudNet [seed=dmi,https://git.launchpad.net/~smoser/cloud-init/+git/sru-info/plain/bugs/lp-1691772/][dsmode=net]",
   "errors": []
  }
 }

  [Regression Potential]
  The code attempts to parse the 'system-serial-number' entry in dmi data as a
  string with data in it.  If that field had the string 'ds=nocloud' that was
  not intended as consumable for cloud-init, a false positive could occur and
  an exception cause the NoCloud datasource to not read data from another
  location.

  This seems somewhat unlikely and other paths should result in simply no
  new action being taken.

  [Other Info]
  Upstream commit at
https://git.launchpad.net/cloud-init/commit/?id=802e7cb2da8

  get-proposed-cloudimg is available at [1], it basically downloads an
  ubuntu cloud image, enables -proposed and upgrade/installs cloud-init.

  --
  [1] 
https://git.launchpad.net/~smoser/cloud-init/+git/sru-info/tree/bin/get-proposed-cloudimg

  === End SRU Template ===

  
  Vladimir suggested this in bug 1660385 comment 12 [1].
  The idea would be to have a supported way that you could seed images with 
cloud-init using Nocloud without any tinkering in the image.  That would mean
   a.) no second block device
   b.) no usage of kernel command line.

  --
  [1] https://bugs.launchpad.net/cloud-init/+bug/1660385/comments/12

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1691772/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1692916] Re: Cloudinit modules should provide schema validation to better alert consumers to unsupported config

2017-08-03 Thread Brian Murray
** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1692916

Title:
  Cloudinit modules should provide schema validation to better alert
  consumers to unsupported config

Status in cloud-init:
  Fix Released
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  New
Status in cloud-init source package in Zesty:
  New

Bug description:
  === Begin SRU Template ===
  [Impact]
  New feature to validate and log invalid schema warnings from cc_ntp 
cloud-config module.

  [Test Case]
  if [ ! -f lxc-proposed-snapshot ]; then
    wget 
https://git.launchpad.net/~smoser/cloud-init/+git/sru-info/plain/bin/lxc-proposed-snapshot
    chmod 755 lxc-proposed-snapshot
  fi
  cat > 1.conf <<EOF
  #cloud-config
  ntp:
  EOF
  for release in xenial zesty; do
  ref=$release-proposed;
  echo "$release START --";
  ./lxc-proposed-snapshot --proposed --publish $release $ref;
  lxc start test-$release;
  lxc file push 1.conf test-$release/1.conf
  lxc exec test-$release -- python3 
/usr/lib/python3/dist-packages/cloudinit/config/schema.py -c /1.conf | grep 
Valid
  lxc exec test-$release -- apt-cache depends cloud-init | grep 
jsonschema  # should be empty
  done

  [Regression Potential]
  We don't want to introduce a mandatory jsonschema dependency in older series.
  Validate that older releases can run without errors when jsonschema is *not* 
installed.

  [Other Info]
  Upstream commit at
    https://git.launchpad.net/cloud-init/commit/?id=0a448dd034

  === End SRU Template ===

  cloudinit needs a mechanism to parse and validate a strict schema
  definition for modules that parse user created #cloud-config yaml
  files.
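The "optional dependency" approach from the SRU template can be sketched as follows (a hedged illustration; the schema below is an invented subset, not the real cc_ntp schema):

```python
# Hedged sketch: validate only when jsonschema is importable, so
# older series without the dependency keep working unchanged.
try:
    import jsonschema
    HAVE_JSONSCHEMA = True
except ImportError:
    HAVE_JSONSCHEMA = False

NTP_SCHEMA = {
    'type': 'object',
    'properties': {'ntp': {'type': ['object', 'null']}},
}

def validate_cloudconfig(config, schema=NTP_SCHEMA):
    """Return a list of schema warnings (empty when valid, or when
    jsonschema is not installed)."""
    if not HAVE_JSONSCHEMA:
        return []
    validator = jsonschema.Draft4Validator(schema)
    return [error.message for error in validator.iter_errors(config)]

print(validate_cloudconfig({'ntp': None}))  # [] -> bare 'ntp:' is valid
```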

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1692916/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1691489] Re: fstab entries written by cloud-config may not be mounted

2017-08-03 Thread Brian Murray
** Changed in: cloud-init
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1691489

Title:
  fstab entries written by cloud-config may not be mounted

Status in cloud-init:
  Fix Released
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  Confirmed
Status in cloud-init source package in Yakkety:
  Confirmed
Status in cloud-init source package in Zesty:
  Confirmed
Status in cloud-init source package in Artful:
  Fix Released

Bug description:
  As reported in bug 1686514, sometimes /mnt will not get mounted when
  re-deploying or stopping-then-starting an Azure VM of size L32S.  This is
  probably a more generic issue, I suspect shown due to the speed of
  disks on these systems.

  
  Related bugs:
   * bug 1686514: Azure: cloud-init does not handle reformatting GPT partition 
ephemeral disks

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1691489/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1701097] Re: eni rendering of ipv6 gateways fails

2017-08-03 Thread Brian Murray
** Changed in: cloud-init
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1701097

Title:
  eni rendering of ipv6 gateways fails

Status in cloud-init:
  Fix Released
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  Confirmed
Status in cloud-init source package in Zesty:
  Confirmed
Status in cloud-init source package in Artful:
  Fix Released

Bug description:
  cloud-init trunk and xenial, yakkety, zesty and artful all fail

  A network config with a ipv6 gateway route like:

  
  subnets:
- type: static
  address: 2001:4800:78ff:1b:be76:4eff:fe06:96b3
  netmask: 'ffff:ffff:ffff:ffff::'
  routes:
- gateway: 2001:4800:78ff:1b::1
  netmask: '::'
  network: '::'

  For eni rendering, this should create post-up/pre-down route
  commands that generate a default ipv6 route entry, like this:

  post-up route add -A inet6 default gw 2001:4800:78ff:1b::1 || true
  pre-down route del -A inet6 default gw 2001:4800:78ff:1b::1 || true

  However, what is currently generated is this:

  post-up route add -net :: netmask :: gw 2001:4800:78ff:1b::1 || true
  pre-down route del -net :: netmask :: gw 2001:4800:78ff:1b::1 || true

  That does not install the route correctly as a default gateway route.

  This is fallout from commit d00da2d5b0d45db5670622a66d833d2abb907388 
  net: normalize data in network_state object

  This commit removed ipv6 route 'netmask' values, and converted them to
  prefix length values, but failed to update the eni renderer's check for
  ipv6 default gateway.
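The needed renderer change can be sketched roughly as below (function names are illustrative, not the actual cloud-init eni renderer):

```python
# Hedged sketch: after the normalization commit, ipv6 routes carry a
# prefix length instead of a netmask, so the default-gateway check
# must accept both the prefix-0 form and the legacy all-zero netmask.

def is_default_route(route):
    return (route.get('prefix') == 0
            or route.get('netmask') in ('::', '0.0.0.0')
            or route.get('network') in ('::', '0.0.0.0'))

def render_route(route):
    if is_default_route(route):
        return 'post-up route add -A inet6 default gw %s || true' % (
            route['gateway'],)
    return 'post-up route add -net %s netmask %s gw %s || true' % (
        route['network'], route['netmask'], route['gateway'])
```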

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1701097/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1708560] [NEW] Relnotes for Pike contains an entry from a past release

2017-08-03 Thread Akihiro Motoki
Public bug reported:

https://docs.openstack.org/releasenotes/neutron/unreleased.html contains
an entry "vlan-aware-vms" but this is a feature from Ocata. This
regression happened because we touched a release note from a past
release in the master branch. It happened during URL update in the doc-
migration work. Release notes from past releases should not be touched.

The solution for this is to use 'ignore-notes' directive option from reno. This 
is available since reno 2.5.0.
Note that reverting the patch does not solve the problem as reno still 
considers a relnote updated in the master branch.

This bug tracker is for errors with the documentation, use the following
as a template and remove or add fields as you see fit. Convert [ ] into
[x] to check boxes:

- [X] This doc is inaccurate in this way: __
- [ ] This is a doc addition request.
- [ ] I have a fix to the document that I can paste below including example: 
input and output. 

If you have a troubleshooting or support issue, use the following
resources:

 - Ask OpenStack: http://ask.openstack.org
 - The mailing list: http://lists.openstack.org
 - IRC: 'openstack' channel on Freenode

---
Release: 11.0.0.0b4.dev37 on 2017-08-03 22:11
SHA: 8f79a92e45c84819cfc4d1cda99c9320ae78bbaf
Source: 
https://git.openstack.org/cgit/openstack/neutron/tree/doc/source/unreleased.rst
URL: https://docs.openstack.org/releasenotes/neutron/unreleased.html

** Affects: neutron
 Importance: High
 Assignee: Akihiro Motoki (amotoki)
 Status: New


** Tags: doc

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1708560

Title:
  Relnotes for Pike contains an entry from a past release

Status in neutron:
  New

Bug description:
  https://docs.openstack.org/releasenotes/neutron/unreleased.html
  contains an entry "vlan-aware-vms" but this is a feature from Ocata.
  This regression happened because we touched a release note from a past
  release in the master branch. It happened during URL update in the
  doc-migration work. Release notes from past releases should not be
  touched.

  The solution for this is to use 'ignore-notes' directive option from reno. 
This is available since reno 2.5.0.
  Note that reverting the patch does not solve the problem as reno still 
considers a relnote updated in the master branch.
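  The fix might look roughly like this in the release-notes source (a hedged sketch assuming reno >= 2.5.0; the note filename below is a placeholder, not the actual file):

```rst
.. release-notes::
   :branch: origin/master
   :ignore-notes: <note-sha>-vlan-aware-vms.yaml
```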

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [X] This doc is inaccurate in this way: __
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: 
input and output. 

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release: 11.0.0.0b4.dev37 on 2017-08-03 22:11
  SHA: 8f79a92e45c84819cfc4d1cda99c9320ae78bbaf
  Source: 
https://git.openstack.org/cgit/openstack/neutron/tree/doc/source/unreleased.rst
  URL: https://docs.openstack.org/releasenotes/neutron/unreleased.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1708560/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1708548] Re: yahoo support number 1-866-476-3146 yahoo phone number

2017-08-03 Thread William Grant
** Project changed: nova => null-and-void

** Information type changed from Public to Private

** Changed in: null-and-void
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1708548

Title:
  yahoo support number 1-866-476-3146 yahoo phone number

Status in NULL Project:
  Invalid

Bug description:
  Yahoo  Phone Number 1-866-476-3146 Yahoo  Support Number
  Yahoo  Support
  Yahoo  Support Number
  Yahoo  Support Phone Number
  Yahoo  Customer Support
  Yahoo  Customer Support Number
  Yahoo  Customer Support Phone Number
  Yahoo  Customer Service
  Yahoo  Customer Service Number
  Yahoo  Customer Service Phone Number
  Yahoo  Number
  Yahoo  Phone Number
  Yahoo  Support
  Yahoo  Support Number
  Yahoo  Support phone Number
  Yahoo  Customer Support
  Yahoo  Customer Support Number
  Yahoo  Customer Support Phone Number
  Yahoo  Customer Service
  Yahoo  Customer Service Number
  Yahoo  Customer Service
  Yahoo   Number
  Yahoo   Phone Number
  Yahoo   Support
  Yahoo   Support Number
  Yahoo   Support Phone Number
  Yahoo   Customer Support
  Yahoo   Customer Support Number
  Yahoo   Customer Support Phone Number
  Yahoo   Customer Service
  Yahoo   Customer Service Number
  Yahoo   Customer Service Phone Number
  Yahoo  Number
  Yahoo  Phone Number
  Yahoo  Support
  Yahoo  Support Number
  Yahoo  Support Phone Number
  Yahoo  Customer Support
  Yahoo  Customer Support Number
  Yahoo  Customer Service
  Yahoo  Customer Service Number
  Yahoo  Customer Service Phone Number
  Yahoo  Support Number
  Yahoo  Support
  Yahoo  Number
  Yahoo  Customer Support
  Yahoo  Customer Support Number
  Yahoo  Customer Support phone Number
  Yahoo  Customer Service
  Yahoo  Customer Service Number
  Yahoo  Customer Service phone Number
  Yahoo   Support
  Yahoo  Support Number
  Yahoo  Yahoo   Support Phone Number
  Yahoo  Customer Support
  Yahoo   Customer Support Number
  Yahoo  Customer Support Phone Number
  Yahoo   Customer Service
  Yahoo  Yahoo   Customer Service Number
  Yahoo  Yahoo   Customer Service Phone Number
  Yahoo  Number
  Yahoo  Phone Number
  Yahoo  Support
  Yahoo  Support Number
  Yahoo  Support Phone Number
  Yahoo  Customer Support
  Yahoo  Customer Support Number
  Yahoo  Customer Support Phone Number
  Yahoo  Customer Service
  Yahoo  Customer Service Number

To manage notifications about this bug go to:
https://bugs.launchpad.net/null-and-void/+bug/1708548/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1708548] [NEW] yahoo support number 1-866-476-3146 yahoo phone number

2017-08-03 Thread sdfsdfgdgdr
Private bug reported:

Yahoo  Phone Number 1-866-476-3146 Yahoo  Support Number
Yahoo  Support
Yahoo  Support Number
Yahoo  Support Phone Number
Yahoo  Customer Support
Yahoo  Customer Support Number
Yahoo  Customer Support Phone Number
Yahoo  Customer Service
Yahoo  Customer Service Number
Yahoo  Customer Service Phone Number
Yahoo  Number
Yahoo  Phone Number
Yahoo  Support
Yahoo  Support Number
Yahoo  Support phone Number
Yahoo  Customer Support
Yahoo  Customer Support Number
Yahoo  Customer Support Phone Number
Yahoo  Customer Service
Yahoo  Customer Service Number
Yahoo  Customer Service
Yahoo   Number
Yahoo   Phone Number
Yahoo   Support
Yahoo   Support Number
Yahoo   Support Phone Number
Yahoo   Customer Support
Yahoo   Customer Support Number
Yahoo   Customer Support Phone Number
Yahoo   Customer Service
Yahoo   Customer Service Number
Yahoo   Customer Service Phone Number
Yahoo  Number
Yahoo  Phone Number
Yahoo  Support
Yahoo  Support Number
Yahoo  Support Phone Number
Yahoo  Customer Support
Yahoo  Customer Support Number
Yahoo  Customer Service
Yahoo  Customer Service Number
Yahoo  Customer Service Phone Number
Yahoo  Support Number
Yahoo  Support
Yahoo  Number
Yahoo  Customer Support
Yahoo  Customer Support Number
Yahoo  Customer Support phone Number
Yahoo  Customer Service
Yahoo  Customer Service Number
Yahoo  Customer Service phone Number
Yahoo   Support
Yahoo  Support Number
Yahoo  Yahoo   Support Phone Number
Yahoo  Customer Support
Yahoo   Customer Support Number
Yahoo  Customer Support Phone Number
Yahoo   Customer Service
Yahoo  Yahoo   Customer Service Number
Yahoo  Yahoo   Customer Service Phone Number
Yahoo  Number
Yahoo  Phone Number
Yahoo  Support
Yahoo  Support Number
Yahoo  Support Phone Number
Yahoo  Customer Support
Yahoo  Customer Support Number
Yahoo  Customer Support Phone Number
Yahoo  Customer Service
Yahoo  Customer Service Number

** Affects: null-and-void
 Importance: Undecided
 Status: Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1708548

Title:
  yahoo support number 1-866-476-3146 yahoo phone number

Status in NULL Project:
  Invalid

Bug description:
  Yahoo  Phone Number 1-866-476-3146 Yahoo  Support Number
  Yahoo  Support
  Yahoo  Support Number
  Yahoo  Support Phone Number
  Yahoo  Customer Support
  Yahoo  Customer Support Number
  Yahoo  Customer Support Phone Number
  Yahoo  Customer Service
  Yahoo  Customer Service Number
  Yahoo  Customer Service Phone Number
  Yahoo  Number
  Yahoo  Phone Number
  Yahoo  Support
  Yahoo  Support Number
  Yahoo  Support phone Number
  Yahoo  Customer Support
  Yahoo  Customer Support Number
  Yahoo  Customer Support Phone Number
  Yahoo  Customer Service
  Yahoo  Customer Service Number
  Yahoo  Customer Service
  Yahoo   Number
  Yahoo   Phone Number
  Yahoo   Support
  Yahoo   Support Number
  Yahoo   Support Phone Number
  Yahoo   Customer Support
  Yahoo   Customer Support Number
  Yahoo   Customer Support Phone Number
  Yahoo   Customer Service
  Yahoo   Customer Service Number
  Yahoo   Customer Service Phone Number
  Yahoo  Number
  Yahoo  Phone Number
  Yahoo  Support
  Yahoo  Support Number
  Yahoo  Support Phone Number
  Yahoo  Customer Support
  Yahoo  Customer Support Number
  Yahoo  Customer Service
  Yahoo  Customer Service Number
  Yahoo  Customer Service Phone Number
  Yahoo  Support Number
  Yahoo  Support
  Yahoo  Number
  Yahoo  Customer Support
  Yahoo  Customer Support Number
  Yahoo  Customer Support phone Number
  Yahoo  Customer Service
  Yahoo  Customer Service Number
  Yahoo  Customer Service phone Number
  Yahoo   Support
  Yahoo  Support Number
  Yahoo  Yahoo   Support Phone Number
  Yahoo  Customer Support
  Yahoo   Customer Support Number
  Yahoo  Customer Support Phone Number
  Yahoo   Customer Service
  Yahoo  Yahoo   Customer Service Number
  Yahoo  Yahoo   Customer Service Phone Number
  Yahoo  Number
  Yahoo  Phone Number
  Yahoo  Support
  Yahoo  Support Number
  Yahoo  Support Phone Number
  Yahoo  Customer Support
  Yahoo  Customer Support Number
  Yahoo  Customer Support Phone Number
  Yahoo  Customer Service
  Yahoo  Customer Service Number

To manage notifications about this bug go to:
https://bugs.launchpad.net/null-and-void/+bug/1708548/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1708079] Re: rbac creation for network should bump revision

2017-08-03 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/489822
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=32814bb39e4f7738d460035d30cc8b3c0d9667e4
Submitter: Jenkins
Branch: master

commit 32814bb39e4f7738d460035d30cc8b3c0d9667e4
Author: Kevin Benton 
Date:   Tue Aug 1 18:21:11 2017 -0700

Bump network rev on RBAC change

Increment the revision number when RBAC policies are
changed since it impacts the calculation of the 'shared'
field.

Closes-Bug: #1708079
Change-Id: I4c7eeff8745eff3761d54ef6d3665cf3dc6e6222


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1708079

Title:
  rbac creation for network should bump revision

Status in neutron:
  Fix Released

Bug description:
  Since RBAC entries affect the value of the 'shared' field inside of a
  network, the creation of an RBAC entry should bump the revision of the
  network. Otherwise the value of the 'shared' field can flip back and
  forth without the revision_number incrementing.
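The fix can be modeled with a toy version of the objects involved (class and helper names are illustrative, not neutron's actual code):

```python
# Hedged sketch: treat RBAC changes as revision-bumping events on the
# parent network, because 'shared' is derived from RBAC entries and
# can change without any direct write to the network itself.

class Network:
    def __init__(self):
        self.revision_number = 0
        self.rbac_entries = []

    @property
    def shared(self):
        return any(e['action'] == 'access_as_shared'
                   for e in self.rbac_entries)

def add_rbac_entry(net, entry):
    net.rbac_entries.append(entry)
    net.revision_number += 1  # the fix: bump revision on RBAC change
```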

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1708079/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1696668] Re: ovo_rpc semantics "warnings" are too verbose

2017-08-03 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/482000
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=5b78a81e2d152f2854b22c25cd7cc47eaff2859b
Submitter: Jenkins
Branch:master

commit 5b78a81e2d152f2854b22c25cd7cc47eaff2859b
Author: YAMAMOTO Takashi 
Date:   Mon Jul 10 14:11:24 2017 +0900

ovo_rpc: Avoid flooding logs with semantic violation warning

Still log them, but not repeatedly.
Also, lower the level from error to warning.

The message has been there long enough (since [1]).
Relevant parties should have already known if their service
plugin needs to be refactored or not.  It's no longer
desirable to keep the warning that loud.

[1] I5efc625c5e8565693e795d70e0f85810552d38cd

Closes-Bug: #1696668
Change-Id: I4f1f399ac8e1c9acec9c7bce2f57c053c98e3031


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1696668

Title:
  ovo_rpc semantics "warnings" are too verbose

Status in neutron:
  Fix Released

Bug description:
  It logs a traceback at ERROR level.
  I know it's intentional, but given that it isn't a real issue for some
  configurations, it's too verbose and potentially hides other issues by
  spamming the logs too much.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1696668/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1707252] Re: Claims in the scheduler does not account for doubling allocations on resize to same host

2017-08-03 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/490085
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=79c376ec162a0049523f196b3fd10a1585cff1da
Submitter: Jenkins
Branch:master

commit 79c376ec162a0049523f196b3fd10a1585cff1da
Author: Matt Riedemann 
Date:   Wed Aug 2 12:48:01 2017 -0400

Sum allocations in the scheduler when resizing to the same host

When we resize to the same host, with or without a shared provider,
the _move_operation_alloc_request code is currently removing the
compute node resource provider uuid from the new_rp_uuids
variable since the uuid is the same in both the cur_rp_uuids and
new_rp_uuids as it's the same compute node provider. So later when
processing the new destination allocation requests, since new_rp_uuids
is empty, the new destination allocations aren't part of the request.

This change checks for this case and sums the resource allocation
amounts when we're resizing to the same compute node.

Change-Id: I1c9442eed850a3eb7ac9871fafcb0ae93ba8117c
Closes-Bug: #1707252


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1707252

Title:
  Claims in the scheduler does not account for doubling allocations on
  resize to same host

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  This code in the scheduler report client is used by the scheduler when
  making allocation requests against a certain instance during a move
  operation:

  
https://github.com/openstack/nova/blob/09f0795fe0f5d043593f5ae55a6ec5f6298ba5ba/nova/scheduler/client/report.py#L200

  The idea is to retain the source node allocations in Placement when
  also adding allocations for the destination node.

  However, with the set difference code in there, it does not account
  for resizing an instance to the same host, where the compute node for
  the source and destination are the same.

  We need to double the allocations for resize to same host for a case
  like resizing the instance and the VCPU/MEMORY_MB goes down but the
  DISK_GB goes up.
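  The summing behavior the fix describes can be sketched as follows. This is
  only a minimal illustration, not the actual nova.scheduler.client.report
  code; the function name and dictionary shapes are assumptions.

```python
def merge_move_allocations(current, new):
    """Merge source and destination allocations for a move operation.

    `current` and `new` map resource-provider UUIDs to dicts of
    resource class -> amount. When the same provider appears on both
    sides (resize to the same host), the amounts are summed so the
    claim covers the old and new flavors at once; otherwise both
    providers' allocations are kept side by side.
    """
    merged = {rp: dict(res) for rp, res in current.items()}
    for rp, resources in new.items():
        if rp in merged:
            # Same compute node on both sides: double up per class.
            for rclass, amount in resources.items():
                merged[rp][rclass] = merged[rp].get(rclass, 0) + amount
        else:
            merged[rp] = dict(resources)
    return merged
```

  For a resize to the same host where VCPU goes down but DISK_GB goes up,
  this yields one allocation per resource class covering both flavors.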

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1707252/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1707246] Re: Configuration guide references configuration options for policy instead of sample policy file

2017-08-03 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/488508
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=fffc84db79fabb2bb35367387dd4b19e9dafb6d1
Submitter: Jenkins
Branch:master

commit fffc84db79fabb2bb35367387dd4b19e9dafb6d1
Author: Doug Hellmann 
Date:   Fri Jul 28 12:06:12 2017 -0400

use the show-policy directive to show policy settings

Closes-Bug: 1707246

Depends-On: I774b2de5ff59874dfa67811c094735dd74c8083e
Depends-On: Ie836b7a6f3ea7cba1737913b944f36c77f14cfd0
Change-Id: I5ce0931d39b045681ba6d43d7894ae25e6b13146
Signed-off-by: Doug Hellmann 


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1707246

Title:
  Configuration guide references configuration options for policy
  instead of sample policy file

Status in OpenStack Identity (keystone):
  Fix Released
Status in oslo.policy:
  Fix Released

Bug description:
  The configuration guide document should contain all information for
  configuration options, as well as sample policy files. Keystone's
  configuration section uses the wrong directive, which results in the
  configuration options being rendered where the sample policy file
  should be:

  https://docs.openstack.org/keystone/latest/configuration/policy.html

  We should correct this so that the policy section of the configuration
  guide references policy and not configuration options.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1707246/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1708508] [NEW] os-server-groups policy rules are wrong

2017-08-03 Thread Matt Riedemann
Public bug reported:

Before policy was moved into code in Newton, the os-server-groups API
actions had only two policy rules:

"os_compute_api:os-server-groups": "rule:admin_or_owner",
"os_compute_api:os-server-groups:discoverable": "@",

With this change in Ocata:

https://review.openstack.org/#/c/391113/

The actual actions now have granular policy checks
(create/delete/index/show).

The problem is the effective policy check on those went from 
"os_compute_api:os-server-groups" which was rule:admin_or_owner to this:

"os_compute_api:os-server-groups:create": "rule:os_compute_api:os-server-groups"
"os_compute_api:os-server-groups:delete": "rule:os_compute_api:os-server-groups"
"os_compute_api:os-server-groups:index": "rule:os_compute_api:os-server-groups"
"os_compute_api:os-server-groups:show": "rule:os_compute_api:os-server-groups"

And "rule:os_compute_api:os-server-groups" is not a real rule, and is
backward incompatible. I don't really know what oslo.policy does if a
rule is used which is not defined.

I know the admin_or_owner rule is defined here:

#"admin_or_owner": "is_admin:True or project_id:%(project_id)s"

But there is no rule defined for "os_compute_api:os-server-groups".
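For illustration only, here is a toy resolver showing why a dangling
`rule:` alias is a problem. The deny-all fallback for an undefined rule is
an assumption of this sketch; oslo.policy's actual behavior may differ.

```python
def resolve_rule(policies, name, seen=None):
    """Follow a chain of 'rule:<other>' aliases in a dict of named
    policy checks. An alias that points at an undefined rule resolves
    to '!' (deny-all) in this toy model, which is what makes the
    dangling 'rule:os_compute_api:os-server-groups' reference
    backward incompatible with the old admin_or_owner default.
    """
    if seen is None:
        seen = set()
    if name in seen:
        raise ValueError("cyclic policy reference: %s" % name)
    seen.add(name)
    check = policies.get(name)
    if check is None:
        return "!"  # undefined rule: nothing to grant
    if check.startswith("rule:"):
        return resolve_rule(policies, check[len("rule:"):], seen)
    return check
```

With only the granular rules registered, every os-server-groups action
resolves to deny-all; registering the base rule restores the old behavior.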

** Affects: nova
 Importance: Undecided
 Status: Invalid

** Changed in: nova
   Status: New => Triaged

** Changed in: nova
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1708508

Title:
  os-server-groups policy rules are wrong

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Before policy was moved into code in Newton, the os-server-groups API
  actions had only two policy rules:

  "os_compute_api:os-server-groups": "rule:admin_or_owner",
  "os_compute_api:os-server-groups:discoverable": "@",

  With this change in Ocata:

  https://review.openstack.org/#/c/391113/

  The actual actions now have granular policy checks
  (create/delete/index/show).

  The problem is the effective policy check on those went from 
  "os_compute_api:os-server-groups" which was rule:admin_or_owner to this:

  "os_compute_api:os-server-groups:create": 
"rule:os_compute_api:os-server-groups"
  "os_compute_api:os-server-groups:delete": 
"rule:os_compute_api:os-server-groups"
  "os_compute_api:os-server-groups:index": 
"rule:os_compute_api:os-server-groups"
  "os_compute_api:os-server-groups:show": "rule:os_compute_api:os-server-groups"

  And "rule:os_compute_api:os-server-groups" is not a real rule, and is
  backward incompatible. I don't really know what oslo.policy does if a
  rule is used which is not defined.

  I know the admin_or_owner rule is defined here:

  #"admin_or_owner": "is_admin:True or project_id:%(project_id)s"

  But there is no rule defined for "os_compute_api:os-server-groups".

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1708508/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1707381] Re: Difference in behavior in the enable subnet service types scenario

2017-08-03 Thread Miguel Lavalle
This bug was discussed today during the L3 subteam meeting and the
agreement was that it is invalid. Please see the discussion log here:
http://eavesdrop.openstack.org/meetings/neutron_l3/2017/neutron_l3.2017-08-03-15.00.log.html#l-90

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1707381

Title:
  Difference in behavior in the enable subnet service types scenario

Status in neutron:
  Invalid

Bug description:
  Repro:
  1. Create a net and a subnet 'A-compute-nova' with service_types 
"compute:nova".
  2. Exec neutron port-create specifying only the network ID/name.
 This will return "No valid service subnet for the given device owner."
 We can only create a port with device_owner "compute:nova".
  3. Create another subnet 'B-None' without specifying service_types.
  Then create a port with device_owner=compute:nova and a fixed_ip from 
'B-None'; the port is created successfully.
  But why is the same error as in step 2 not raised? We expect a port's IPs to 
always be allocated from the subnet whose service_type is 'compute:nova'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1707381/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1708260] Re: Sending empty allocations list on a PUT /allocations/{consumer_uuid} results in 500

2017-08-03 Thread Matt Riedemann
This is a latent issue so I'm going to remove the pike-rc-potential tag.

** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Tags removed: pike-rc-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1708260

Title:
  Sending empty allocations list on a PUT /allocations/{consumer_uuid}
  results in 500

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) ocata series:
  Confirmed

Bug description:
  If you send an empty allocation list to the placement server:

  - name: put an allocation empty list
PUT: /allocations/599ffd2d-526a-4b2e-8683-f13ad25f9958
request_headers:
content-type: application/json
data:
allocations: []

  You'll get a 500 response because of an IndexError when consumer_id =
  allocs[0].consumer_id.

  Instead we should never reach this code. There should either be a
  schema violation, because we should have at least one allocation, or,
  if we're willing to accept an empty list and do nothing, we should skip
  the call to the database.
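  A minimal sketch of the guard suggested above. The helper name and the
  allocation shape are hypothetical; the real placement handler indexed
  allocs[0] unconditionally, which is what turned an empty list into a 500.

```python
def consumer_from_allocations(allocs):
    """Return the consumer id shared by a list of allocation records,
    or None for an empty list.

    Guarding here (or rejecting the empty list via a schema minItems
    check) lets the caller skip the database write instead of raising
    IndexError on allocs[0].
    """
    if not allocs:
        return None  # nothing to write; caller can short-circuit
    return allocs[0]["consumer_id"]
```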

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1708260/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1708478] [NEW] Unable to add security rules under manage rules, no response upon clicking "add"

2017-08-03 Thread Xin Zhe Khooi
Public bug reported:

I am using DevStack; I have tried unstacking and restacking it a few times
and can confirm the issue is reproducible.

Regardless of the security group (default or custom), I am only able to 
DELETE rules, not add them.
After filling in the details to be submitted, clicking the "add" button gives 
no response and no rules are added. Suspected front-end issue.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1708478

Title:
  Unable to add security rules under manage rules, no response upon
  clicking "add"

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I am using DevStack; I have tried unstacking and restacking it a few
  times and can confirm the issue is reproducible.

  Regardless of the security group (default or custom), I am only able to 
DELETE rules, not add them.
  After filling in the details to be submitted, clicking the "add" button 
gives no response and no rules are added. Suspected front-end issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1708478/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1708458] Re: Expose instance system_metadata in compute API

2017-08-03 Thread Sean Dague
Exposing system metadata in the compute API is something we really don't
want to do.

** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1708458

Title:
  Expose instance system_metadata in compute API

Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  Via the nova compute API I can see it is possible to GET metadata for
  an instance but it does not seem to be possible to get
  system_metadata. It is useful to be able to GET the system metadata
  for an instance if you want to query the point-in-time properties that
  were inherited from an image during the launch.

  I noticed in change
  https://review.openstack.org/#/c/7045/5/nova/db/api.py there is a db
  api definition to read this from the database but this does not seem
  to be exposed in the compute api.

  I assume something like the below in compute/api.py would be a
  starting point to expose this.

  def get_instance_system_metadata(self, context, instance):
  """Get all system metadata associated with an instance."""
  return self.db.instance_system_metadata_get(context, instance.uuid)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1708458/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1708424] Re: When a flavor has resource extra_specs disabling all standard fields, nova tries to make a request to the placement API with no resources

2017-08-03 Thread Sean Dague
So, this really seems like going out of the way to break the system. A
better error message would be fine if it was submitted, but this is very
unlikely to hit in the real world.

** Changed in: nova
   Status: New => Opinion

** Changed in: nova
   Importance: Undecided => Wishlist

** Tags added: placement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1708424

Title:
  When a flavor has resource extra_specs disabling all standard fields,
  nova tries to make a request to the placement API with no resources

Status in OpenStack Compute (nova):
  Opinion

Bug description:
  When a flavor is defined like:

  
  +----------------------------+--------------------------------------------------------------------+
  | Field                      | Value                                                              |
  +----------------------------+--------------------------------------------------------------------+
  | OS-FLV-DISABLED:disabled   | False                                                              |
  | OS-FLV-EXT-DATA:ephemeral  | 0                                                                  |
  | access_project_ids         | None                                                               |
  | disk                       | 0                                                                  |
  | id                         | 2f41a9c0-f194-4925-81d2-b62a3c07435a                               |
  | name                       | test2                                                              |
  | os-flavor-access:is_public | True                                                               |
  | properties                 | resources:DISK_GB='0', resources:MEMORY_MB='0', resources:VCPU='0' |
  | ram                        | 256                                                                |
  | rxtx_factor                | 1.0                                                                |
  | swap                       |                                                                    |
  | vcpus                      | 1                                                                  |
  +----------------------------+--------------------------------------------------------------------+

  Then it will disable all the standard resources, and pop them from the
  resources dictionary here:

  
https://github.com/openstack/nova/blob/57cd38d1fdc4d6b3439a8374e4713baf0b207184/nova/scheduler/utils.py#L141

  the resources dictionary is then empty, but nova still tries to use it
  to make a request to the placement API. This results in an error
  message like:

  ERROR nova.scheduler.client.report [None 
req-030ec9a7-4299-47e0-bd38-b9d40721c210 admin admin] Failed to retrieve 
allocation candidates from placement API for filters {}. Got 400: 
  Aug 03 12:40:11 qa-cld-m1-neo4 nova-scheduler[4057]:   400 Bad 
Request
  Aug 03 12:40:11 qa-cld-m1-neo4 nova-scheduler[4057]:  
  Aug 03 12:40:11 qa-cld-m1-neo4 nova-scheduler[4057]:  
  Aug 03 12:40:11 qa-cld-m1-neo4 nova-scheduler[4057]:   400 Bad 
Request
  Aug 03 12:40:11 qa-cld-m1-neo4 nova-scheduler[4057]:   The server could not 
comply with the request since it is either malformed or otherwise incorrect.
  Aug 03 12:40:11 qa-cld-m1-neo4 nova-scheduler[4057]: Badly formed resources 
parameter. Expected resources query string parameter in form: 
?resources=VCPU:2,MEMORY_MB:1024. Got: empty string.
  Aug 03 12:40:11 qa-cld-m1-neo4 nova-scheduler[4057]:  
  Aug 03 12:40:11 qa-cld-m1-neo4 nova-scheduler[4057]: .

  Nova should catch the case where the flavor overrides have removed all
  resources and provide a more useful error message to indicate what the
  operator has done wrong.
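  The override-and-guard behavior suggested above could look roughly like
  this. All names are hypothetical; nova's actual scheduler.utils code
  differs, and the ValueError stands in for whatever useful error the
  operator should see.

```python
def build_resources(flavor_specs, defaults):
    """Apply flavor extra_specs of the form 'resources:<CLASS>=<amount>'
    to the default resource request; a zero amount removes the class.
    Raises ValueError instead of sending an empty resources query to
    the placement API.
    """
    resources = dict(defaults)
    prefix = "resources:"
    for key, value in flavor_specs.items():
        if not key.startswith(prefix):
            continue
        rclass = key[len(prefix):]
        amount = int(value)
        if amount == 0:
            resources.pop(rclass, None)  # operator disabled this class
        else:
            resources[rclass] = amount
    if not resources:
        raise ValueError(
            "flavor extra_specs disabled every resource class; "
            "refusing to query placement with an empty request")
    return resources
```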

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1708424/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1708463] [NEW] neutron-db-manage upgrade heads returned 1

2017-08-03 Thread Matthias Runge
Public bug reported:

I'm doing a fresh deployment, and it currently fails with

2017-08-03 11:22:02 | 2017-08-03 11:22:02,236 INFO: ESC[mNotice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
packet.check_error()ESC[0m
2017-08-03 11:22:02 | 2017-08-03 11:22:02,236 INFO: ESC[mNotice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/pymysql/connections.py", line 393, in check
_errorESC[0m
2017-08-03 11:22:02 | 2017-08-03 11:22:02,236 INFO: ESC[mNotice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
err.raise_mysql_exception(self._data)ESC[0m
2017-08-03 11:22:02 | 2017-08-03 11:22:02,237 INFO: ESC[mNotice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/pymysql/err.py", line 107, in raise_mysql_e
xceptionESC[0m
2017-08-03 11:22:02 | 2017-08-03 11:22:02,237 INFO: ESC[mNotice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: raise 
errorclass(errno, errval)ESC[0m
2017-08-03 11:22:02 | 2017-08-03 11:22:02,237 INFO: ESC[mNotice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
oslo_db.exception.DBError: (pymysql.err.InternalError) (1050, u"Table 
'standardattributes' already exists") [SQL: u'\nCREATE TABLE standardattributes 
(\n\tid BIGINT NOT NULL AUTO_INCREMENT, \n\tresource_type VARCHAR(255) NOT 
NULL, \n\tPRIMARY KEY (id)\n)ENGINE=InnoDB\n\n']ESC[0m
2017-08-03 11:22:02 | 2017-08-03 11:22:02,237 INFO: ESC[1;31mError: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]: Failed to call refresh: 
neutron-db-manage  upgrade heads returned 1 instead of one of [0]ESC[0m

The deployment is done with TripleO, I'm using CentOS 7.3, neutron
packages are:

python-neutron-11.0.0-0.20170731021410.63f4d77.el7.centos.noarch
openstack-neutron-ml2-11.0.0-0.20170731021410.63f4d77.el7.centos.noarch
openstack-neutron-11.0.0-0.20170731021410.63f4d77.el7.centos.noarch
openstack-neutron-openvswitch-11.0.0-0.20170731021410.63f4d77.el7.centos.noarch
puppet-neutron-11.2.1-0.20170728192630.342cc06.el7.centos.noarch
python-neutron-lib-1.9.1-0.20170725142825.0ef54c3.el7.centos.noarch
python2-neutronclient-6.4.0-0.20170728181502.3c211f3.el7.centos.noarch
openstack-neutron-common-11.0.0-0.20170731021410.63f4d77.el7.centos.noarch

(the numbers are timestamps and git hashes from upstream)

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: db

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1708463

Title:
  neutron-db-manage  upgrade heads returned 1

Status in neutron:
  New

Bug description:
  I'm doing a fresh deployment, and it currently fails with

  2017-08-03 11:22:02 | 2017-08-03 11:22:02,236 INFO: ESC[mNotice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
packet.check_error()ESC[0m
  2017-08-03 11:22:02 | 2017-08-03 11:22:02,236 INFO: ESC[mNotice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/pymysql/connections.py", line 393, in check
  _errorESC[0m
  2017-08-03 11:22:02 | 2017-08-03 11:22:02,236 INFO: ESC[mNotice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
err.raise_mysql_exception(self._data)ESC[0m
  2017-08-03 11:22:02 | 2017-08-03 11:22:02,237 INFO: ESC[mNotice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/pymysql/err.py", line 107, in raise_mysql_e
  xceptionESC[0m
  2017-08-03 11:22:02 | 2017-08-03 11:22:02,237 INFO: ESC[mNotice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: raise 
errorclass(errno, errval)ESC[0m
  2017-08-03 11:22:02 | 2017-08-03 11:22:02,237 INFO: ESC[mNotice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
oslo_db.exception.DBError: (pymysql.err.InternalError) (1050, u"Table 
'standardattributes' already exists") [SQL: u'\nCREATE TABLE standardattributes 
(\n\tid BIGINT NOT NULL AUTO_INCREMENT, \n\tresource_type VARCHAR(255) NOT 
NULL, \n\tPRIMARY KEY (id)\n)ENGINE=InnoDB\n\n']ESC[0m
  2017-08-03 11:22:02 | 2017-08-03 11:22:02,237 INFO: ESC[1;31mError: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]: Failed to call refresh: 
neutron-db-manage  upgrade heads returned 1 instead of one of [0]ESC[0m

  The deployment is done with TripleO, I'm using CentOS 7.3, neutron
  packages are:

  python-neutron-11.0.0-0.20170731021410.63f4d77.el7.centos.noarch
  openstack-neutron-ml2-11.0.0-0.20170731021410.63f4d77.el7.centos.noarch
  openstack-neutron-11.0.0-0.20170731021410.63f4d77.el7.centos.noarch
  
openstack-neutron-openvswitch-11.0.0-0.20170731021410.63f4d77.el7.centos.noarch
  puppet-neutron-11.2.1-0.20170728192630.342cc06.el7.centos.noarch
  python-neutron-lib-1.9.1-0.20170725142825.0ef54c3.el7.centos.noarch
  python2-neutronclient-6.4.0-0.20170728181502.3c211f3.el7.centos.noarch
  

[Yahoo-eng-team] [Bug 1708465] [NEW] Neutron duplicated provider rule for ICMPv6 Router Advertisements

2017-08-03 Thread Maciej Jozefczyk
Public bug reported:

Change https://review.openstack.org/#/c/432506/ introduced a new way of
providing provider rules to the SG agent. ICMPv6 RA rule generation has been
moved to neutron/db/securitygroups_rpc_base.py, but it was not removed from
neutron/agent/linux/iptables_firewall.py.

As a result, each time an SG rule is updated, the neutron logs contain a
warning about rule duplication:

2017-08-03 10:41:12.873 28184 WARNING
neutron.agent.linux.iptables_manager [-] Duplicate iptables rule
detected. This may indicate a bug in the the iptables rule generation
code. Line: -A neutron-openvswi-PREROUTING -i gwbf6069f7-2cc -j CT


=== How to reproduce ===
1. Spawn devstack.
2. Boot VM
3. Add new rule to SG which this VM uses.
4. Observe neutron-openvswitch-agent logs.


=== Environment ===
Upstream master devstack.

** Affects: neutron
 Importance: Undecided
 Assignee: Maciej Jozefczyk (maciej.jozefczyk)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Maciej Jozefczyk (maciej.jozefczyk)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1708465

Title:
  Neutron duplicated provider rule for ICMPv6 Router Advertisements

Status in neutron:
  New

Bug description:
  Change https://review.openstack.org/#/c/432506/ introduced a new way of
  providing provider rules to the SG agent. ICMPv6 RA rule generation has
  been moved to neutron/db/securitygroups_rpc_base.py, but it was not
  removed from neutron/agent/linux/iptables_firewall.py.

  As a result, each time an SG rule is updated, the neutron logs contain
  a warning about rule duplication:

  2017-08-03 10:41:12.873 28184 WARNING
  neutron.agent.linux.iptables_manager [-] Duplicate iptables rule
  detected. This may indicate a bug in the the iptables rule generation
  code. Line: -A neutron-openvswi-PREROUTING -i gwbf6069f7-2cc -j CT

  
  === How to reproduce ===
  1. Spawn devstack.
  2. Boot VM
  3. Add new rule to SG which this VM uses.
  4. Observe neutron-openvswitch-agent logs.

  
  === Environment ===
  Upstream master devstack.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1708465/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1708458] [NEW] Expose instance system_metadata in compute API

2017-08-03 Thread sean redmond
Public bug reported:

Via the nova compute API I can see it is possible to GET metadata for an
instance but it does not seem to be possible to get system_metadata. It
is useful to be able to GET the system metadata for an instance if you
want to query the point-in-time properties that were inherited from an
image during the launch.

I noticed in change
https://review.openstack.org/#/c/7045/5/nova/db/api.py there is a db api
definition to read this from the database but this does not seem to be
exposed in the compute api.

I assume something like the below in compute/api.py would be a starting
point to expose this.

def get_instance_system_metadata(self, context, instance):
"""Get all system metadata associated with an instance."""
return self.db.instance_system_metadata_get(context, instance.uuid)

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1708458

Title:
  Expose instance system_metadata in compute API

Status in OpenStack Compute (nova):
  New

Bug description:
  Via the nova compute API I can see it is possible to GET metadata for
  an instance but it does not seem to be possible to get
  system_metadata. It is useful to be able to GET the system metadata
  for an instance if you want to query the point-in-time properties that
  were inherited from an image during the launch.

  I noticed in change
  https://review.openstack.org/#/c/7045/5/nova/db/api.py there is a db
  api definition to read this from the database but this does not seem
  to be exposed in the compute api.

  I assume something like the below in compute/api.py would be a
  starting point to expose this.

  def get_instance_system_metadata(self, context, instance):
  """Get all system metadata associated with an instance."""
  return self.db.instance_system_metadata_get(context, instance.uuid)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1708458/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1708460] [NEW] [RFE] Reaction to network congestion for qos

2017-08-03 Thread Fouad Benamrane
Public bug reported:

Problem description:

In a multi-tenant environment, some VMs could consume the available bandwidth 
and cause network congestion for those that share the same network resources. 
To improve quality of service, OpenStack implements a service to limit the 
bandwidth of VM ports and to classify traffic using the DSCP field. However, 
there is still no mechanism for policing specific VMs that consume a lot of 
resources and cause network congestion.


API effects:

By adding a new rule to the QoS extension [1] of neutron, we could detect and 
react dynamically to congestion. This rule would police VMs causing 
congestion by limiting the allowed bandwidth on their ports.


Proposal:
*
Our proposal is to use the Explicit Congestion Notification (ECN) [2] bits as 
the marking mechanism. The network near congestion will mark packets as 
Congestion Experienced (CE). Neutron will detect these packets and start 
monitoring the behavior of the VM causing the congestion. When the congestion 
exceeds a threshold, neutron will react by policing this VM using the 
bandwidth limitation rule of the QoS extension. Neutron will keep track of the 
congestion rate and will free the policed VMs when the congestion is over.
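The reaction loop proposed above can be modeled as a toy sketch. All names,
units, and thresholds here are illustrative; nothing in this snippet is an
existing neutron API.

```python
class CongestionPolicer:
    """Toy model of the proposed loop: count ECN Congestion-Experienced
    (CE) marks per port and interval, throttle a port once the CE rate
    crosses a threshold, and lift the limit when congestion subsides.
    """

    def __init__(self, threshold, limited_kbps):
        self.threshold = threshold        # CE marks per interval
        self.limited_kbps = limited_kbps  # policed bandwidth
        self.policed = {}                 # port_id -> applied limit

    def observe(self, port_id, ce_marks):
        """Feed one interval's CE-mark count for a port; return the
        bandwidth limit now in force for it (None = unlimited)."""
        if ce_marks > self.threshold:
            # Would translate to a QoS bandwidth-limit rule on the port.
            self.policed[port_id] = self.limited_kbps
        elif port_id in self.policed:
            # Congestion is over: free the policed port.
            del self.policed[port_id]
        return self.policed.get(port_id)
```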

References:
***
[1] QoS https://docs.openstack.org/ocata/networking-guide/config-qos.html
[2] ECN in IP https://tools.ietf.org/html/rfc3168

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: qos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1708460

Title:
   [RFE] Reaction to network congestion for qos

Status in neutron:
  New

Bug description:
  Problem description:
  
  In a multi-tenant environment, some VMs could consume the available 
bandwidth and cause network congestion for those that share the same network 
resources. To improve quality of service, OpenStack implements a service to 
limit the bandwidth of VM ports and to classify traffic using the DSCP field. 
However, there is still no mechanism for policing specific VMs that consume a 
lot of resources and cause network congestion.

  
  API effects:
  
  By adding a new rule to the QoS extension [1] of neutron, we could detect 
and react dynamically to congestion. This rule would police VMs causing 
congestion by limiting the allowed bandwidth on their ports.

  
  Proposal:
  *
  Our proposal is to use the Explicit Congestion Notification (ECN) [2] bits 
as the marking mechanism. The network near congestion will mark packets as 
Congestion Experienced (CE). Neutron will detect these packets and start 
monitoring the behavior of the VM causing the congestion. When the congestion 
exceeds a threshold, neutron will react by policing this VM using the 
bandwidth limitation rule of the QoS extension. Neutron will keep track of the 
congestion rate and will free the policed VMs when the congestion is over.

  References:
  ***
  [1] QoS https://docs.openstack.org/ocata/networking-guide/config-qos.html
  [2] ECN in IP https://tools.ietf.org/html/rfc3168

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1708460/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1700583] Re: No volume Block Device Mapping in assisted snapshot with Quobyte

2017-08-03 Thread Sylvain Bauza
Putting in Opinion as I'm not sure it's a Nova problem.
Silvan, if you need some help debugging, just ask in our IRC channel.


** Changed in: nova
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1700583

Title:
  No volume Block Device Mapping in assisted snapshot with Quobyte

Status in devstack:
  New
Status in OpenStack Compute (nova):
  Opinion

Bug description:
  *Update*: Please see comment #5 for details on how DevStack is involved in 
this issue.
  Also more current CI runs with this issue can be found at:
  - http://78.46.57.153:8081/refs-changes-95-490195-1/
  - http://78.46.57.153:8081/refs-changes-21-490021-3/
  */Update*

  
  Since roughly last friday Quobyte CIs with a Cinder / Nova setup fail with an 
error during assisted snapshots in four tests:
  
test_create_ebs_image_and_check_boot|test_snapshot_create_delete_with_volume_in_use|test_snapshot_create_offline_delete_online|test_volume_boot_pattern

  The error declares that the volume to be snapshotted has no block
  device mapping.

  I currently cannot identify if this is a Nova or Cinder issue, so
  please keep an open mind when looking into this.

  Complete example CI test runs with all logs can be found at e.g.:

  http://78.46.57.153:8081/refs-changes-26-475226-3/
  http://78.46.57.153:8081/refs-changes-02-476402-2/

  n-api log excerpt:
  [...]
  2017-06-23 14:24:18.137 10730 DEBUG nova.api.openstack.wsgi 
[req-484e4bd2-e091-4411-9252-38549f7c9dbd admin admin] Action: 'create', 
calling method: >, body: {"snapshot": {"create_info": {"snapshot_id": 
"30bd2ad6-37a0-4027-a24e-00657adb0cca", "type": "qcow2", "new_file": 
"volume-43245219-bd54-4b03-a371-cca5836e58e5.30bd2ad6-37a0-4027-a24e-00657adb0cca"},
 "volume_id": "43245219-bd54-4b03-a371-cca5836e58e5"}} _process_stack 
/opt/stack/nova/nova/api/openstack/wsgi.py:609
  2017-06-23 14:24:18.438 10730 INFO nova.api.openstack.wsgi 
[req-484e4bd2-e091-4411-9252-38549f7c9dbd admin admin] HTTP exception thrown: 
No volume Block Device Mapping with id 43245219-bd54-4b03-a371-cca5836e58e5.: 
HTTPBadRequest: No volume Block Device Mapping with id 
43245219-bd54-4b03-a371-cca5836e58e5.
  2017-06-23 14:24:18.440 10730 DEBUG nova.api.openstack.wsgi 
[req-484e4bd2-e091-4411-9252-38549f7c9dbd admin admin] Returning 400 to user: 
No volume Block Device Mapping with id 43245219-bd54-4b03-a371-cca5836e58e5. 
__call__ /opt/stack/nova/nova/api/openstack/wsgi.py:1029
  2017-06-23 14:24:18.441 10730 INFO nova.osapi_compute.wsgi.server 
[req-484e4bd2-e091-4411-9252-38549f7c9dbd admin admin] 127.0.0.1 "POST 
/v2.1/os-assisted-volume-snapshots HTTP/1.1" status: 400 len: 543 time: 
0.6530399
  2017-06-23 14:24:18.770 10730 INFO nova.osapi_compute.wsgi.server 
[req-a0c161b6-12ba-4dbf-aa10-5fa34565441c 
tempest-TestVolumeBootPattern-1485741991 
tempest-TestVolumeBootPattern-1485741991] 127.0.0.1 "GET 
/v2.1/servers/0fcae274-2d09-45e7-ba84-df2b53526895 HTTP/1.1" status: 200 len: 
1762 time: 0.8687029
  [...]

  Reading the test logs I see for instance that this error is thrown right 
after the volume attachment
  to the VM is successful. I'd expect the snapshot API call to be able to find 
the mapping in that case. Example snippet:

   [... attaching volume to VM]

  2017-06-26 13:13:43,182 28633 INFO [tempest.lib.common.rest_client] 
Request (VolumesSnapshotTestJSON:test_snapshot_create_offline_delete_online): 
200 GET 
http://127.0.0.1:8776/v2/182a9178e3334660ad978457a2aecc9d/volumes/df77bf3f-8602-4f28-a999-f4c73a3e9097
 0.223s
  2017-06-26 13:13:43,182 28633 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json', 'X-Auth-Token': ''}
  Body: None
  Response - Headers: {u'content-type': 'application/json', 
'content-location': 
'http://127.0.0.1:8776/v2/182a9178e3334660ad978457a2aecc9d/volumes/df77bf3f-8602-4f28-a999-f4c73a3e9097',
 u'x-compute-request-id': 'req-fc80f9a0-dd4c-4e9d-b044-39007bbcdbae', 
u'connection': 'close', 'status': '200', u'content-length': '904', 
u'x-openstack-request-id': 'req-fc80f9a0-dd4c-4e9d-b044-39007bbcdbae', u'date': 
'Mon, 26 Jun 2017
   13:13:43 GMT'}
  Body: {"volume": {"status": "attaching", "user_id": 
"8fa70618b61e4290908a964b6c8d11d9", "attachments": [], "links": [{"href": 
"http://127.0.0.1:8776/v2/182a9178e3334660ad978457a2aecc9d/volumes/df77bf3f-8602-4f28-a999-f4c73a3e9097;,
 "rel": "self"}, {"href": 
"http://127.0.0.1:8776/182a9178e3334660ad978457a2aecc9d/volumes/df77bf3f-8602-4f28-a999-f4c73a3e9097;,
 "rel": "bookmark"}], "availability_zone": "nova", "bootable": "false", 
"encrypted": false, "created_at": "2017-06-26T13:13:29.00", "description": 
null, "os-vol-tenant-attr:tenant_id": "182a9178e3334660ad978457a2aecc9d", 
"updated_at": "2017-06-26T13:13:41.00", 

[Yahoo-eng-team] [Bug 1708444] [NEW] Angular role table stays stale after editing a role

2017-08-03 Thread Bence Romsics
Public bug reported:

In the angularized role panel, if I edit a role (e.g. change its name) the
actual update happens in Keystone, but the role table is not refreshed
and shows the old state until I reload the page.

devstack b79531a
horizon 53dd2db

ANGULAR_FEATURES={
'roles_panel': True,
...
}

A proposed fix is on the way.

** Affects: horizon
 Importance: Undecided
 Assignee: Bence Romsics (bence-romsics)
 Status: In Progress


** Tags: angularjs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1708444

Title:
  Angular role table stays stale after editing a role

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  In the angularized role panel, if I edit a role (e.g. change its name)
  the actual update happens in Keystone, but the role table is not
  refreshed and shows the old state until I reload the page.

  devstack b79531a
  horizon 53dd2db

  ANGULAR_FEATURES={
  'roles_panel': True,
  ...
  }

  A proposed fix is on the way.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1708444/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1708433] [NEW] Attaching sriov nic VM fail with keyError pci_slot

2017-08-03 Thread Helena
Public bug reported:

Traceback:

2017-08-03 12:03:50.064 DEBUG nova.network.os_vif_util 
[req-f1414ec4-6df7-46d8-9c97-f678c0f94d77 demo admin] No conversion for VIF 
type hw_veb yet from (pid=134902) nova_to_osvif_vif 
/opt/stack/nova/nova/network/os_vif_util.py:435
2017-08-03 12:03:50.119 ERROR oslo_messaging.rpc.server 
[req-f1414ec4-6df7-46d8-9c97-f678c0f94d77 demo admin] Exception during message 
handling: KeyError: 'pci_slot'
2017-08-03 12:03:50.119 TRACE oslo_messaging.rpc.server Traceback (most recent 
call last):
2017-08-03 12:03:50.119 TRACE oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 
160, in _process_incoming
2017-08-03 12:03:50.119 TRACE oslo_messaging.rpc.server res = 
self.dispatcher.dispatch(message)
2017-08-03 12:03:50.119 TRACE oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
213, in dispatch
2017-08-03 12:03:50.119 TRACE oslo_messaging.rpc.server return 
self._do_dispatch(endpoint, method, ctxt, args)
2017-08-03 12:03:50.119 TRACE oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 
183, in _do_dispatch
2017-08-03 12:03:50.119 TRACE oslo_messaging.rpc.server result = func(ctxt, 
**new_args)
2017-08-03 12:03:50.119 TRACE oslo_messaging.rpc.server   File 
"/opt/stack/nova/nova/exception_wrapper.py", line 76, in wrapped
2017-08-03 12:03:50.119 TRACE oslo_messaging.rpc.server function_name, 
call_dict, binary)
2017-08-03 12:03:50.119 TRACE oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
2017-08-03 12:03:50.119 TRACE oslo_messaging.rpc.server self.force_reraise()
2017-08-03 12:03:50.119 TRACE oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2017-08-03 12:03:50.119 TRACE oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb)
2017-08-03 12:03:50.119 TRACE oslo_messaging.rpc.server   File 
"/opt/stack/nova/nova/exception_wrapper.py", line 67, in wrapped
2017-08-03 12:03:50.119 TRACE oslo_messaging.rpc.server return f(self, 
context, *args, **kw)
2017-08-03 12:03:50.119 TRACE oslo_messaging.rpc.server   File 
"/opt/stack/nova/nova/compute/manager.py", line 211, in decorated_function
2017-08-03 12:03:50.119 TRACE oslo_messaging.rpc.server kwargs['instance'], 
e, sys.exc_info())
2017-08-03 12:03:50.119 TRACE oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
2017-08-03 12:03:50.119 TRACE oslo_messaging.rpc.server self.force_reraise()
2017-08-03 12:03:50.119 TRACE oslo_messaging.rpc.server   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2017-08-03 12:03:50.119 TRACE oslo_messaging.rpc.server 
six.reraise(self.type_, self.value, self.tb)
2017-08-03 12:03:50.119 TRACE oslo_messaging.rpc.server   File 
"/opt/stack/nova/nova/compute/manager.py", line 199, in decorated_function
2017-08-03 12:03:50.119 TRACE oslo_messaging.rpc.server return 
function(self, context, *args, **kwargs)
2017-08-03 12:03:50.119 TRACE oslo_messaging.rpc.server   File 
"/opt/stack/nova/nova/compute/manager.py", line 5166, in attach_interface
2017-08-03 12:03:50.119 TRACE oslo_messaging.rpc.server network_info[0])
2017-08-03 12:03:50.119 TRACE oslo_messaging.rpc.server   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 1443, in attach_interface
2017-08-03 12:03:50.119 TRACE oslo_messaging.rpc.server 
self.vif_driver.plug(instance, vif)
2017-08-03 12:03:50.119 TRACE oslo_messaging.rpc.server   File 
"/opt/stack/nova/nova/virt/libvirt/vif.py", line 794, in plug
2017-08-03 12:03:50.119 TRACE oslo_messaging.rpc.server func(instance, vif)
2017-08-03 12:03:50.119 TRACE oslo_messaging.rpc.server   File 
"/opt/stack/nova/nova/virt/libvirt/vif.py", line 650, in plug_hw_veb
2017-08-03 12:03:50.119 TRACE oslo_messaging.rpc.server 
vif['profile']['pci_slot'],
2017-08-03 12:03:50.119 TRACE oslo_messaging.rpc.server KeyError: 'pci_slot'
2017-08-03 12:03:50.119 TRACE oslo_messaging.rpc.server

Steps to recreate:
- Create a VM on the compute node of a multi-node deployment.
- Attach a direct/macvtap-bound SR-IOV port:
   openstack server add port VM1 port1

Results:
- The above traceback is found in the n-cpu service on the compute node.
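
For reference, the failing lookup in the traceback boils down to the pattern
below. This is a hedged, self-contained sketch: the plain dict stands in for
nova's VIF model object, and get_pci_slot is an illustrative helper, not a
nova function.

```python
# Minimal reproduction of the failing lookup in plug_hw_veb (sketch only).
# The real vif object comes from nova.network.model; a dict stands in here.

def get_pci_slot(vif):
    """Fetch pci_slot from a VIF binding profile, failing loudly when absent."""
    profile = vif.get("profile") or {}
    if "pci_slot" not in profile:
        # This is the condition that surfaces as KeyError: 'pci_slot' above:
        # the port was bound without a PCI slot in its binding profile.
        raise KeyError("pci_slot")
    return profile["pci_slot"]
```

A fix would either ensure the binding profile is populated before the libvirt
driver plugs the VIF, or fail the attach earlier with a clearer message.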

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1708433

Title:
  Attaching sriov nic VM fail with keyError pci_slot

Status in OpenStack Compute (nova):
  New

Bug description:
  Traceback:

  2017-08-03 12:03:50.064 DEBUG nova.network.os_vif_util 

[Yahoo-eng-team] [Bug 1708084] Re: ClientException: resources.sdc_vm: Unexpected API Error class 'oslo_db.exception.DBError

2017-08-03 Thread Gaurav Mittal
** Also affects: ubuntu
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1708084

Title:
  ClientException: resources.sdc_vm: Unexpected API Error class
  'oslo_db.exception.DBError

Status in OpenStack Compute (nova):
  Incomplete
Status in Ubuntu:
  New

Bug description:
  ClientException: resources.sdc_vm: Unexpected API Error. Please report
  this at http://bugs.launchpad.net/nova/ and attach the Nova API log if
  possible.  (HTTP 500) (Request-ID:
  req-e4510367-3b71-4612-b0a6-fa77a41bb00a)

  This is an Ocata OpenStack installed in a VM with CentOS 7 as the
  operating system.

  [root@localhost onap(keystone_admin)]# vgs
VG #PV #LV #SN Attr   VSize   VFree
cinder-volumes   1   1   0 wz--n- 412.00g 312.00g
cl   1   2   0 wz--n- 497.77g  0
  [root@localhost onap(keystone_admin)]#

  [root@localhost onap(keystone_admin)]# uname -a
  Linux openstack 3.10.0-514.26.2.el7.x86_64 #1 SMP Tue Jul 4 15:04:05 UTC 2017 
x86_64 x86_64 x86_64 GNU/Linux
  [root@localhost onap(keystone_admin)]#

  
  [root@localhost onap(keystone_admin)]# cat /etc/redhat-release
  CentOS Linux release 7.3.1611 (Core)
  [root@localhost onap(keystone_admin)]#

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1708084/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1700583] Re: No volume Block Device Mapping in assisted snapshot with Quobyte

2017-08-03 Thread Silvan Kaiser
Ok, some more insights:
- I can work around this issue by pinning the CI's DevStack version to commit 
0d9c896. Using the following commit f3d5331 shows the issues described above. 
So this seems to be a DevStack-related issue.
- So far I could not determine why this is caused, but the affecting change 
relates to Matt's email note on openstack-dev: 
http://lists.openstack.org/pipermail/openstack-dev/2017-July/120120.html

So, maybe this is only a DevStack config issue with my setup or this
might be a code issue in Nova. Help for debugging this is appreciated.

** Also affects: devstack
   Importance: Undecided
   Status: New

** Description changed:

+ *Update*: Please see comment #5 for details on how DevStack is involved in 
this issue.
+ Also more current CI runs with this issue can be found at:
+ - http://78.46.57.153:8081/refs-changes-95-490195-1/
+ - http://78.46.57.153:8081/refs-changes-21-490021-3/
+ */Update*
+ 
+ 
  Since roughly last friday Quobyte CIs with a Cinder / Nova setup fail with an 
error during assisted snapshots in four tests:
  
test_create_ebs_image_and_check_boot|test_snapshot_create_delete_with_volume_in_use|test_snapshot_create_offline_delete_online|test_volume_boot_pattern
  
  The error declares that the volume to be snapshotted has no block device
  mapping.
  
  I currently cannot identify if this is a Nova or Cinder issue, so please
  keep an open mind when looking into this.
  
  Complete example CI test runs with all logs can be found at e.g.:
  
- 
  http://78.46.57.153:8081/refs-changes-26-475226-3/
  http://78.46.57.153:8081/refs-changes-02-476402-2/
- 
  
  n-api log excerpt:
  [...]
  2017-06-23 14:24:18.137 10730 DEBUG nova.api.openstack.wsgi 
[req-484e4bd2-e091-4411-9252-38549f7c9dbd admin admin] Action: 'create', 
calling method: >, body: {"snapshot": {"create_info": {"snapshot_id": 
"30bd2ad6-37a0-4027-a24e-00657adb0cca", "type": "qcow2", "new_file": 
"volume-43245219-bd54-4b03-a371-cca5836e58e5.30bd2ad6-37a0-4027-a24e-00657adb0cca"},
 "volume_id": "43245219-bd54-4b03-a371-cca5836e58e5"}} _process_stack 
/opt/stack/nova/nova/api/openstack/wsgi.py:609
  2017-06-23 14:24:18.438 10730 INFO nova.api.openstack.wsgi 
[req-484e4bd2-e091-4411-9252-38549f7c9dbd admin admin] HTTP exception thrown: 
No volume Block Device Mapping with id 43245219-bd54-4b03-a371-cca5836e58e5.: 
HTTPBadRequest: No volume Block Device Mapping with id 
43245219-bd54-4b03-a371-cca5836e58e5.
  2017-06-23 14:24:18.440 10730 DEBUG nova.api.openstack.wsgi 
[req-484e4bd2-e091-4411-9252-38549f7c9dbd admin admin] Returning 400 to user: 
No volume Block Device Mapping with id 43245219-bd54-4b03-a371-cca5836e58e5. 
__call__ /opt/stack/nova/nova/api/openstack/wsgi.py:1029
  2017-06-23 14:24:18.441 10730 INFO nova.osapi_compute.wsgi.server 
[req-484e4bd2-e091-4411-9252-38549f7c9dbd admin admin] 127.0.0.1 "POST 
/v2.1/os-assisted-volume-snapshots HTTP/1.1" status: 400 len: 543 time: 
0.6530399
  2017-06-23 14:24:18.770 10730 INFO nova.osapi_compute.wsgi.server 
[req-a0c161b6-12ba-4dbf-aa10-5fa34565441c 
tempest-TestVolumeBootPattern-1485741991 
tempest-TestVolumeBootPattern-1485741991] 127.0.0.1 "GET 
/v2.1/servers/0fcae274-2d09-45e7-ba84-df2b53526895 HTTP/1.1" status: 200 len: 
1762 time: 0.8687029
  [...]
  
- 
- Reading the test logs I see for instance that this error is thrown right 
after the volume attachment 
+ Reading the test logs I see for instance that this error is thrown right 
after the volume attachment
  to the VM is successful. I'd expect the snapshot API call to be able to find 
the mapping in that case. Example snippet:
  
+  [... attaching volume to VM]
  
-  [... attaching volume to VM]
- 
- 2017-06-26 13:13:43,182 28633 INFO [tempest.lib.common.rest_client] 
Request (VolumesSnapshotTestJSON:test_snapshot_create_offline_delete_online): 
200 GET 
http://127.0.0.1:8776/v2/182a9178e3334660ad978457a2aecc9d/volumes/df77bf3f-8602-4f28-a999-f4c73a3e9097
 0.223s
- 2017-06-26 13:13:43,182 28633 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json', 'X-Auth-Token': ''}
- Body: None
- Response - Headers: {u'content-type': 'application/json', 
'content-location': 
'http://127.0.0.1:8776/v2/182a9178e3334660ad978457a2aecc9d/volumes/df77bf3f-8602-4f28-a999-f4c73a3e9097',
 u'x-compute-request-id': 'req-fc80f9a0-dd4c-4e9d-b044-39007bbcdbae', 
u'connection': 'close', 'status': '200', u'content-length': '904', 
u'x-openstack-request-id': 'req-fc80f9a0-dd4c-4e9d-b044-39007bbcdbae', u'date': 
'Mon, 26 Jun 2017
-  13:13:43 GMT'}
- Body: {"volume": {"status": "attaching", "user_id": 
"8fa70618b61e4290908a964b6c8d11d9", "attachments": [], "links": [{"href": 
"http://127.0.0.1:8776/v2/182a9178e3334660ad978457a2aecc9d/volumes/df77bf3f-8602-4f28-a999-f4c73a3e9097;,
 "rel": "self"}, {"href": 
"http://127.0.0.1:8776/182a9178e3334660ad978457a2aecc9d/volumes/df77bf3f-8602-4f28-a999-f4c73a3e9097;,
 

[Yahoo-eng-team] [Bug 1708424] [NEW] When a flavor has resource extra_specs disabling all standard fields, nova tries to make a request to the placements API with no resources

2017-08-03 Thread Sam Betts
Public bug reported:

When a flavor is defined like:

+++
| Field  | Value
  |
+++
| OS-FLV-DISABLED:disabled   | False
  |
| OS-FLV-EXT-DATA:ephemeral  | 0
  |
| access_project_ids | None 
  |
| disk   | 0
  |
| id | 2f41a9c0-f194-4925-81d2-b62a3c07435a 
  |
| name   | test2
  |
| os-flavor-access:is_public | True 
  |
| properties | resources:DISK_GB='0', resources:MEMORY_MB='0', 
resources:VCPU='0' |
| ram| 256  
  |
| rxtx_factor| 1.0  
  |
| swap   |  
  |
| vcpus  | 1
  |
+++

Then it will disable all the standard resources, and pop them from the
resources dictionary here:

https://github.com/openstack/nova/blob/57cd38d1fdc4d6b3439a8374e4713baf0b207184/nova/scheduler/utils.py#L141

The resources dictionary is then empty, but nova still tries to use it to
make a request to the placement API. This results in an error message
like:

ERROR nova.scheduler.client.report [None 
req-030ec9a7-4299-47e0-bd38-b9d40721c210 admin admin] Failed to retrieve 
allocation candidates from placement API for filters {}. Got 400:
Aug 03 12:40:11 qa-cld-m1-neo4 nova-scheduler[4057]: 400 Bad Request
Aug 03 12:40:11 qa-cld-m1-neo4 nova-scheduler[4057]: The server could not 
comply with the request since it is either malformed or otherwise incorrect.
Aug 03 12:40:11 qa-cld-m1-neo4 nova-scheduler[4057]: Badly formed resources 
parameter. Expected resources query string parameter in form: 
?resources=VCPU:2,MEMORY_MB:1024. Got: empty string.

Nova should catch the case when the flavor has removed all standard
resources and provide a more useful error message to indicate what the
operator has done wrong.
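
The flavor-to-resources translation and the missing guard can be sketched as
below. This is illustrative only: resources_from_flavor is a hypothetical
helper, not the actual scheduler/utils.py code, though the zero-override
behavior mirrors what this report describes.

```python
# Hedged sketch of the flavor-to-resources translation discussed above.
# The extra_specs keys mirror the flavor shown; everything else is illustrative.

def resources_from_flavor(flavor, extra_specs):
    """Build a placement resources dict, letting extra_specs override fields."""
    resources = {
        "VCPU": flavor["vcpus"],
        "MEMORY_MB": flavor["ram"],
        "DISK_GB": flavor["disk"],
    }
    for key, value in extra_specs.items():
        if key.startswith("resources:"):
            rclass = key.split(":", 1)[1]
            amount = int(value)
            if amount == 0:
                resources.pop(rclass, None)  # a zero override drops the class
            else:
                resources[rclass] = amount
    if not resources:
        # Without a guard like this the scheduler sends ?resources= (empty)
        # to placement and gets back the 400 shown in the log above.
        raise ValueError("flavor disables every resource class; nothing to "
                         "request from placement")
    return resources
```

Raising early with an operator-facing message is one option; another is to
skip the placement call entirely when the resources dict is empty.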

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1708424

Title:
  When a flavor has resource extra_specs disabling all standard fields,
  nova tries to make a request to the placements API with no resources

Status in OpenStack Compute (nova):
  New

Bug description:
  When a flavor is defined like:

  
+++
  | Field  | Value  
|
  
+++
  | OS-FLV-DISABLED:disabled   | False  
|
  | OS-FLV-EXT-DATA:ephemeral  | 0  
|
  | access_project_ids | None   
|
  | disk   | 0  
|
  | id | 2f41a9c0-f194-4925-81d2-b62a3c07435a   
|
  | name   | test2  
|
  | os-flavor-access:is_public | True   
|
  | properties | resources:DISK_GB='0', 
resources:MEMORY_MB='0', resources:VCPU='0' |
  | ram| 256
|
  | rxtx_factor| 1.0
|
  | swap   |

[Yahoo-eng-team] [Bug 1708394] Re: Some URL in neutron's documentation cannot found(404).

2017-08-03 Thread Alexandra Settle
Moving to neutron :)

** Project changed: openstack-manuals => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1708394

Title:
  Some URL in neutron's documentation cannot found(404).

Status in neutron:
  New

Bug description:
  The documentation of neutron (https://docs.openstack.org/neutron/latest/) 
contains links to pages that do not exist.
  https://docs.openstack.org/api/openstack-network/2.0/content/
  https://docs.openstack.org/trunk/openstack-network/admin/content/index.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1708394/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1708394] [NEW] Some URL in neutron's documentation cannot found(404).

2017-08-03 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

The documentation of neutron (https://docs.openstack.org/neutron/latest/) 
contains links to pages that do not exist.
https://docs.openstack.org/api/openstack-network/2.0/content/
https://docs.openstack.org/trunk/openstack-network/admin/content/index.html

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
Some URL in neutron's documentation cannot found(404).
https://bugs.launchpad.net/bugs/1708394
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1708389] [NEW] wsgi and rpc-server has issues with dhcp when creating subnet

2017-08-03 Thread Ebbex
Public bug reported:

I'm not sure how to trace this down further. With neutron-api running
as a wsgi script in a container and neutron-rpc-server running in a
separate container, creating a subnet in the openstack-cli, e.g.
"subnet create --network network1 --subnet-range 10.0.0.0/24 subnet1",
fills the rpc-server logs and dhcp-agent logs with these errors:

ValueError: Unrecognized attribute(s) 'binding:host_id'

I'm attaching some logs with a stack-trace.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: neutron-dhcp-agent neutron-rpc-server

** Attachment added: "stack-traces from neutron-dhcp-agent and 
neutron-rpc-server"
   
https://bugs.launchpad.net/bugs/1708389/+attachment/4926416/+files/dhcp-error.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1708389

Title:
  wsgi and rpc-server has issues with dhcp when creating subnet

Status in neutron:
  New

Bug description:
  I'm not sure how to trace this down further. With neutron-api running
  as a wsgi script in a container and neutron-rpc-server running in a
  separate container, creating a subnet in the openstack-cli, e.g.
  "subnet create --network network1 --subnet-range 10.0.0.0/24 subnet1",
  fills the rpc-server logs and dhcp-agent logs with these errors:

  ValueError: Unrecognized attribute(s) 'binding:host_id'

  I'm attaching some logs with a stack-trace.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1708389/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1700583] Re: No volume Block Device Mapping in assisted snapshot with Quobyte

2017-08-03 Thread Silvan Kaiser
This issue is back hitting our Cinder and Nova CIs. I'm digging for more
information and will post updates here.

** Changed in: nova
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1700583

Title:
  No volume Block Device Mapping in assisted snapshot with Quobyte

Status in OpenStack Compute (nova):
  New

Bug description:
  Since roughly last friday Quobyte CIs with a Cinder / Nova setup fail with an 
error during assisted snapshots in four tests:
  
test_create_ebs_image_and_check_boot|test_snapshot_create_delete_with_volume_in_use|test_snapshot_create_offline_delete_online|test_volume_boot_pattern

  The error declares that the volume to be snapshotted has no block
  device mapping.

  I currently cannot identify if this is a Nova or Cinder issue, so
  please keep an open mind when looking into this.

  Complete example CI test runs with all logs can be found at e.g.:

  
  http://78.46.57.153:8081/refs-changes-26-475226-3/
  http://78.46.57.153:8081/refs-changes-02-476402-2/

  
  n-api log excerpt:
  [...]
  2017-06-23 14:24:18.137 10730 DEBUG nova.api.openstack.wsgi 
[req-484e4bd2-e091-4411-9252-38549f7c9dbd admin admin] Action: 'create', 
calling method: >, body: {"snapshot": {"create_info": {"snapshot_id": 
"30bd2ad6-37a0-4027-a24e-00657adb0cca", "type": "qcow2", "new_file": 
"volume-43245219-bd54-4b03-a371-cca5836e58e5.30bd2ad6-37a0-4027-a24e-00657adb0cca"},
 "volume_id": "43245219-bd54-4b03-a371-cca5836e58e5"}} _process_stack 
/opt/stack/nova/nova/api/openstack/wsgi.py:609
  2017-06-23 14:24:18.438 10730 INFO nova.api.openstack.wsgi 
[req-484e4bd2-e091-4411-9252-38549f7c9dbd admin admin] HTTP exception thrown: 
No volume Block Device Mapping with id 43245219-bd54-4b03-a371-cca5836e58e5.: 
HTTPBadRequest: No volume Block Device Mapping with id 
43245219-bd54-4b03-a371-cca5836e58e5.
  2017-06-23 14:24:18.440 10730 DEBUG nova.api.openstack.wsgi 
[req-484e4bd2-e091-4411-9252-38549f7c9dbd admin admin] Returning 400 to user: 
No volume Block Device Mapping with id 43245219-bd54-4b03-a371-cca5836e58e5. 
__call__ /opt/stack/nova/nova/api/openstack/wsgi.py:1029
  2017-06-23 14:24:18.441 10730 INFO nova.osapi_compute.wsgi.server 
[req-484e4bd2-e091-4411-9252-38549f7c9dbd admin admin] 127.0.0.1 "POST 
/v2.1/os-assisted-volume-snapshots HTTP/1.1" status: 400 len: 543 time: 
0.6530399
  2017-06-23 14:24:18.770 10730 INFO nova.osapi_compute.wsgi.server 
[req-a0c161b6-12ba-4dbf-aa10-5fa34565441c 
tempest-TestVolumeBootPattern-1485741991 
tempest-TestVolumeBootPattern-1485741991] 127.0.0.1 "GET 
/v2.1/servers/0fcae274-2d09-45e7-ba84-df2b53526895 HTTP/1.1" status: 200 len: 
1762 time: 0.8687029
  [...]

  
  Reading the test logs I see for instance that this error is thrown right 
after the volume attachment 
  to the VM is successful. I'd expect the snapshot API call to be able to find 
the mapping in that case. Example snippet:

  
   [... attaching volume to VM]

  2017-06-26 13:13:43,182 28633 INFO [tempest.lib.common.rest_client] 
Request (VolumesSnapshotTestJSON:test_snapshot_create_offline_delete_online): 
200 GET 
http://127.0.0.1:8776/v2/182a9178e3334660ad978457a2aecc9d/volumes/df77bf3f-8602-4f28-a999-f4c73a3e9097
 0.223s
  2017-06-26 13:13:43,182 28633 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json', 'X-Auth-Token': ''}
  Body: None
  Response - Headers: {u'content-type': 'application/json', 
'content-location': 
'http://127.0.0.1:8776/v2/182a9178e3334660ad978457a2aecc9d/volumes/df77bf3f-8602-4f28-a999-f4c73a3e9097',
 u'x-compute-request-id': 'req-fc80f9a0-dd4c-4e9d-b044-39007bbcdbae', 
u'connection': 'close', 'status': '200', u'content-length': '904', 
u'x-openstack-request-id': 'req-fc80f9a0-dd4c-4e9d-b044-39007bbcdbae', u'date': 
'Mon, 26 Jun 2017
   13:13:43 GMT'}
  Body: {"volume": {"status": "attaching", "user_id": 
"8fa70618b61e4290908a964b6c8d11d9", "attachments": [], "links": [{"href": 
"http://127.0.0.1:8776/v2/182a9178e3334660ad978457a2aecc9d/volumes/df77bf3f-8602-4f28-a999-f4c73a3e9097;,
 "rel": "self"}, {"href": 
"http://127.0.0.1:8776/182a9178e3334660ad978457a2aecc9d/volumes/df77bf3f-8602-4f28-a999-f4c73a3e9097;,
 "rel": "bookmark"}], "availability_zone": "nova", "bootable": "false", 
"encrypted": false, "created_at": "2017-06-26T13:13:29.00", "description": 
null, "os-vol-tenant-attr:tenant_id": "182a9178e3334660ad978457a2aecc9d", 
"updated_at": "2017-06-26T13:13:41.00", "volume_type": "Quobyte", "name": 
"tempest-VolumesSnapshotTestJSON-Volume-1014042071", "replication_status": 
null, "consistencygroup_id": null, "source_volid": null, "snapshot_id": null, 
"multiattach": false, "metadata": {}, "id": 
"df77bf3f-8602-4f28-a999-f4c73a3e9097", "size": 1}}
  2017-06-26 13:13:44,333 28633 INFO 

[Yahoo-eng-team] [Bug 1708358] [NEW] ovsfw ignores icmp_{type, code}

2017-08-03 Thread IWAMOTO Toshihiro
Public bug reported:

The SG code uses port_range_{min,max} fields for ICMP type/code.
iptables_firewall handles that correctly, but ovsfw lacks that knowledge, 
resulting in those fields being ignored by the ovsfw driver.
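
The mapping the iptables driver performs (and ovsfw misses) can be sketched
like this. The match-dict keys are illustrative only, not the actual OVS flow
match syntax; the port_range_{min,max} field reuse is as described above.

```python
# Sketch of the SG-rule field reuse described above: for ICMP rules neutron
# stores the ICMP type in port_range_min and the code in port_range_max.

def icmp_match_from_rule(rule):
    """Translate an ICMP security-group rule into type/code match fields."""
    match = {}
    icmp_type = rule.get("port_range_min")
    icmp_code = rule.get("port_range_max")
    if icmp_type is not None:
        match["icmp_type"] = icmp_type
        # A code only makes sense once a type is matched, mirroring how the
        # iptables driver treats these fields.
        if icmp_code is not None:
            match["icmp_code"] = icmp_code
    return match
```

The ovsfw driver currently drops these fields on the floor, so an ICMP rule
for, say, echo-request (type 8) matches all ICMP traffic instead.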

** Affects: neutron
 Importance: Undecided
 Assignee: IWAMOTO Toshihiro (iwamoto)
 Status: In Progress


** Tags: ovs-fw sg-fw

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1708358

Title:
  ovsfw ignores icmp_{type,code}

Status in neutron:
  In Progress

Bug description:
  The SG code uses port_range_{min,max} fields for ICMP type/code.
  iptables_firewall handles that correctly, but ovsfw lacks that knowledge, 
resulting in those fields being ignored by the ovsfw driver.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1708358/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1708081] Re: Router view issues an error if extension l3_agent_scheduler is not supported

2017-08-03 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/489510
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=245da1d812d447c1da98b57e366a95bfcc0b73de
Submitter: Jenkins
Branch:master

commit 245da1d812d447c1da98b57e366a95bfcc0b73de
Author: Adit Sarfaty 
Date:   Tue Aug 1 10:26:50 2017 +0300

Do not call list_l3_agent_hosting_router if not supported

For the admin router view, list_l3_agent_hosting_router is called.
If this extension is not supported, it returns "resource not found"
and an error popup.
This api should not be called in case the extension is not supported.

Closes-bug: #1708081
Change-Id: I38f7386689d95dde3fc76bbf4d6b5e781b515878


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1708081

Title:
  Router view issues an error if extension l3_agent_scheduler is not
  supported

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Viewing a router from the admin->routers->router view may issue a
  warning popup in case the l3_agent_scheduler extension is not supported.

  list_l3_agent_hosting_router should be called only if supported.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1708081/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp