[Yahoo-eng-team] [Bug 1756270] Re: quota on shared provider networks

2018-05-17 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1756270

Title:
  quota on shared provider networks

Status in neutron:
  Expired

Bug description:
  hi,

  Today neutron supports quotas on floating IPs, but in private clouds the
  use cases for shared provider networks keep growing. In this area neutron
  offers no quota mechanism as far as I can tell, which is quite painful
  because we cannot prevent a single project from consuming the whole
  provider network. Ideally we would like a much more granular quota system
  that can enforce limits per network, not just a single overall total.
  Private clouds tend to have many external/shared networks in different
  security zones across the enterprise. Concretely (see the stopgap sketch
  below):

  - Floating IP quota per external network.
  - IP/port quota per shared provider network.
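
  As a stopgap while no per-network quota exists, operators can at least
  measure a project's consumption on a given shared network by hand. A
  minimal sketch with openstacksdk (the network and project IDs are
  placeholders; credentials are assumed to come from clouds.yaml or OS_*
  environment variables):

    import openstack

    NETWORK_ID = 'SHARED_PROVIDER_NET_UUID'   # placeholder
    PROJECT_ID = 'PROJECT_UUID'               # placeholder

    conn = openstack.connect()
    # the ports() proxy call supports server-side filtering by network and project
    ports = list(conn.network.ports(network_id=NETWORK_ID,
                                    project_id=PROJECT_ID))
    print('%s consumes %d ports/IPs on %s'
          % (PROJECT_ID, len(ports), NETWORK_ID))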

  br,

  bjolo

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1756270/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1756028] Re: ipsec-site-connection-create always remains in PENDING_CREATE

2018-05-17 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1756028

Title:
  ipsec-site-connection-create always remains in PENDING_CREATE

Status in neutron:
  Expired

Bug description:
  When I try to create an IPsec site connection using
  ipsec-site-connection-create, the status always remains PENDING_CREATE.
  This makes it impossible to ping between the two VMs on either side of the
  connection.

  Steps to reproduce:
  $ neutron ipsec-site-connection-create --vpnservice-id a7e325dd-9d34-4720-afa5-a38e1d552157 --ikepolicy-id cfb8a527-e646-439b-856f-73042b505c95 --ipsecpolicy-id a8513ab5-96ab-45c0-ad9c-434c29510053 --peer-id 10.0.0.0 --peer-address 10.0.0.0 --psk secret --peer-cidr 10.0.0.0/24
  neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
  Created a new ipsec_site_connection:
  +-------------------+-----------------------------------------------------+
  | Field             | Value                                               |
  +-------------------+-----------------------------------------------------+
  | admin_state_up    | True                                                |
  | auth_mode         | psk                                                 |
  | description       |                                                     |
  | dpd               | {"action": "hold", "interval": 30, "timeout": 120}  |
  | id                | 5b92ab15-bc3e-47f2-9865-18de4adaac3f                |
  | ikepolicy_id      | cfb8a527-e646-439b-856f-73042b505c95                |
  | initiator         | bi-directional                                      |
  | ipsecpolicy_id    | a8513ab5-96ab-45c0-ad9c-434c29510053                |
  | local_ep_group_id |                                                     |
  | local_id          |                                                     |
  | mtu               | 1500                                                |
  | name              |                                                     |
  | peer_address      | 10.0.0.0                                            |
  | peer_cidrs        | 10.0.0.0/24                                         |
  | peer_ep_group_id  |                                                     |
  | peer_id           | 10.0.0.0                                            |
  | project_id        | ec82dc95ea564ec39852365ecfca3a09                    |
  | psk               | secret                                              |
  | route_mode        | static                                              |
  | status            | PENDING_CREATE                                      |
  | tenant_id         | ec82dc95ea564ec39852365ecfca3a09                    |
  | vpnservice_id     | a7e325dd-9d34-4720-afa5-a38e1d552157                |
  +-------------------+-----------------------------------------------------+
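
  A quick way to confirm whether the connection ever leaves PENDING_CREATE is
  to poll its status via the API. A minimal sketch with python-neutronclient
  (an assumption; any client works), taking credentials from clouds.yaml or
  OS_* environment variables via openstacksdk:

    import time

    import openstack
    from neutronclient.v2_0 import client as neutron_client

    CONN_ID = '5b92ab15-bc3e-47f2-9865-18de4adaac3f'  # id from the table above

    sess = openstack.connect().session
    neutron = neutron_client.Client(session=sess)
    for _ in range(30):
        conn = neutron.show_ipsec_site_connection(CONN_ID)['ipsec_site_connection']
        print(conn['status'])
        if conn['status'] != 'PENDING_CREATE':
            break
        time.sleep(10)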

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1756028/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1771860] [NEW] instance.availability_zone reports wrong az after live migration

2018-05-17 Thread Matt Riedemann
Public bug reported:

This is a follow up to bug 1768876 and was discovered via functional
testing for that bug here:

https://review.openstack.org/#/c/567701/

The scenario is:

1. create two compute hosts in separate AZs (zone1 and zone2)
2. create an instance without specifying an AZ so it can move freely between 
hosts in different zones
3. live migrate the instance - since there is only one other host, the instance 
has to move to the other zone
4. assert the instance.availability_zone is updated to reflect the other zone

#4 fails because the live migration conductor task doesn't update the
instance.az once a destination host is picked by the scheduler. Note
that the other move operations like unshelve, evacuate and cold migrate
do update the instance.az once a host is picked.

** Affects: nova
 Importance: Medium
 Assignee: Matt Riedemann (mriedem)
 Status: Triaged

** Affects: nova/pike
 Importance: Undecided
 Status: New

** Affects: nova/queens
 Importance: Undecided
 Status: New


** Tags: availability-zones live-migration

** Changed in: nova
 Assignee: (unassigned) => Matt Riedemann (mriedem)

** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Also affects: nova/queens
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1771860

Title:
  instance.availability_zone reports wrong az after live migration

Status in OpenStack Compute (nova):
  Triaged
Status in OpenStack Compute (nova) pike series:
  New
Status in OpenStack Compute (nova) queens series:
  New

Bug description:
  This is a follow up to bug 1768876 and was discovered via functional
  testing for that bug here:

  https://review.openstack.org/#/c/567701/

  The scenario is:

  1. create two compute hosts in separate AZs (zone1 and zone2)
  2. create an instance without specifying an AZ so it can move freely between 
hosts in different zones
  3. live migrate the instance - since there is only one other host, the 
instance has to move to the other zone
  4. assert the instance.availability_zone is updated to reflect the other zone

  #4 fails because the live migration conductor task doesn't update the
  instance.az once a destination host is picked by the scheduler. Note
  that the other move operations like unshelve, evacuate and cold
  migrate do update the instance.az once a host is picked.
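
  For illustration, a minimal sketch (not the actual nova patch) of the kind
  of refresh the other move operations perform once the scheduler has chosen
  a destination host, using nova's availability_zones helper:

    from nova import availability_zones

    def _update_instance_az(context, instance, dest_host):
        # look up the AZ of the chosen host's aggregate and refresh the
        # cached value on the instance, as unshelve/evacuate/cold migrate do
        instance.availability_zone = (
            availability_zones.get_host_availability_zone(context, dest_host))
        instance.save()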

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1771860/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1771851] [NEW] Image panel doesn't check 'compute:create' policy

2018-05-17 Thread Andrew Bogott
Public bug reported:

The Horizon image panel provides a 'Launch' button to create a server
from a given image.

The Django code for this button has the correct policy checks; the Angular
code has none.  That means the 'Launch' button is displayed even if the
user is not permitted to launch instances, resulting in a frustrating
failure much later in the process.

The button should not display if the user is not permitted to create
VMs.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1771851

Title:
  Image panel doesn't check 'compute:create' policy

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The Horizon image panel provides a 'Launch' button to create a server
  from a given image.

  The Django code for this button has the correct policy checks; the Angular
  code has none.  That means the 'Launch' button is displayed even if the
  user is not permitted to launch instances, resulting in a frustrating
  failure much later in the process.

  The button should not display if the user is not permitted to create
  VMs.
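
  For reference, a sketch of the kind of policy gate the Django-side launch
  action uses (class and URL names here are illustrative, not a quote of the
  Horizon source); the Angular action would need an equivalent check, for
  example via Horizon's Angular policy service, before rendering the button:

    from django.utils.translation import ugettext_lazy as _

    from horizon import tables

    class LaunchImage(tables.LinkAction):
        name = "launch_image"
        verbose_name = _("Launch")
        url = "horizon:project:instances:launch"
        # the action is hidden when the user fails this policy check
        policy_rules = (("compute", "os_compute_api:servers:create"),)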

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1771851/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1771700] Re: nova-lvm tempest job failing with InvalidDiskInfo

2018-05-17 Thread Matt Riedemann
We'll have to backport the fix for this to ocata as well:

https://review.openstack.org/#/q/I464bc2b88123a012cd12213beac4b572c3c20a56

** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Changed in: nova/ocata
   Status: New => Confirmed

** Changed in: nova/pike
   Status: New => Confirmed

** Changed in: nova/queens
   Status: New => Confirmed

** Changed in: nova/pike
   Importance: Undecided => High

** Changed in: nova/ocata
   Importance: Undecided => High

** Changed in: nova/queens
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1771700

Title:
  nova-lvm tempest job failing with InvalidDiskInfo

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) ocata series:
  Confirmed
Status in OpenStack Compute (nova) pike series:
  Confirmed
Status in OpenStack Compute (nova) queens series:
  Confirmed

Bug description:
  There has been a recent regression in the nova-lvm tempest job. The
  most recent passing run was on 2018-05-11 [1][2], so something
  regressed it between then and yesterday 2018-05-15.

  The build fails and the following trace is seen in the n-cpu log:

  May 15 23:01:40.174233 ubuntu-xenial-rax-dfw-0004040560 nova-compute[28718]: ERROR nova.compute.manager Traceback (most recent call last):
  May 15 23:01:40.174457 ubuntu-xenial-rax-dfw-0004040560 nova-compute[28718]: ERROR nova.compute.manager   File "/opt/stack/new/nova/nova/compute/manager.py", line 7343, in update_available_resource_for_node
  May 15 23:01:40.174699 ubuntu-xenial-rax-dfw-0004040560 nova-compute[28718]: ERROR nova.compute.manager rt.update_available_resource(context, nodename)
  May 15 23:01:40.174922 ubuntu-xenial-rax-dfw-0004040560 nova-compute[28718]: ERROR nova.compute.manager   File "/opt/stack/new/nova/nova/compute/resource_tracker.py", line 664, in update_available_resource
  May 15 23:01:40.175170 ubuntu-xenial-rax-dfw-0004040560 nova-compute[28718]: ERROR nova.compute.manager resources = self.driver.get_available_resource(nodename)
  May 15 23:01:40.175414 ubuntu-xenial-rax-dfw-0004040560 nova-compute[28718]: ERROR nova.compute.manager   File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 6391, in get_available_resource
  May 15 23:01:40.175641 ubuntu-xenial-rax-dfw-0004040560 nova-compute[28718]: ERROR nova.compute.manager disk_over_committed = self._get_disk_over_committed_size_total()
  May 15 23:01:40.175868 ubuntu-xenial-rax-dfw-0004040560 nova-compute[28718]: ERROR nova.compute.manager   File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 7935, in _get_disk_over_committed_size_total
  May 15 23:01:40.176091 ubuntu-xenial-rax-dfw-0004040560 nova-compute[28718]: ERROR nova.compute.manager config, block_device_info)
  May 15 23:01:40.176333 ubuntu-xenial-rax-dfw-0004040560 nova-compute[28718]: ERROR nova.compute.manager   File "/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 7852, in _get_instance_disk_info_from_config
  May 15 23:01:40.176555 ubuntu-xenial-rax-dfw-0004040560 nova-compute[28718]: ERROR nova.compute.manager virt_size = disk_api.get_disk_size(path)
  May 15 23:01:40.176773 ubuntu-xenial-rax-dfw-0004040560 nova-compute[28718]: ERROR nova.compute.manager   File "/opt/stack/new/nova/nova/virt/disk/api.py", line 99, in get_disk_size
  May 15 23:01:40.176994 ubuntu-xenial-rax-dfw-0004040560 nova-compute[28718]: ERROR nova.compute.manager return images.qemu_img_info(path).virtual_size
  May 15 23:01:40.177215 ubuntu-xenial-rax-dfw-0004040560 nova-compute[28718]: ERROR nova.compute.manager   File "/opt/stack/new/nova/nova/virt/images.py", line 87, in qemu_img_info
  May 15 23:01:40.177452 ubuntu-xenial-rax-dfw-0004040560 nova-compute[28718]: ERROR nova.compute.manager raise exception.InvalidDiskInfo(reason=msg)
  May 15 23:01:40.177674 ubuntu-xenial-rax-dfw-0004040560 nova-compute[28718]: ERROR nova.compute.manager InvalidDiskInfo: Disk info file is invalid: qemu-img failed to execute on /dev/stack-volumes-default/8a1d5912-13e1-4583-876e-a04396b6b712_disk : Unexpected error while running command.
  May 15 23:01:40.177902 ubuntu-xenial-rax-dfw-0004040560 nova-compute[28718]: ERROR nova.compute.manager Command: /usr/bin/python -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /dev/stack-volumes-default/8a1d5912-13e1-4583-876e-a04396b6b712_disk --force-share
  May 15 23:01:40.178118 ubuntu-xenial-rax-dfw-0004040560 nova-compute[28718]: ERROR nova.compute.manager Exit code: 1
  May 15 23:01:40.178344 ubuntu-xenial-rax-dfw-0004040560 nova-compute[28718]: ERROR nova.compute.manager Stdout: u''
  May 15 23:0
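
  The failing call in the trace runs qemu-img info under oslo.concurrency's
  prlimit wrapper. A minimal sketch to reproduce it by hand on the affected
  node (the path and limits are copied from the log above; oslo.concurrency
  is assumed to be installed):

    from oslo_concurrency import processutils

    path = '/dev/stack-volumes-default/8a1d5912-13e1-4583-876e-a04396b6b712_disk'
    # same limits the log shows: 1 GiB address space, 30s of CPU time
    limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)
    out, err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info', path, '--force-share',
        prlimit=limits)
    print(out)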

[Yahoo-eng-team] [Bug 1771841] [NEW] packet loss during backup L3 HA agent restart

2018-05-17 Thread Slawek Kaplonski
Public bug reported:

When the backup L3 agent in an HA deployment is restarted, there is a 20-30 second
data plane connectivity break to the FloatingIP.
It looks like the L3 agent first enables IPv6 forwarding during startup:
https://github.com/openstack/neutron/blob/master/neutron/agent/l3/namespaces.py#L106
and only then disables it for the non-master node:
https://github.com/openstack/neutron/blob/master/neutron/agent/l3/ha_router.py#L419
That causes an MLDv2 packet to be sent:

08:38:29.815180 Out fa:16:3e:9e:de:0e ethertype IPv6 (0x86dd), length
112: :: > ff02::16: HBH ICMP6, multicast listener report v2, 2 group
record(s), length 48

and that somehow breaks connectivity, because packets are then sent to the
backup node instead of the master one.

** Affects: neutron
 Importance: Undecided
 Assignee: Slawek Kaplonski (slaweq)
 Status: Confirmed


** Tags: l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1771841

Title:
  packet loss during backup L3 HA agent restart

Status in neutron:
  Confirmed

Bug description:
  When the backup L3 agent in an HA deployment is restarted, there is a 20-30 second
  data plane connectivity break to the FloatingIP.
  It looks like the L3 agent first enables IPv6 forwarding during startup:
  https://github.com/openstack/neutron/blob/master/neutron/agent/l3/namespaces.py#L106
  and only then disables it for the non-master node:
  https://github.com/openstack/neutron/blob/master/neutron/agent/l3/ha_router.py#L419
  That causes an MLDv2 packet to be sent:

  08:38:29.815180 Out fa:16:3e:9e:de:0e ethertype IPv6 (0x86dd), length
  112: :: > ff02::16: HBH ICMP6, multicast listener report v2, 2 group
  record(s), length 48

  and that somehow breaks connectivity, because packets are then sent to the
  backup node instead of the master one.
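
  A quick diagnostic sketch to check whether a backup HA router namespace is
  in the transient state described above (IPv6 forwarding still enabled);
  the qrouter-<router-id> namespace name follows the L3 agent's convention,
  and the router UUID is a placeholder:

    import subprocess

    ROUTER_ID = 'ROUTER_UUID'  # placeholder
    ns = 'qrouter-%s' % ROUTER_ID
    out = subprocess.check_output(
        ['ip', 'netns', 'exec', ns,
         'sysctl', '-n', 'net.ipv6.conf.all.forwarding'])
    print('IPv6 forwarding enabled:', out.strip() == b'1')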

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1771841/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1771810] [NEW] Quota calculation connects to all available cells

2018-05-17 Thread Belmiro Moreira
Public bug reported:

Quota utilisation calculation connects to all cell DBs to get the resources
consumed by a project.
With several cells this can be inefficient, and it can fail if one of the
cell DBs is not available.

To calculate the quota utilisation of a project it should be enough to query
only the cells where the project has (or had) instances. This information is
available in the nova_api DB.

** Affects: nova
 Importance: Undecided
 Assignee: Surya Seetharaman (tssurya)
 Status: New


** Tags: cells quotas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1771810

Title:
  Quota calculation connects to all available cells

Status in OpenStack Compute (nova):
  New

Bug description:
  Quota utilisation calculation connects to all cell DBs to get the resources
  consumed by a project.
  With several cells this can be inefficient, and it can fail if one of the
  cell DBs is not available.

  To calculate the quota utilisation of a project it should be enough to
  query only the cells where the project has (or had) instances. This
  information is available in the nova_api DB.
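
  For illustration, a sketch of the lookup the report suggests: find only the
  cells where a project has (or had) instances via the instance_mappings
  table in the nova_api database (the connection URL and project UUID are
  placeholders, and SQLAlchemy is assumed):

    from sqlalchemy import create_engine, text

    NOVA_API_DB = 'mysql+pymysql://nova:secret@controller/nova_api'  # placeholder
    PROJECT_ID = 'PROJECT_UUID'                                      # placeholder

    engine = create_engine(NOVA_API_DB)
    with engine.connect() as conn:
        rows = conn.execute(
            text('SELECT DISTINCT cell_id FROM instance_mappings '
                 'WHERE project_id = :project'),
            {'project': PROJECT_ID})
        print([row.cell_id for row in rows])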

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1771810/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1771382] Re: ds-identify: fails to recognize NoCloud datasource on boot cause it does not have /sbin in $PATH and thus does not find blkid

2018-05-17 Thread Bug Watch Updater
** Changed in: cloud-init (openSUSE)
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1771382

Title:
  ds-identify: fails to recognize NoCloud datasource on boot cause it
  does not have /sbin in $PATH and thus does not find blkid

Status in cloud-init:
  New
Status in cloud-init package in openSUSE:
  Fix Released

Bug description:
  cloud-init 18.2 from
  http://download.opensuse.org/repositories/Cloud:/Tools/SLE_12_SP3/ on
  SLES 12 SP 3 with NoCloud data source via Cloud Init drive made by
  Proxmox.

  On SLES 12 SP3 the NoCloud data source was not working, despite

  slestemplate:~ # blkid -c /dev/null -o export
  […]
  DEVNAME=/dev/sr0
  UUID=2018-05-15-16-34-27-00
  LABEL=cidata
  TYPE=iso9660
  […]

  with the necessary files on it. blkid exits with return code 0.

  Why?

  I only kept parts of the output:

  slestemplate:/etc/cloud # cat /run/cloud-init/ds-identify.log 
  [up 8.63s] ds-identify 
  policy loaded: mode=search report=false found=all maybe=all notfound=disabled
  no datasource_list found, using default: MAAS ConfigDrive NoCloud AltCloud 
Azure Bigstep CloudSigma CloudStack DigitalOcean AliYun Ec2 GCE OpenNebula 
OpenStack OVF SmartOS Scaleway Hetzner IBMCloud
  ERROR: failed running [127]: blkid -c /dev/null -o export
  […]
  FS_LABELS=unavailable:error
  ISO9660_DEVS=unavailable:error

  It might have been that I had not yet added the CloudInit drive in
  Proxmox at that point.

  A subsequent call to

  slestemplate:~ # /usr/lib/cloud-init/ds-identify

  still did not yield a different result.

  Only by analysing the source did I find that it caches results and that I
  can use the `--force` option to override this. I did this and the NoCloud
  datasource was detected properly. Apparently that result is now cached.

  The tool only reports the caching as a DEBUG message. However, I set
  logging to INFO for all parts of cloud-init because the FileHandler
  clutters the log with tons of messages about how many bytes it read from
  each file. Sure, I could use INFO only for the FileHandler.

  Several issues reduce the ease of administration here:

  1. Don't cache errors. Really… just… don't.

  2. Don't cache errors almost *silently* (just as a debug message).

  3. Decide wisely what is a debug message and what is not.

  4. A search for `ds-identify` in the documentation available at
  https://cloudinit.readthedocs.io/en/latest/ did not yield any result.

  5. And in general: Keep it short and simple.

  IMHO the first is the most important: Don't cache errors. If the
  resource now is there, recognize it, without further discussion.
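
  A small diagnostic sketch for the PATH side of this report: check whether
  blkid is reachable from a PATH without /sbin, and grep the cached
  ds-identify log for the error (Python 3 and the log location shown above
  are assumed):

    import os
    import shutil

    print('blkid on the full PATH:', shutil.which('blkid'))
    print('blkid without /sbin:   ',
          shutil.which('blkid', path='/usr/bin:/bin'))

    log = '/run/cloud-init/ds-identify.log'
    if os.path.exists(log):
        with open(log) as f:
            for line in f:
                if 'ERROR' in line or 'blkid' in line:
                    print(line.rstrip())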

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1771382/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1764125] Re: Re-attaching an encrypted(Barbican) Cinder (RBD) volume to an instance fails

2018-05-17 Thread Lee Yarwood
Thanks to Tzach I was able to get access to an env downstream and
confirm what's going on here.

c-vol appears to be creating a fresh secret for the new volume that
isn't capable of unlocking it. IMHO, when creating a volume from an
image that already has a secret associated with it, c-vol should just
copy the associated secret UUID.

Additionally, the create flow here is really weird: I can see that we
download the image twice and try to import it into rbd twice. The first
import appears to be a fresh LUKS-encrypted image, the second a raw-to-raw
conversion that does nothing to the original LUKS encryption of the
image.

Anyway, I'm removing nova from this bug and adding cinder. More detailed
notes can be found below.

[ notes ]

I can see multiple keys used by n-cpu :

2018-05-17 11:47:47.382 1 DEBUG barbicanclient.v1.secrets [req-
6c45d622-ecf1-4cbb-a038-b8eaaf776818 ea26e0f59cf44f909a0dbe86f1f21078
3d16a4daf99042d5adbc4f0d55dbf322 - default default] Getting secret -
Secret href: http://172.17.1.12:9311/v1/secrets/a3c400ce-
8b94-4ee5-90e9-564bab6c823b get /usr/lib/python2.7/site-
packages/barbicanclient/v1/secrets.py:457

2018-05-17 11:52:26.413 1 DEBUG barbicanclient.v1.secrets [req-dfe882de-
0b11-4a70-b527-78b47a7faf2e ea26e0f59cf44f909a0dbe86f1f21078
3d16a4daf99042d5adbc4f0d55dbf322 - default default] Getting secret -
Secret href: http://172.17.1.12:9311/v1/secrets/3b88eedc-813e-
4e01-bec7-d8d2b7d2ef42 get /usr/lib/python2.7/site-
packages/barbicanclient/v1/secrets.py:457

Fetching these we can see that they are not the same :

$ curl -vv -H "X-Auth-Token: $TOKEN" -H 'Accept: application/octet-stream' -o 
a3c400ce-8b94-4ee5-90e9-564bab6c823b 
http://10.0.0.106:9311/v1/secrets/a3c400ce-8b94-4ee5-90e9-564bab6c823b
* About to connect() to 10.0.0.106 port 9311 (#0)
*   Trying 10.0.0.106...
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
  0 00 00 0  0  0 --:--:-- --:--:-- --:--:-- 0* 
Connected to 10.0.0.106 (10.0.0.106) port 9311 (#0)
> GET /v1/secrets/a3c400ce-8b94-4ee5-90e9-564bab6c823b HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 10.0.0.106:9311
> X-Auth-Token: 
> gABa_W8gLDSMIfr7hzDC385Qpjewpy2awYIrqyO0O8U5VceB4YX_xyDlH7zBBPyR68L5krAEvCzkJq-b335TbGGeqQ_EDFNa9pclZo7Qm3m0_E8ofv0W9Ny8XWwhKERNK-3BxuUUMf1N7CgexHnkIgFye23EpzZF8lcxAKWmNCIiY_p2h9g
> Accept: application/octet-stream
> 
< HTTP/1.1 200 OK
< Date: Thu, 17 May 2018 12:12:16 GMT
< Server: Apache
< x-openstack-request-id: req-e32e0e58-8234-4fd3-90d8-50f9f72d617c
< Content-Length: 32
< Content-Type: application/octet-stream
< 
{ [data not shown]
10032  100320 0115  0 --:--:-- --:--:-- --:--:--   115
* Connection #0 to host 10.0.0.106 left intact

$ curl -vv -H "X-Auth-Token: $TOKEN" -H 'Accept: application/octet-stream' -o 
3b88eedc-813e-4e01-bec7-d8d2b7d2ef42 
http://10.0.0.106:9311/v1/secrets/3b88eedc-813e-4e01-bec7-d8d2b7d2ef42  
 
* About to connect() to 10.0.0.106 port 9311 (#0)
*   Trying 10.0.0.106...
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
  0 00 00 0  0  0 --:--:-- --:--:-- --:--:-- 0* 
Connected to 10.0.0.106 (10.0.0.106) port 9311 (#0)
> GET /v1/secrets/3b88eedc-813e-4e01-bec7-d8d2b7d2ef42 HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 10.0.0.106:9311
> X-Auth-Token: 
> gABa_W8gLDSMIfr7hzDC385Qpjewpy2awYIrqyO0O8U5VceB4YX_xyDlH7zBBPyR68L5krAEvCzkJq-b335TbGGeqQ_EDFNa9pclZo7Qm3m0_E8ofv0W9Ny8XWwhKERNK-3BxuUUMf1N7CgexHnkIgFye23EpzZF8lcxAKWmNCIiY_p2h9g
> Accept: application/octet-stream
> 
< HTTP/1.1 200 OK
< Date: Thu, 17 May 2018 12:12:33 GMT
< Server: Apache
< x-openstack-request-id: req-cd6964b5-eaac-4d97-b3a5-ae5dc2d2e474
< Content-Length: 32
< Content-Type: application/octet-stream
< 
{ [data not shown]
10032  100320 0112  0 --:--:-- --:--:-- --:--:--   112
* Connection #0 to host 10.0.0.106 left intact

Decoding these the way n-cpu does when setting the passphrase (urgh!):

$ python
[..]
>>> for key in ['3b88eedc-813e-4e01-bec7-d8d2b7d2ef42',
...             'a3c400ce-8b94-4ee5-90e9-564bab6c823b']:
...     with open(key) as f:
...         binascii.hexlify(f.read()).decode('utf-8')
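
A less error-prone way to compare the two secrets is to fetch them with
python-barbicanclient and compare the payloads directly. A sketch (the
secret hrefs are the ones n-cpu logged above; credentials are assumed to
come from clouds.yaml / OS_* environment variables via openstacksdk):

    import binascii

    import openstack
    from barbicanclient import client as barbican_client

    SECRET_REFS = [
        'http://172.17.1.12:9311/v1/secrets/a3c400ce-8b94-4ee5-90e9-564bab6c823b',
        'http://172.17.1.12:9311/v1/secrets/3b88eedc-813e-4e01-bec7-d8d2b7d2ef42',
    ]

    sess = openstack.connect().session
    barbican = barbican_client.Client(session=sess)
    payloads = [barbican.secrets.get(ref).payload for ref in SECRET_REFS]
    print([binascii.hexlify(p) for p in payloads])
    print('identical:', payloads[0] == payloads[1])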

   

[Yahoo-eng-team] [Bug 1771806] [NEW] Ironic nova-compute failover creates new resource provider removing the resource_provider_aggregates link

2018-05-17 Thread Belmiro Moreira
Public bug reported:

When using the request_filter functionality, aggregates are mapped to
placement_aggregates.
placement_provider_aggregates contains the resource providers that are
mapped in aggregate_hosts.

The problem happens when a nova-compute service for ironic fails and its
hosts are automatically moved to a different nova-compute. In this case a
new compute_node entry is created, which results in a new resource
provider.

As a consequence, placement_provider_aggregates does not contain the new
resource providers.

** Affects: nova
 Importance: Undecided
 Assignee: Surya Seetharaman (tssurya)
 Status: New


** Tags: ironic placement

** Tags removed: placem
** Tags added: ironic placement

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1771806

Title:
  Ironic nova-compute failover creates new resource provider removing
  the resource_provider_aggregates link

Status in OpenStack Compute (nova):
  New

Bug description:
  When using the request_filter functionality, aggregates are mapped to
  placement_aggregates.
  placement_provider_aggregates contains the resource providers that are
  mapped in aggregate_hosts.

  The problem happens when a nova-compute service for ironic fails and its
  hosts are automatically moved to a different nova-compute. In this case a
  new compute_node entry is created, which results in a new resource
  provider.

  As a consequence, placement_provider_aggregates does not contain the new
  resource providers.
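
  To see the effect, compare the aggregates placement reports for the old and
  new resource providers. A sketch against the placement REST API using a
  keystoneauth session (the resource provider UUIDs are placeholders; the
  microversion header is an assumption):

    import openstack

    RP_UUIDS = ['OLD_RP_UUID', 'NEW_RP_UUID']  # placeholders

    sess = openstack.connect().session
    for rp in RP_UUIDS:
        resp = sess.get('/resource_providers/%s/aggregates' % rp,
                        endpoint_filter={'service_type': 'placement'},
                        headers={'OpenStack-API-Version': 'placement 1.19'})
        print(rp, resp.json().get('aggregates'))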

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1771806/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1763204] Re: wsgi.py is missing

2018-05-17 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/561802
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=0ca736e5da47413db6749053e6083b82cbb24825
Submitter: Zuul
Branch:master

commit 0ca736e5da47413db6749053e6083b82cbb24825
Author: Adrian Turjak 
Date:   Tue Apr 17 18:27:27 2018 +1200

Create new wsgi.py file and deprecate old file

Django 1.4 stopped creating django.wsgi files and the common
practice now for a while has been a wsgi.py since it is actually
python code, and should actually be importable.

Right now someone has to copy and rename the existing file if they
want to use it with a server like gunicorn.

This patch adds a new file in location that is importable via python
and adds a deprecation log to the old one.

This also updates the wsgi generation commands to instead  create
'horizon_wsgi.py' and have the apache conf generation also use that
or the default wsgi file.

Change-Id: I0f8bd16c8973ad23bcd8f73b54584dc69e5aed0c
Closes-Bug: #1763204


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1763204

Title:
  wsgi.py is missing

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Horizon was likely started very early in Django's history, and thus still
  ships the old-format WSGI file as "django.wsgi".
  https://github.com/openstack/horizon/tree/master/openstack_dashboard/wsgi

  This is not how Django names this file anymore, nor how it is typically
  used.

  
https://stackoverflow.com/questions/20035252/difference-between-wsgi-py-and-django-wsgi
  https://docs.djangoproject.com/en/1.11/howto/deployment/wsgi/modwsgi/

  The expectation is having a wsgi.py file somewhere along your
  importable python path. Normally this is in the same place as your
  settings.py file when building a default django project.

  Ideally we should rename and move the file to a place from which it is
  easier to import:
  horizon/openstack_dashboard/wsgi/django.wsgi -> horizon/openstack_dashboard/wsgi.py

  gunicorn, one of the most popular WSGI servers around, cannot import and
  run it because it isn't a '.py' file.

  By doing the above move and rename the file can now be imported and run as:
  gunicorn openstack_dashboard.wsgi:application

  
  NOTE: This will likely break anyone using it right now. We may instead want 
to copy the file to the new location and add a deprecation log into the old one 
with a notice to remove in 2 cycles. Ideally also document that deployers 
should be using the new file.
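
  For reference, a minimal sketch of the kind of wsgi.py the report asks for
  (standard Django boilerplate, importable by gunicorn as
  openstack_dashboard.wsgi:application):

    import os

    from django.core.wsgi import get_wsgi_application

    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'openstack_dashboard.settings')
    application = get_wsgi_application()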

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1763204/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1771781] [NEW] Quota does not check invalid tenant_id

2018-05-17 Thread Yang Youseok
Public bug reported:

Currently, the neutron quota API accepts an invalid tenant_id value without
validation. A user can even add a quota entry for an arbitrary, non-existent
tenant, because by default the quota engine creates a new entry if the
queried entry is not found.

This bug is also found in other OpenStack projects (nova, trove, ...) that
use similar quota logic; on the nova side there was a commit to fix it
(https://review.openstack.org/#/c/435010/). I found that neutron does not
have any similar approach, so it is worth discussing a solution (which would
mean accessing the keystone API in the middle of the quota API).

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1771781

Title:
  Quota does not check invalid tenant_id

Status in neutron:
  New

Bug description:
  Currently, the neutron quota API accepts an invalid tenant_id value without
  validation. A user can even add a quota entry for an arbitrary, non-existent
  tenant, because by default the quota engine creates a new entry if the
  queried entry is not found.

  This bug is also found in other OpenStack projects (nova, trove, ...) that
  use similar quota logic; on the nova side there was a commit to fix it
  (https://review.openstack.org/#/c/435010/). I found that neutron does not
  have any similar approach, so it is worth discussing a solution (which
  would mean accessing the keystone API in the middle of the quota API).
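
  A sketch of the kind of pre-check being discussed, using openstacksdk to ask
  keystone whether the project exists before accepting the quota update
  (credentials are assumed to come from clouds.yaml / OS_* environment
  variables):

    import openstack
    from openstack import exceptions

    def project_exists(conn, tenant_id):
        try:
            conn.identity.get_project(tenant_id)
            return True
        except exceptions.ResourceNotFound:
            return False

    conn = openstack.connect()
    print(project_exists(conn, 'not-a-real-project-id'))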

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1771781/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1771773] [NEW] Ssl2/3 should not be used for secure VNC access

2018-05-17 Thread Andrey Volkov
Public bug reported:

This report is based on Bandit scanner results.

On
https://git.openstack.org/cgit/openstack/nova/tree/nova/console/rfb/authvencrypt.py?h=refs/heads/master#n137

137 wrapped_sock = ssl.wrap_socket(

wrap_socket is used without ssl_version, which means SSLv23 by default.
Since the server side (QEMU) is based on GnuTLS, which supports all modern
TLS versions, it is possible to use a stricter TLS version on the client
(TLSv1.2). Another option is to make this parameter configurable.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1771773

Title:
  Ssl2/3 should not be used for secure VNC access

Status in OpenStack Compute (nova):
  New

Bug description:
  This report is based on Bandit scanner results.

  On
  
https://git.openstack.org/cgit/openstack/nova/tree/nova/console/rfb/authvencrypt.py?h=refs/heads/master#n137

  137 wrapped_sock = ssl.wrap_socket(

  wrap_socket is used without ssl_version, which means SSLv23 by default.
  Since the server side (QEMU) is based on GnuTLS, which supports all modern
  TLS versions, it is possible to use a stricter TLS version on the client
  (TLSv1.2). Another option is to make this parameter configurable.
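
  For illustration, a sketch of pinning the client side of the handshake to
  TLS 1.2 instead of the SSLv23 default (function and parameter names here
  are illustrative, not the actual nova code):

    import ssl

    def wrap_vnc_socket(sock, ca_certs, certfile=None, keyfile=None):
        # refuse SSLv2/3 and older TLS versions on the client side
        return ssl.wrap_socket(
            sock,
            keyfile=keyfile,
            certfile=certfile,
            ca_certs=ca_certs,
            cert_reqs=ssl.CERT_REQUIRED,
            ssl_version=ssl.PROTOCOL_TLSv1_2)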

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1771773/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1690998] Re: Adding a test to check for using multiple security groups

2018-05-17 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/485564
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=1658e353c336d666ec43f725bd2c87633b43571d
Submitter: Zuul
Branch:master

commit 1658e353c336d666ec43f725bd2c87633b43571d
Author: Dongcan Ye 
Date:   Thu Jul 20 19:27:23 2017 +0800

Fullstack: Add using multiple security groups

Change-Id: I8eadb434be3e7d0849a6e4c3bdf75dbfb5f15a83
Closes-Bug: #1690998


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1690998

Title:
  Adding a test to check for using multiple security groups

Status in neutron:
  Fix Released

Bug description:
  The scenario:

  1. Create a port.
  2. Attach a security group with ICMP and SSH rules to the port and launch
     an instance with this port.
  3. Add another security group with UDP and ICMP rules.
  4. Check ICMP/UDP and SSH.
  5. Remove the first security group and check that SSH fails but ICMP/UDP
     still works (steps 3 and 5 are sketched below).
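
  A minimal sketch of steps 3 and 5 with openstacksdk (the port and security
  group names are placeholders; credentials are assumed to come from
  clouds.yaml / OS_* environment variables):

    import openstack

    conn = openstack.connect()
    port = conn.network.find_port('test-port')                     # placeholder
    sg_ssh_icmp = conn.network.find_security_group('sg-ssh-icmp')  # placeholder
    sg_udp_icmp = conn.network.find_security_group('sg-udp-icmp')  # placeholder

    # step 3: attach the second group alongside the first
    conn.network.update_port(
        port, security_group_ids=[sg_ssh_icmp.id, sg_udp_icmp.id])

    # step 5: drop the first group, leaving only UDP/ICMP
    conn.network.update_port(port, security_group_ids=[sg_udp_icmp.id])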

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1690998/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp