[Yahoo-eng-team] [Bug 2061922] [NEW] max_password_length config and logs inconsistent

2024-04-16 Thread Sam Morrison
Public bug reported:

We recently rolled out a config change updating max_password_length to
avoid all the log messages. We set it to 54, as mentioned in the release
notes, which turned out to be a BIG mistake: it broke authentication for
everyone using existing application credentials.

There is a bit of confusion as to what to do here and the code and the
release notes are inconsistent.


Upgrading to zed we got a lot of these in the logs [1]:

"Truncating password to algorithm specific maximum length 72
characters."

In the config help [2] for "max_password_length" it says:

"The bcrypt max_password_length is 72 bytes."

In the release notes [3] it says:

"Currently only bcrypt has fixed allowed lengths defined which is 54
characters."


[1] 
https://github.com/openstack/keystone/blob/9b0b414e3eb915c89c9786abeb1307ba734f5901/keystone/common/password_hashing.py#L89
[2] 
https://github.com/openstack/keystone/blob/9b0b414e3eb915c89c9786abeb1307ba734f5901/keystone/conf/identity.py#L106
[3] https://docs.openstack.org/releasenotes/keystone/zed.html
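
For illustration, here is a minimal sketch of the truncation behaviour
that produces the log message above. The names and logic are assumptions
based only on the quoted config help and log text, not the actual
keystone code:

    BCRYPT_MAX_LENGTH = 72  # bytes, per the config help for bcrypt

    def truncate_password(password, configured_max_length):
        # Warn and truncate at the algorithm-specific maximum, which is
        # where the "Truncating password to algorithm specific maximum
        # length 72 characters." message comes from.
        limit = min(configured_max_length or BCRYPT_MAX_LENGTH,
                    BCRYPT_MAX_LENGTH)
        if len(password) > limit:
            print("Truncating password to algorithm specific maximum "
                  "length %d characters." % limit)
        return password[:limit]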

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/2061922

Title:
  max_password_length config and logs inconsistent

Status in OpenStack Identity (keystone):
  New

Bug description:
  We recently rolled out a config change updating max_password_length to
  avoid all the log messages. We set it to 54, as mentioned in the
  release notes, which turned out to be a BIG mistake: it broke
  authentication for everyone using existing application credentials.

  There is a bit of confusion as to what to do here and the code and the
  release notes are inconsistent.

  
  Upgrading to zed we got a lot of these in the logs [1]:

  "Truncating password to algorithm specific maximum length 72
  characters."

  In the config help [2] for "max_password_length" it says:

  "The bcrypt max_password_length is 72 bytes."

  In the release notes [3] it says:

  "Currently only bcrypt has fixed allowed lengths defined which is 54
  characters."

  
  [1] 
https://github.com/openstack/keystone/blob/9b0b414e3eb915c89c9786abeb1307ba734f5901/keystone/common/password_hashing.py#L89
  [2] 
https://github.com/openstack/keystone/blob/9b0b414e3eb915c89c9786abeb1307ba734f5901/keystone/conf/identity.py#L106
  [3] https://docs.openstack.org/releasenotes/keystone/zed.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/2061922/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1998268] [NEW] Fernet uid/gid logic issue

2022-11-29 Thread Sam Morrison
Public bug reported:

Running

keystone-manage fernet_rotate --keystone-user root --keystone-group
keystone

will not work as expected because of faulty logic when the uid is 0: the
code treats 0 as falsy (0 == False).

The new 0 key ends up owned by root:root instead of root:keystone.
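
A minimal sketch of the falsy-uid pattern being described (illustrative
only, not the actual keystone-manage code):

    import os

    def chown_key_buggy(path, uid, gid):
        # With --keystone-user root the uid is 0, which is falsy, so the
        # chown is skipped and the key stays owned by the process user
        # (root:root).
        if uid and gid:
            os.chown(path, uid, gid)

    def chown_key_fixed(path, uid, gid):
        # Test explicitly for "was a value supplied" instead.
        if uid is not None and gid is not None:
            os.chown(path, uid, gid)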

** Affects: keystone
 Importance: Undecided
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1998268

Title:
  Fernet uid/gid logic issue

Status in OpenStack Identity (keystone):
  In Progress

Bug description:
  Running

  keystone-manage fernet_rotate --keystone-user root --keystone-group
  keystone

  will not work as expected because of faulty logic when the uid is 0:
  the code treats 0 as falsy (0 == False).

  The new 0 key ends up owned by root:root instead of root:keystone.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1998268/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1997185] [NEW] ML2 When OVN mech driver is enabled dhcp extension is disabled

2022-11-20 Thread Sam Morrison
Public bug reported:

We use ML2 with the linuxbridge and ovn mech drivers. When upgrading to
Yoga, DHCP stopped working because the DHCP extension was disabled.

** Affects: neutron
 Importance: Undecided
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1997185

Title:
  ML2 When OVN mech driver is enabled dhcp extension is disabled

Status in neutron:
  In Progress

Bug description:
  We use ML2 with the linuxbridge and ovn mech drivers. When upgrading
  to Yoga, DHCP stopped working because the DHCP extension was disabled.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1997185/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1971228] Re: cell conductor tries to connect the other cells's DB when it is started

2022-06-15 Thread Sam Morrison
I've been looking into this and it is definitely a bug. The issue is that
cell conductors still need to be configured with the API database to allow
for some upcalls
(https://docs.openstack.org/nova/latest/admin/cells.html#operations-requiring-upcalls),
unless that doc is out of date.

When api_database is set, nova-conductor will check the service versions
of all cells.

nova/utils.py has:

    if CONF.api_database.connection is not None:

I think this check needs to be different, though I'm not sure there is a
good way to determine whether the conductor is running at the API level or
the cell level. Checking for the api_database connection alone isn't good
enough, I think.
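
A hedged sketch of the kind of check being discussed; "conf" mimics nova's
CONF object and the is_cell_conductor flag is purely hypothetical (no such
option exists in nova today):

    def should_check_all_cells(conf, is_cell_conductor=False):
        # Today: any conductor with an API database configured performs
        # the get_minimum_version_all_cells check, even a cell-local
        # conductor that cannot reach the other cells' databases.
        if conf.api_database.connection is None:
            return False
        # Possible refinement: skip the cross-cell check when the operator
        # has marked this conductor as cell-local.
        return not is_cell_conductor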

** Changed in: nova
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1971228

Title:
  cell conductor tries to connect the other cells's DB when it is
  started

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  Description
  ===

  We observed some nova-conductor behaviour during startup that might not
  be correct, and would like to see if any improvement can be made.

  Steps to reproduce
  ==

  - We have a large cells v2 setup; the nova services are running the Victoria release.
  - Just start/restart nova-conductor in one cell and check the logs

  Actual result
  =

  The conductor tries to connect to the other cells' DBs in order to run
  get_minimum_version_all_cells. Since each cell is guarded by a firewall,
  the connection attempt fails and has to wait for its timeout (60
  seconds) before the following nova-conductor services can start.

  This is fine for the superconductor, but the cell conductor usually
  cannot reach the other cells' DBs. Can this behaviour be changed, or are
  there other considerations behind it?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1971228/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1967183] [NEW] Filtering by vcpus throws an error

2022-03-30 Thread Sam Morrison
Public bug reported:

When I try to filter instances by the vcpus filter I get the following
error:

'vcpus' does not match any of regexes: '^_' (HTTP 400)

When I look at the nova compute api spec I see that filtering by vcpus
isn't supported https://docs.openstack.org/api-
ref/compute/?expanded=list-servers-detail#list-servers

** Affects: horizon
 Importance: Undecided
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1967183

Title:
  Filtering by vcpus throws an error

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  When I try to filter instances by the vcpus filter I get the following
  error:

  'vcpus' does not match any of regexes: '^_' (HTTP 400)

  When I look at the nova compute api spec I see that filtering by vcpus
  isn't supported https://docs.openstack.org/api-
  ref/compute/?expanded=list-servers-detail#list-servers

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1967183/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1966365] [NEW] Filtering instances by az, key_name etc. doesn't work

2022-03-24 Thread Sam Morrison
Public bug reported:

When using the filters on the instance table view, filtering by several
attributes such as availability_zone or key_name has no effect. This is
because the microversion used is not high enough.

Support for these filters was added in microversion 2.83:

https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#id76
 
https://docs.openstack.org/api-ref/compute/?expanded=list-servers-detail,show-server-details-detail#list-servers
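
For illustration, requesting the newer microversion with python-novaclient
looks something like this (the credentials and filter values are
placeholders, not taken from this report):

    from keystoneauth1 import loading, session as ks_session
    from novaclient import client

    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(auth_url='http://keystone:5000/v3',
                                    username='demo', password='secret',
                                    project_name='demo',
                                    user_domain_id='default',
                                    project_domain_id='default')
    sess = ks_session.Session(auth=auth)

    # Microversion 2.83 (or later) is needed for these server list filters.
    nova = client.Client('2.83', session=sess)
    servers = nova.servers.list(search_opts={'availability_zone': 'nova',
                                             'key_name': 'mykey'})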

** Affects: horizon
 Importance: Undecided
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1966365

Title:
  Filtering instances by az, key_name etc. doesn't work

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  When using the filters on the instance table view, filtering by several
  attributes such as availability_zone or key_name has no effect. This is
  because the microversion used is not high enough.

  Support for these filters was added in microversion 2.83:

  
https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#id76
 
  
https://docs.openstack.org/api-ref/compute/?expanded=list-servers-detail,show-server-details-detail#list-servers

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1966365/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1940341] [NEW] Adding a second region breaks existing clients

2021-08-17 Thread Sam Morrison
Public bug reported:

We currently have an installation with a single region. We want to
expand to another region but this is proving difficult.

When I add a new endpoint for our new region, clients that are not
configured to use a specific region can (depending on the ID of the
endpoint) suddenly start hitting the new endpoint.

It is not practical to get all of our thousands of users, across many
different clients, to explicitly set a region when authenticating.

So I'm wondering if it is possible to support a default region or
default endpoint. This endpoint would be listed first when listing the
endpoints for a service. Currently the ordering is based on the endpoint
ID, so a workaround is to hack the endpoint IDs to ensure the one you
want is listed first.

Thoughts?
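
For what it's worth, a hedged illustration with keystoneauth1 of how a
client only gets a predictable endpoint when it names a region (the auth
details and region below are placeholders):

    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    auth = v3.Password(auth_url='http://keystone:5000/v3',
                       username='demo', password='secret',
                       project_name='demo',
                       user_domain_id='default',
                       project_domain_id='default')
    sess = session.Session(auth=auth)

    # Without region_name the client just takes whichever matching catalog
    # endpoint comes first, which is what changes when a second region's
    # endpoint is added.
    url = sess.get_endpoint(service_type='compute', interface='public')

    # Only an explicit region makes the choice deterministic today.
    url = sess.get_endpoint(service_type='compute', interface='public',
                            region_name='RegionOne')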

** Affects: keystone
 Importance: Undecided
 Status: New

** Summary changed:

- Default regions and endpoints
+ Adding a second region breaks existing clients

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1940341

Title:
  Adding a second region breaks existing clients

Status in OpenStack Identity (keystone):
  New

Bug description:
  We currently have an installation with a single region. We want to
  expand to another region but this is proving difficult.

  When I add a new endpoint for our new region, clients that are not
  configured to use a specific region can (depending on the ID of the
  endpoint) suddenly start hitting the new endpoint.

  It is not practical to get all of our thousands of users, across many
  different clients, to explicitly set a region when authenticating.

  So I'm wondering if it is possible to support a default region or
  default endpoint. This endpoint would be listed first when listing the
  endpoints for a service. Currently the ordering is based on the
  endpoint ID, so a workaround is to hack the endpoint IDs to ensure the
  one you want is listed first.

  Thoughts?

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1940341/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1746627] Re: Reverse floating IP records are not removed when floating IP is deleted

2021-01-21 Thread Sam Morrison
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1746627

Title:
  Reverse floating IP records are not removed when floating IP is
  deleted

Status in Designate:
  Triaged
Status in neutron:
  New

Bug description:
  When I release/delete a floating IP from my project that has a
  corresponding FloatingIP PTR record the record is not deleted.

  Steps to reproduce:

  Assign a floating IP to my project
  Set a PTR record on my floating IP using the reverse floating IP API
  Release floatingIP

  PTR record still exists in designate.

  I have the sink running, and it picks up the notification if you have
  the neutron_floatingip handler, but that handler is for something else.
  I think it needs to be modified to also handle the reverse PTR records
  (managed_resource_type = ptr:floatingip).

To manage notifications about this bug go to:
https://bugs.launchpad.net/designate/+bug/1746627/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1837252] Re: [OSSA-2019-004] Ageing time of 0 disables linuxbridge MAC learning (CVE-2019-15753)

2020-12-21 Thread Sam Morrison
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1837252

Title:
  [OSSA-2019-004] Ageing time of 0 disables linuxbridge MAC learning
  (CVE-2019-15753)

Status in Ubuntu Cloud Archive:
  New
Status in neutron:
  Invalid
Status in OpenStack Compute (nova):
  Invalid
Status in os-vif:
  Fix Released
Status in os-vif stein series:
  Fix Committed
Status in os-vif trunk series:
  Fix Released
Status in OpenStack Security Advisory:
  Fix Released

Bug description:
  Release: OpenStack Stein
  Driver: LinuxBridge

  Using Stein w/ the LinuxBridge mech driver/agent, we have found that
  traffic is being flooded across bridges. Using tcpdump inside an
  instance, you can see unicast traffic for other instances.

  We have confirmed the macs table shows the aging timer set to 0 for
  permanent entries, and the bridge is NOT learning new MACs:

  root@lab-compute01:~# brctl showmacs brqd0084ac0-f7
  port no   mac addr            is local?   ageing timer
     5      24:be:05:a3:1f:e1   yes         0.00
     5      24:be:05:a3:1f:e1   yes         0.00
     1      fe:16:3e:02:62:18   yes         0.00
     1      fe:16:3e:02:62:18   yes         0.00
     7      fe:16:3e:07:65:47   yes         0.00
     7      fe:16:3e:07:65:47   yes         0.00
     4      fe:16:3e:1d:d6:33   yes         0.00
     4      fe:16:3e:1d:d6:33   yes         0.00
     9      fe:16:3e:2b:2f:f0   yes         0.00
     9      fe:16:3e:2b:2f:f0   yes         0.00
     8      fe:16:3e:3c:42:64   yes         0.00
     8      fe:16:3e:3c:42:64   yes         0.00
    10      fe:16:3e:5c:a6:6c   yes         0.00
    10      fe:16:3e:5c:a6:6c   yes         0.00
     2      fe:16:3e:86:9c:dd   yes         0.00
     2      fe:16:3e:86:9c:dd   yes         0.00
     6      fe:16:3e:91:9b:45   yes         0.00
     6      fe:16:3e:91:9b:45   yes         0.00
    11      fe:16:3e:b3:30:00   yes         0.00
    11      fe:16:3e:b3:30:00   yes         0.00
     3      fe:16:3e:dc:c3:3e   yes         0.00
     3      fe:16:3e:dc:c3:3e   yes         0.00

  root@lab-compute01:~# bridge fdb show | grep brqd0084ac0-f7
  01:00:5e:00:00:01 dev brqd0084ac0-f7 self permanent
  fe:16:3e:02:62:18 dev tap74af38f9-2e master brqd0084ac0-f7 permanent
  fe:16:3e:02:62:18 dev tap74af38f9-2e vlan 1 master brqd0084ac0-f7 permanent
  fe:16:3e:86:9c:dd dev tapb00b3c18-b3 master brqd0084ac0-f7 permanent
  fe:16:3e:86:9c:dd dev tapb00b3c18-b3 vlan 1 master brqd0084ac0-f7 permanent
  fe:16:3e:dc:c3:3e dev tap7284d235-2b master brqd0084ac0-f7 permanent
  fe:16:3e:dc:c3:3e dev tap7284d235-2b vlan 1 master brqd0084ac0-f7 permanent
  fe:16:3e:1d:d6:33 dev tapbeb9441a-99 vlan 1 master brqd0084ac0-f7 permanent
  fe:16:3e:1d:d6:33 dev tapbeb9441a-99 master brqd0084ac0-f7 permanent
  24:be:05:a3:1f:e1 dev eno1.102 vlan 1 master brqd0084ac0-f7 permanent
  24:be:05:a3:1f:e1 dev eno1.102 master brqd0084ac0-f7 permanent
  fe:16:3e:91:9b:45 dev tapc8ad2cec-90 master brqd0084ac0-f7 permanent
  fe:16:3e:91:9b:45 dev tapc8ad2cec-90 vlan 1 master brqd0084ac0-f7 permanent
  fe:16:3e:07:65:47 dev tap86e2c412-24 master brqd0084ac0-f7 permanent
  fe:16:3e:07:65:47 dev tap86e2c412-24 vlan 1 master brqd0084ac0-f7 permanent
  fe:16:3e:3c:42:64 dev tap37bcb70e-9e master brqd0084ac0-f7 permanent
  fe:16:3e:3c:42:64 dev tap37bcb70e-9e vlan 1 master brqd0084ac0-f7 permanent
  fe:16:3e:2b:2f:f0 dev tap40f6be7c-2d vlan 1 master brqd0084ac0-f7 permanent
  fe:16:3e:2b:2f:f0 dev tap40f6be7c-2d master brqd0084ac0-f7 permanent
  fe:16:3e:b3:30:00 dev tap6548bacb-c0 vlan 1 master brqd0084ac0-f7 permanent
  fe:16:3e:b3:30:00 dev tap6548bacb-c0 master brqd0084ac0-f7 permanent
  fe:16:3e:5c:a6:6c dev tap61107236-1e vlan 1 master brqd0084ac0-f7 permanent
  fe:16:3e:5c:a6:6c dev tap61107236-1e master brqd0084ac0-f7 permanent

  The ageing time for the bridge is set to 0:

  root@lab-compute01:~# brctl showstp brqd0084ac0-f7
  brqd0084ac0-f7
   bridge id              8000.24be05a31fe1
   designated root        8000.24be05a31fe1
   root port                 0                  path cost                  0
   max age                  20.00               bridge max age            20.00
   hello time                2.00               bridge hello time          2.00
   forward delay             0.00               bridge forward delay       0.00
   ageing time               0.00
   hello timer               0.00               tcn timer                  0.00
   topology change timer     0.00               gc timer                   0.00
   flags

  The default ageing time of 300 is being overridden by the value set
  here:

  Stein: 

[Yahoo-eng-team] [Bug 1890065] [NEW] Impossible to migrate affinity instances

2020-08-02 Thread Sam Morrison
Public bug reported:

We have a hypervisor that needs to go down for maintenance. There are 2
instances on the host within a server group with affinity.

It seems to be impossible to live migrate them both to a different host.

It looks like there used to be a force argument for live migration, but
this was removed in microversion 2.68:

"Remove support for forced live migration and evacuate server actions."

Sadly, it doesn't mention why this was removed.


Is there a way to temporarily break the affinity contract for maintenance?

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1890065

Title:
  Impossible to migrate affinity instances

Status in OpenStack Compute (nova):
  New

Bug description:
  We have a hypervisor that needs to go down for maintenance. There are
  2 instances on the host within a server group with affinity.

  It seems to be impossible to live migrate them both to a different
  host.

  It looks like there used to be a force argument for live migration,
  but this was removed in microversion 2.68:

  "Remove support for forced live migration and evacuate server
  actions."

  Sadly, it doesn't mention why this was removed.

  
  Is there a way to temporarily break the affinity contract for maintenance?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1890065/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1885987] [NEW] evacuate can fail in multi cell environment, doesn't restrict to source cell

2020-07-01 Thread Sam Morrison
Public bug reported:

When running multiple cells and calling evacuate on an instance, the
scheduler does not restrict the candidate destination hosts to the
instance's source cell.

This is affecting us on Stein, but looking at the code it appears to
affect master too.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1885987

Title:
  evacuate can fail in multi cell environment, doesn't restrict to
  source cell

Status in OpenStack Compute (nova):
  New

Bug description:
  When running multiple cells and calling evacuate on an instance, the
  scheduler does not restrict the candidate destination hosts to the
  instance's source cell.

  This is affecting us on Stein, but looking at the code it appears to
  affect master too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1885987/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1741810] Re: Filter AggregateImagePropertiesIsolation doesn't Work

2020-06-30 Thread Sam Morrison
** Changed in: nova
   Status: Expired => Confirmed

** Changed in: nova
   Status: Confirmed => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1741810

Title:
  Filter AggregateImagePropertiesIsolation doesn't Work

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  I tried to use the AggregateImagePropertiesIsolation filter to isolate
  Windows instances onto a subset of hosts in order to reduce the number
  of Windows licenses.

  In the Pike release the nova scheduler's
  AggregateImagePropertiesIsolation filter appears to always return all
  hosts. If this is a bug, the filter needs to be fixed.
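
  For context, a conceptual sketch of what the filter is expected to do
  (illustrative only, not the actual nova code):

      def host_passes(aggregate_metadata, image_properties):
          # A host passes only if none of its aggregates' metadata
          # conflicts with the image properties, e.g. an image with
          # os=windows should only land on aggregates with os=windows.
          for key, values in aggregate_metadata.items():
              prop = image_properties.get(key)
              if prop is not None and prop not in values:
                  return False
          return True

      # With the aggregates and images from the steps below:
      print(host_passes({'os': {'windows'}}, {'os': 'windows'}))  # True
      print(host_passes({'os': {'linux'}}, {'os': 'windows'}))    # False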

  
  Steps to reproduce
  ==
  # add filter to nova.conf and restart nova scheduler
  [filter_scheduler]
  enabled_filters = 
AggregateImagePropertiesIsolation,RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter

  # image create with os property
  openstack image create --min-disk 3 --min-ram 512 --disk-format qcow2 
--public --file windows.img img_windows
  openstack image create --min-disk 1 --min-ram 64 --disk-format qcow2 --public 
--file cirros-0.3.5-x86_64-disk.img img_linux
  openstack image set --property os=windows img_windows
  openstack image set --property os=linux img_linux

  # host aggregate create with os property
  openstack aggregate create os_win
  openstack aggregate add host os_win compute01
  openstack aggregate add host os_win compute02
  openstack aggregate set --property os=windows os_win
   
  openstack aggregate create os_linux
  openstack aggregate add host os_linux compute03
  openstack aggregate add host os_linux compute04
  openstack aggregate add host os_linux compute05
  openstack aggregate set --property os=linux os_linux

  # create flavor
  openstack flavor create --ram 1024 --disk 1 --vcpus 1 --public small
  openstack flavor create --ram 4096 --disk 20 --vcpus 2 --public medium

  # create windows instances
  openstack server create --image img_windows --network test-net --flavor 
medium --max 10 test-win

  
  Expected result
  ===
  Windows instances can be found in compute01, compute02 only

  Actual result
  =
  Windows instances were found on every host.


  Environment
  ===
  1. Nova's version
  (nova-scheduler)[nova@control01 /]$ rpm -qa | grep nova
  python-nova-17.0.0-0.20171206190932.cbdc893.el7.centos.noarch
  openstack-nova-scheduler-17.0.0-0.20171206190932.cbdc893.el7.centos.noarch
  openstack-nova-common-17.0.0-0.20171206190932.cbdc893.el7.centos.noarch
  python2-novaclient-9.1.0-0.20170804194758.0a53d19.el7.centos.noarch

  2. hypervisor
  (nova-libvirt)[root@compute01 /]# rpm -qa | grep kvm
  qemu-kvm-common-ev-2.9.0-16.el7_4.11.1.x86_64
  libvirt-daemon-kvm-3.2.0-14.el7_4.5.x86_64
  qemu-kvm-ev-2.9.0-16.el7_4.11.1.x86_64

  2. Storage
  ceph version 12.2.1 (3e7492b9ada8bdc9a5cd0feafd42fbca27f9c38e) luminous 
(stable)

  3. Networking
  Neutron with OpenVSwitch

  
  Logs & Configs
  ==
  $ tail -f nova-scheduler.log | grep AggregateImagePropertiesIsolation
  2018-01-08 11:52:53.964 6 DEBUG nova.filters 
[req-3828686f-1d46-407a-bebb-14f7a573c52e 9b1f4f0bcea2428c93b8b4276ba67cb7 
188be4011b2b49529cbdd6eade152233 - default default] Filter 
AggregateImagePropertiesIsolation returned 5 host(s) get_filtered_objects 
/usr/lib/python2.7/site-packages/nova/filters.py:104

  # add filter to nova.conf and restart nova scheduler
  [filter_scheduler]
  enabled_filters = 
AggregateImagePropertiesIsolation,RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1741810/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1885647] [NEW] Unable to allow users to see role assignments on all their projects

2020-06-29 Thread Sam Morrison
Public bug reported:

I'm trying to allow users to see what roles they have on all of their
projects.

It would seem that the following policy rule should do this:

"identity:list_role_assignments": "rule:admin_or_monitoring or
project_id:%(scope.project.id)s or user_id:%(scope.user.id)s"

However this doesn't work.

With project_id:%(scope.project.id)s a user is allowed to list the role
assignments of the project they are authed to, but the
user_id:%(scope.user.id)s part doesn't work.

I notice that when using the keystone client it treats filtering by
user_id and project_id differently

When filtering by project it does:
/v3/role_assignments?scope.project.id=094ae1e2c08f4eddb444a9d9db71ab40

But when filtering by user it does:
/v3/role_assignments?user.id=d1fa8867e42444cf8724e65fef1da549


Is there something I'm missing here or is this possibly a bug?
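
For illustration, a minimal oslo.policy sketch of why the user_id check
may never match if the target built from the ?user.id filter lacks a
scope.user.id key. The target contents are an assumption based on the
behaviour described above, not keystone's actual enforcement code:

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        'identity:list_role_assignments',
        'project_id:%(scope.project.id)s or user_id:%(scope.user.id)s'))

    creds = {'user_id': 'd1fa8867e42444cf8724e65fef1da549',
             'project_id': '094ae1e2c08f4eddb444a9d9db71ab40'}

    # Target built from ?scope.project.id=... matches the project_id check:
    print(enforcer.enforce('identity:list_role_assignments',
                           {'scope.project.id': creds['project_id']}, creds))

    # Target built from ?user.id=... has no 'scope.user.id' key, so the
    # user_id check can never match:
    print(enforcer.enforce('identity:list_role_assignments',
                           {'user.id': creds['user_id']}, creds))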

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1885647

Title:
  Unable to allow users to see role assignments on all their projects

Status in OpenStack Identity (keystone):
  New

Bug description:
  I'm trying to allow users to see what roles they have on all of their
  projects.

  It would seem that the following policy rule should do this:

  "identity:list_role_assignments": "rule:admin_or_monitoring or
  project_id:%(scope.project.id)s or user_id:%(scope.user.id)s"

  However this doesn't work.

  With project_id:%(scope.project.id)s a user is allowed to list the
  role assignments of the project they are authed to, but the
  user_id:%(scope.user.id)s part doesn't work.

  I notice that when using the keystone client it treats filtering by
  user_id and project_id differently

  When filtering by project it does:
  /v3/role_assignments?scope.project.id=094ae1e2c08f4eddb444a9d9db71ab40

  But when filtering by user it does:
  /v3/role_assignments?user.id=d1fa8867e42444cf8724e65fef1da549

  
  Is there something I'm missing here or is this possibly a bug?

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1885647/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1837339] Re: CIDR's of the form 12.34.56.78/0 should be an error

2020-04-01 Thread Sam Morrison
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1837339

Title:
  CIDR's of the form 12.34.56.78/0 should be an error

Status in OpenStack Dashboard (Horizon):
  Confirmed
Status in neutron:
  New
Status in OpenStack Security Advisory:
  Incomplete
Status in OpenStack Security Notes:
  New

Bug description:
  The problem is that some users do not understand how CIDRs work, and
  incorrectly use /0 when they are trying to specify a single IP or a
  subnet in an Access Rule.  Unfortunately 12.34.56.78/0 means the same
  thing as 0.0.0.0/0.

  The proposed fix is to insist that /0 only be used with 0.0.0.0/0 and
  the IPv6 equivalent ::/0 when entering or updating Access Rule CIDRs
  via the dashboard.
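
  A minimal sketch of that validation using the Python stdlib (an
  illustration of the proposal, not the horizon code):

      import ipaddress

      def valid_rule_cidr(cidr):
          net = ipaddress.ip_network(cidr, strict=False)
          # Only allow /0 for the "anywhere" forms 0.0.0.0/0 and ::/0.
          if net.prefixlen == 0:
              return cidr in ('0.0.0.0/0', '::/0')
          return True

      print(valid_rule_cidr('12.34.56.78/0'))   # False
      print(valid_rule_cidr('0.0.0.0/0'))       # True
      print(valid_rule_cidr('12.34.56.78/32'))  # True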

  I am labeling this as a security vulnerability since it leads to naive
  users creating instances with ports open to the world when they didn't
  intend to do that.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1837339/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1868033] [NEW] Booting instance with pci_device fails during rocky->stein live upgrade

2020-03-18 Thread Sam Morrison
Public bug reported:

Environment:

Stein nova-conductor having set upgrade_levels to rocky 
Rocky nova-compute

Boot an instance with a flavour that has a pci_device

Error:

Failed to publish message to topic 'nova': maximum recursion depth
exceeded: RuntimeError: maximum recursion depth exceeded


Tracked this down to it continually trying to backport the InstancePCIRequests object:

It gets as arguments:
objinst={u'nova_object.version': u'1.1', u'nova_object.name': 
u'InstancePCIRequests', u'nova_object.data': {u'instance_uuid': 
u'08212b12-8fa8-42d9-8d3e-52ed60a64135', u'requests': [{u'nova_object.version': 
u'1.3', u'nova_object.name': u'InstancePCIRequest', u'nova_object.data': 
{u'count': 1, u'is_new': False, u'numa_policy': None, u'request_id': None, 
u'requester_id': None, u'alias_name': u'V100-32G', u'spec': [{u'vendor_id': 
u'10de', u'product_id': u'1db6'}]}, u'nova_object.namespace': u'nova'}]}, 
u'nova_object.namespace': u'nova'}, 

object_versions={u'InstancePCIRequests': '1.1', 'InstancePCIRequest':
'1.2'}


It fails because it doesn't backport the individual InstancePCIRequest inside 
the InstancePCIRequests object and so keeps trying.

Error it shows is: IncompatibleObjectVersion: Version 1.3 of
InstancePCIRequest is not supported, supported version is 1.2


I have fixed this in our setup by altering obj_make_compatible to also
downgrade the individual requests to version 1.2, which seems to work.
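
As a rough, hedged sketch (not the actual patch), the fix amounts to also
downgrading the embedded InstancePCIRequest primitives, for example:

    # Operates on the primitive dict shown above.  The real fix belongs in
    # InstancePCIRequests.obj_make_compatible and must also drop any
    # fields that were added after the target child version.
    def downgrade_embedded_pci_requests(primitive, target_version='1.2'):
        for request in primitive['nova_object.data'].get('requests', []):
            if request['nova_object.version'] > target_version:
                # string comparison is fine for these single-digit versions
                request['nova_object.version'] = target_version
        return primitive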

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1868033

Title:
  Booting instance with pci_device fails during rocky->stein live
  upgrade

Status in OpenStack Compute (nova):
  New

Bug description:
  Environment:

  Stein nova-conductor having set upgrade_levels to rocky 
  Rocky nova-compute

  Boot an instance with a flavour that has a pci_device

  Error:

  Failed to publish message to topic 'nova': maximum recursion depth
  exceeded: RuntimeError: maximum recursion depth exceeded

  
  Tracked this down to it continually trying to backport the
  InstancePCIRequests object:

  It gets as arguments:
  objinst={u'nova_object.version': u'1.1', u'nova_object.name': 
u'InstancePCIRequests', u'nova_object.data': {u'instance_uuid': 
u'08212b12-8fa8-42d9-8d3e-52ed60a64135', u'requests': [{u'nova_object.version': 
u'1.3', u'nova_object.name': u'InstancePCIRequest', u'nova_object.data': 
{u'count': 1, u'is_new': False, u'numa_policy': None, u'request_id': None, 
u'requester_id': None, u'alias_name': u'V100-32G', u'spec': [{u'vendor_id': 
u'10de', u'product_id': u'1db6'}]}, u'nova_object.namespace': u'nova'}]}, 
u'nova_object.namespace': u'nova'}, 

  object_versions={u'InstancePCIRequests': '1.1', 'InstancePCIRequest':
  '1.2'}

  
  It fails because it doesn't backport the individual InstancePCIRequest inside 
the InstancePCIRequests object and so keeps trying.

  Error it shows is: IncompatibleObjectVersion: Version 1.3 of
  InstancePCIRequest is not supported, supported version is 1.2


  I have fixed this in our setup by altering obj_make_compatible to also
  downgrade the individual requests to version 1.2, which seems to work.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1868033/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1863201] [NEW] stein regression listing security group rules

2020-02-13 Thread Sam Morrison
Public bug reported:

Upgrading neutron from Rocky to Stein we see a considerable slowdown when
listing all security groups for a project: it goes from ~2 seconds to
almost 2 minutes. Looking at the code it seems very inefficient, because
it gets all rules from the DB and then filters after the fact. We have
around 7000 rules in our QA environment.

I'm very keen to get this sorted but don't know the neutron code base that
well, so I can offer testing of patches if there are any out there already.

It looks like this happened with listing ports in Stein too; I found
https://bugzilla.redhat.com/show_bug.cgi?id=1737012 and wonder if it is
related.

With Rocky:
time openstack security group rule list 
| ID                                   | IP Protocol | Ethertype | IP Range           | Port Range | Remote Security Group                | Security Group                       |
| 01b877cc-1621-44cd-8e69-1345ab01a1ef | None        | IPv4      | 0.0.0.0/0          |            | None                                 | 3dcbd4fa-d017-4361-b0b0-b7508e923087 |
| 0c744788-6319-42e5-931a-5e7b0df166c4 | None        | IPv6      | ::/0               |            | None                                 | 3dcbd4fa-d017-4361-b0b0-b7508e923087 |
| 0fc6b79d-d211-4201-ac76-60fb8ea40c9c | None        | IPv4      | 0.0.0.0/0          |            | None                                 | 8f55c18b-cd8c-4d84-afef-f8b83d5eb128 |
| 17d6c8a3-7894-42a6-92f2-1bd56a30ef1d | tcp         | IPv4      | 0.0.0.0/0          | 80:80      | None                                 | ed257fd7-d825-4014-96a8-c16adfea70f0 |
| 19d3ba79-65f1-4c89-a1c2-b32049ceb25a | None        | IPv6      | ::/0               |            | None                                 | 008510a7-d176-4ee5-87e2-e74da06c55ba |
| 21f1d173-b99f-47a7-9983-6926f7bc58f3 | icmp        | IPv4      | 0.0.0.0/0          |            | None                                 | 008510a7-d176-4ee5-87e2-e74da06c55ba |
| 3321d5ff-11c3-4104-be13-107c789e4bf8 | None        | IPv6      | ::/0               |            | None                                 | 57cb14de-dd5f-4f0c-b0cf-a7effc36fca5 |
| 381c6816-9b5c-42b7-9dd3-dae12a49c08b | None        | IPv4      | 0.0.0.0/0          |            | None                                 | 3f63cfbb-87ee-4aa2-8193-7e86cb542881 |
| 3886ad10-99ea-4f60-a36c-ffbe80d92907 | None        | IPv6      | ::/0               |            | None                                 | ed257fd7-d825-4014-96a8-c16adfea70f0 |
| 5be4853a-75d1-435c-87ca-56c54a243f70 | None        | IPv4      | 0.0.0.0/0          |            | None                                 | 57cb14de-dd5f-4f0c-b0cf-a7effc36fca5 |
| 71656249-4454-410e-8e7d-24910df127ba | None        | IPv6      | ::/0               |            | None                                 | 8f55c18b-cd8c-4d84-afef-f8b83d5eb128 |
| 783324ac-6844-4d4d-985c-936015bcb66e | icmp        | IPv4      | 0.0.0.0/0          |            | None                                 | 3f63cfbb-87ee-4aa2-8193-7e86cb542881 |
| 7ca7f0cc-b4df-401f-aaa4-662f17afcfb0 | None        | IPv4      | 0.0.0.0/0          |            | None                                 | 008510a7-d176-4ee5-87e2-e74da06c55ba |
| 825a33ff-b693-456d-811e-a0b494e8e308 | None        | IPv6      | ::/0               |            | 008510a7-d176-4ee5-87e2-e74da06c55ba | 008510a7-d176-4ee5-87e2-e74da06c55ba |
| 89fd2d18-45d3-4a86-a020-09d240912e5c | tcp         | IPv4      | 128.250.116.173/32 | 22:22      | None                                 | 008510a7-d176-4ee5-87e2-e74da06c55ba |
| 8a1f45f1-e4c8-41e4-b6f3-80ab48b7e38d | None        | IPv6      | ::/0               |            | None                                 | bf7abb53-e5ca-428d-9fce-6a2e37a25ee0 |
| 9ebc6d15-e3eb-4d20-88d4-6737367ffc08 | None        | IPv4      | 0.0.0.0/0          |            | None                                 | ed257fd7-d825-4014-96a8-c16adfea70f0 |
| 9f29f539-a80a-4a8d-89cc-f714224b5f8c | icmp        | IPv4      | 0.0.0.0/0          |            | None                                 | 8f55c18b-cd8c-4d84-afef-f8b83d5eb128 |
| a1bc8f05-3a20-48c2-bae5-a60f4ffed514 | None        | IPv4      | 0.0.0.0/0          |            | 008510a7-d176-4ee5-87e2-e74da06c55ba | 008510a7-d176-4ee5-87e2-e74da06c55ba |
| bef999d6-669a-47f6-988c-e69bab6df87a | tcp         | IPv4      | 0.0.0.0/0          | 22:22      | 57cb14de-dd5f-4f0c-b0cf-a7effc36fca5 | bf7abb53-e5ca-428d-9fce-6a2e37a25ee0 |
| c5ce339b-cd92-492c-9af4-6eab875027ce | tcp         | IPv4      | 0.0.0.0/0          | 80:80      |

[Yahoo-eng-team] [Bug 1831954] [NEW] error in floating IP allocation rest call

2019-06-06 Thread Sam Morrison
Public bug reported:

Getting the following error when POSTing to network/floatingip/

[Thu Jun 06 23:24:09.770231 2019] [wsgi:error] [pid 26848:tid 140634143590144] 
[remote 172.26.25.159:58730] ERROR openstack_dashboard.api.rest.utils error 
invoking apiclient
[Thu Jun 06 23:24:09.770306 2019] [wsgi:error] [pid 26848:tid 140634143590144] 
[remote 172.26.25.159:58730] Traceback (most recent call last):
[Thu Jun 06 23:24:09.770315 2019] [wsgi:error] [pid 26848:tid 140634143590144] 
[remote 172.26.25.159:58730]   File 
"/usr/lib/python3/dist-packages/openstack_dashboard/api/rest/utils.py", line 
128, in _wrapped
[Thu Jun 06 23:24:09.770322 2019] [wsgi:error] [pid 26848:tid 140634143590144] 
[remote 172.26.25.159:58730] data = function(self, request, *args, **kw)
[Thu Jun 06 23:24:09.770328 2019] [wsgi:error] [pid 26848:tid 140634143590144] 
[remote 172.26.25.159:58730]   File 
"/usr/lib/python3/dist-packages/openstack_dashboard/api/rest/network.py", line 
70, in post
[Thu Jun 06 23:24:09.770335 2019] [wsgi:error] [pid 26848:tid 140634143590144] 
[remote 172.26.25.159:58730] None, params)
[Thu Jun 06 23:24:09.770343 2019] [wsgi:error] [pid 26848:tid 140634143590144] 
[remote 172.26.25.159:58730] TypeError: tenant_floating_ip_allocate() takes 
from 1 to 3 positional arguments but 4 were given
[Thu Jun 06 23:24:09.770356 2019] [wsgi:error] [pid 26848:tid 140634143590144] 
[remote 172.26.25.159:58730] 


** Affects: horizon
 Importance: Undecided
 Assignee: Sam Morrison (sorrison)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1831954

Title:
  error in floating IP allocation rest call

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Getting the following error when POSTing to network/floatingip/

  [Thu Jun 06 23:24:09.770231 2019] [wsgi:error] [pid 26848:tid 
140634143590144] [remote 172.26.25.159:58730] ERROR 
openstack_dashboard.api.rest.utils error invoking apiclient
  [Thu Jun 06 23:24:09.770306 2019] [wsgi:error] [pid 26848:tid 
140634143590144] [remote 172.26.25.159:58730] Traceback (most recent call last):
  [Thu Jun 06 23:24:09.770315 2019] [wsgi:error] [pid 26848:tid 
140634143590144] [remote 172.26.25.159:58730]   File 
"/usr/lib/python3/dist-packages/openstack_dashboard/api/rest/utils.py", line 
128, in _wrapped
  [Thu Jun 06 23:24:09.770322 2019] [wsgi:error] [pid 26848:tid 
140634143590144] [remote 172.26.25.159:58730] data = function(self, 
request, *args, **kw)
  [Thu Jun 06 23:24:09.770328 2019] [wsgi:error] [pid 26848:tid 
140634143590144] [remote 172.26.25.159:58730]   File 
"/usr/lib/python3/dist-packages/openstack_dashboard/api/rest/network.py", line 
70, in post
  [Thu Jun 06 23:24:09.770335 2019] [wsgi:error] [pid 26848:tid 
140634143590144] [remote 172.26.25.159:58730] None, params)
  [Thu Jun 06 23:24:09.770343 2019] [wsgi:error] [pid 26848:tid 
140634143590144] [remote 172.26.25.159:58730] TypeError: 
tenant_floating_ip_allocate() takes from 1 to 3 positional arguments but 4 were 
given
  [Thu Jun 06 23:24:09.770356 2019] [wsgi:error] [pid 26848:tid 
140634143590144] [remote 172.26.25.159:58730] 
  

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1831954/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1819568] [NEW] network_data.json doesn't contain information about floating IPs

2019-03-11 Thread Sam Morrison
Public bug reported:

I don't seem to be able to get floating IP information from the OpenStack
metadata network_data.json.
I can get this via the EC2 metadata, and it would be good if it was in the
OpenStack metadata too.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1819568

Title:
  network_data.json doesn't contain information about floating IPs

Status in OpenStack Compute (nova):
  New

Bug description:
  I don't seem to be able to get floating IP information from the
  OpenStack metadata network_data.json.
  I can get this via the EC2 metadata, and it would be good if it was in
  the OpenStack metadata too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1819568/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1803627] [NEW] Nova requires you to name your volumev3 service cinderv3

2018-11-15 Thread Sam Morrison
Public bug reported:

By default nova looks for the cinder endpoint by searching for a service
with type volumev3 that also has the name "cinderv3".

I think it should only be looking for an endpoint with a type of
volumev3.

The name attribute of an endpoint should be free for the operator to set.

In our keystone we name this "Volume API V3", which is friendly as it
also appears in horizon and in the CLI tools (cinderv3, on the other
hand, is not so friendly).

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1803627

Title:
  Nova requires you to name your volumev3 service cinderv3

Status in OpenStack Compute (nova):
  New

Bug description:
  By default nova looks for the cinder endpoint by searching for a
  service with type volumev3 that also has the name "cinderv3".

  I think it should only be looking for an endpoint with a type of
  volumev3.

  The name attribute of an endpoint should be free for the operator to
  set.

  In our keystone we name this "Volume API V3", which is friendly as it
  also appears in horizon and in the CLI tools (cinderv3, on the other
  hand, is not so friendly).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1803627/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1799113] [NEW] queens/pike compute rpcapi version mismatch

2018-10-21 Thread Sam Morrison
Public bug reported:

Doing a live upgrade from Pike to Queens, we noticed that resizes weren't
working.

In the Queens source it says the Pike version of the compute rpcapi is
4.18:

https://github.com/openstack/nova/blob/eae37a27caa5ca8b0ca50187928bde81f28a24e1/nova/compute/rpcapi.py#L361

Looking at the latest stable/pike, the latest version there is 4.17:

https://github.com/openstack/nova/blob/6ef30d5078595108c1c0f2b5c258ae6ef2db1eeb/nova/compute/rpcapi.py#L330

** Affects: nova
 Importance: Undecided
 Assignee: Sam Morrison (sorrison)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1799113

Title:
  queens/pike compute rpcapi version mismatch

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Doing a live upgrade from Pike to Queens, we noticed that resizes
  weren't working.

  In the Queens source it says the Pike version of the compute rpcapi is
  4.18:

  https://github.com/openstack/nova/blob/eae37a27caa5ca8b0ca50187928bde81f28a24e1/nova/compute/rpcapi.py#L361

  Looking at the latest stable/pike, the latest version there is 4.17:

  
https://github.com/openstack/nova/blob/6ef30d5078595108c1c0f2b5c258ae6ef2db1eeb/nova/compute/rpcapi.py#L330

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1799113/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1796599] [NEW] Listing all flavors is hard coded for admin

2018-10-07 Thread Sam Morrison
Public bug reported:

Currently, listing all flavors, including disabled and non-public
flavors, is hard coded to only allow context.is_admin.

This should be controlled by policy so operators can allow other roles
to list these types of flavors too.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1796599

Title:
  Listing all flavors is hard coded for admin

Status in OpenStack Compute (nova):
  New

Bug description:
  Currently, listing all flavors, including disabled and non-public
  flavors, is hard coded to only allow context.is_admin.

  This should be controlled by policy so operators can allow other roles
  to list these types of flavors too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1796599/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1792411] [NEW] Unable to create an image with community or shared visibility

2018-09-13 Thread Sam Morrison
Public bug reported:

Currently there is no way to create an image with a visibility of
community or shared. This has been supported in glance for a few releases
now.
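
For reference, this is what the dashboard cannot currently do; with
python-glanceclient (v2) it looks something like the following ('sess' is
assumed to be an existing keystoneauth1 session, and the image details
are placeholders):

    from glanceclient import Client

    glance = Client('2', session=sess)
    image = glance.images.create(name='my-image',
                                 disk_format='qcow2',
                                 container_format='bare',
                                 visibility='community')  # or 'shared'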

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1792411

Title:
  Unable to create an image with community or shared visibility

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Currently there is no way to create an image with a visibility of
  community or shared. This has been supported in glance for a few
  releases now.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1792411/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1790750] [NEW] Error in db archiver

2018-09-04 Thread Sam Morrison
Public bug reported:

Getting the following error when running the db archiver:

An error has occurred:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nova/cmd/manage.py", line 1691, in main
ret = fn(*fn_args, **fn_kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/cmd/manage.py", line 704, in 
archive_deleted_rows
run = db.archive_deleted_rows(max_rows, before=before_date)
  File "/usr/lib/python2.7/dist-packages/nova/db/api.py", line 2049, in 
archive_deleted_rows
return IMPL.archive_deleted_rows(max_rows=max_rows, before=before)
  File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 6577, 
in archive_deleted_rows
before=before)
  File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 6541, 
in _archive_deleted_rows_for_table
conn, limit, before)
  File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 6436, 
in _archive_if_instance_deleted
{'table': table.__tablename__,
AttributeError: 'Table' object has no attribute '__tablename__'
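
For context, SQLAlchemy Core Table objects expose .name rather than
__tablename__ (which belongs to declarative model classes); a quick
illustration of that distinction, not the nova fix itself:

    from sqlalchemy import Column, Integer, MetaData, Table

    table = Table('instances', MetaData(),
                  Column('id', Integer, primary_key=True))
    print(table.name)                        # 'instances'
    print(hasattr(table, '__tablename__'))   # False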

** Affects: nova
 Importance: Undecided
 Assignee: Sam Morrison (sorrison)
 Status: In Progress


** Tags: db nova-manage

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1790750

Title:
  Error in db archiver

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Getting the following error when running the db archiver:

  An error has occurred:
  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/nova/cmd/manage.py", line 1691, in 
main
  ret = fn(*fn_args, **fn_kwargs)
File "/usr/lib/python2.7/dist-packages/nova/cmd/manage.py", line 704, in 
archive_deleted_rows
  run = db.archive_deleted_rows(max_rows, before=before_date)
File "/usr/lib/python2.7/dist-packages/nova/db/api.py", line 2049, in 
archive_deleted_rows
  return IMPL.archive_deleted_rows(max_rows=max_rows, before=before)
File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 
6577, in archive_deleted_rows
  before=before)
File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 
6541, in _archive_deleted_rows_for_table
  conn, limit, before)
File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 
6436, in _archive_if_instance_deleted
  {'table': table.__tablename__,
  AttributeError: 'Table' object has no attribute '__tablename__'

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1790750/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1762880] [NEW] Launch instance availability-zone ordering

2018-04-10 Thread Sam Morrison
Public bug reported:

In the launch instance view the drop down list for selecting an
availability zone is in a random order. Would be good if this was sorted
alphabetically.

This is in Queens dashboard

** Affects: horizon
 Importance: Undecided
 Assignee: Sam Morrison (sorrison)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1762880

Title:
  Launch instance availability-zone ordering

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  In the launch instance view the drop down list for selecting an
  availability zone is in a random order. Would be good if this was
  sorted alphabetically.

  This is in Queens dashboard

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1762880/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1762879] [NEW] CREATE_INSTANCE_FLAVOR_SORT doesn't work

2018-04-10 Thread Sam Morrison
Public bug reported:

The setting CREATE_INSTANCE_FLAVOR_SORT appears to have no effect
anymore, possibly due to the change to the angular launch instance
version?

This is using the queens dashboard

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1762879

Title:
  CREATE_INSTANCE_FLAVOR_SORT doesn't work

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The setting CREATE_INSTANCE_FLAVOR_SORT appears to have no effect
  anymore, possibly due to the change to the angular launch instance
  version?

  This is using the queens dashboard

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1762879/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1761070] [NEW] iptables rules for linuxbridge ignore bridge_mappings

2018-04-04 Thread Sam Morrison
Public bug reported:

We have bridge_mappings set for the linuxbridge agent to use a
non-standard bridge naming convention.

This works everywhere apart from the setting of zone rules in iptables.

The code in neutron/agent/linux/iptables_firewall.py doesn't take the
mappings into account and just uses the default bridge name, which is
derived from the network ID.
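
A small sketch of the mismatch (illustrative only; the 'brq' + first 11
characters convention is the linuxbridge agent's default bridge naming,
and the mapping values below are placeholders):

    def default_bridge_name(network_id):
        # what the iptables code effectively assumes today
        return 'brq' + network_id[:11]

    def bridge_name_with_mappings(network_id, physnet, bridge_mappings):
        # what it would need to do to honour bridge_mappings
        return bridge_mappings.get(physnet, default_bridge_name(network_id))

    net_id = 'aaaabbbb-cccc-dddd-eeee-ffff00001111'
    print(default_bridge_name(net_id))                       # brqaaaabbbb-cc
    print(bridge_name_with_mappings(net_id, 'physnet1',
                                    {'physnet1': 'br-data'}))  # br-data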

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1761070

Title:
  iptables rules for linuxbridge ignore bridge_mappings

Status in neutron:
  New

Bug description:
  We have bridge_mappings set for the linuxbridge agent to use a
  non-standard bridge naming convention.

  This works everywhere apart from the setting of zone rules in
  iptables.

  The code in neutron/agent/linux/iptables_firewall.py doesn't take the
  mappings into account and just uses the default bridge name, which is
  derived from the network ID.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1761070/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1751692] [NEW] os_region_name an unnecessary required option for placement

2018-02-25 Thread Sam Morrison
Public bug reported:

When configuring the placement service for nova-compute, it is required
to set the region name for the placement service.

When talking to other services like neutron or cinder, specifying a
region name isn't required, and if you have just one region (possibly the
most common type of installation?) it will pick that.

It would be nice if we didn't have to specify the region for placement too.


The code is in nova/compute/manager.py:

    # NOTE(sbauza): We want the compute node to hard fail if it won't be
    # able to provide its resources to the placement API, or it will not
    # be able to be eligible as a destination.
    if CONF.placement.os_region_name is None:
        raise exception.PlacementNotConfigured()
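
For reference, the setting being discussed is the one nova-compute
currently requires in nova.conf (the region value here is just an
example):

    [placement]
    os_region_name = RegionOne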

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1751692

Title:
  os_region_name an unnecessary required option for placement

Status in OpenStack Compute (nova):
  New

Bug description:
  When configuring the placement service for nova-compute, it is
  required to set the region name for the placement service.

  When talking to other services like neutron or cinder, specifying a
  region name isn't required, and if you have just one region (possibly
  the most common type of installation?) it will pick that.

  It would be nice if we didn't have to specify the region for placement
  too.


  Code is in nova/compute/manager.py

  # NOTE(sbauza): We want the compute node to hard fail if it won't be
  # able to provide its resources to the placement API, or it will not
  # be able to be eligible as a destination.
  if CONF.placement.os_region_name is None:
      raise exception.PlacementNotConfigured()

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1751692/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1743480] [NEW] No way to differentiate between floating IP networks and external provider networks

2018-01-15 Thread Sam Morrison
Public bug reported:

We have a bunch of external shared provider networks that users can
attach a port to and get direct access to the Internet.

We also have a bunch of floating IP networks that users can use for
floating IPs.

The two types of networks are shared and external.


The issue is that our users can't develop code against our cloud to figure out 
what network to use for floating IPs.

Luckily for us our provider networks use a different
provider:network_type to our floating IP networks so they can do:

search_opts = {'provider:network_type': 'midonet', 'router:external': True}
client.list_networks(**search_opts).get('networks')

But this is not very nice and by no means portable to other OpenStack
clouds they use.

I think https://bugs.launchpad.net/neutron/+bug/1721305 is related to
this bug
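
For completeness, a runnable version of the workaround above might look
like this (assuming an existing keystoneauth session object named sess):

    from neutronclient.v2_0 import client as neutron_client

    client = neutron_client.Client(session=sess)
    search_opts = {'provider:network_type': 'midonet',
                   'router:external': True}
    floating_networks = client.list_networks(**search_opts).get('networks')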

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1743480

Title:
  No way to differentiate between floating IP networks and external
  provider networks

Status in neutron:
  New

Bug description:
  We have a bunch of external shared provider networks that users can
  attach a port to and get direct access to the Internet.

  We also have a bunch of floating IP networks that users can use for
  floating IPs.

  The two types of networks are shared and external.

  
  The issue is that our users can't develop code against our cloud to figure 
out what network to use for floating IPs.

  Luckily for us our provider networks use a different
  provider:network_type to our floating IP networks so they can do:

  search_opts = {'provider:network_type': 'midonet', 'router:external': True}
  client.list_networks(**search_opts).get('networks')

  But this is not very nice and by no means portable to other OpenStack
  clouds they use.

  I think https://bugs.launchpad.net/neutron/+bug/1721305 is related to
  this bug

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1743480/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1737050] [NEW] No way to allow non admins the ability to filter on attributes such as host

2017-12-07 Thread Sam Morrison
Public bug reported:

We have a special read_only role in keystone and have given that role
the ability to list all instances via the policy rule:
index:get_all_tenants.

It can't, however, list all instances on a specific host. I'm not sure
whether a new policy rule should be added or whether this should be
covered by the existing rule "index:get_all_tenants".

The offending code is in nova/api/openstack/compute/servers.py in the
remove_invalid_options method
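
To illustrate the behaviour (a paraphrase of how I read that method, not
the exact nova code): for non-admin callers any search option outside an
allow-list is silently dropped, so a policy rule alone can't expose 'host'.

    def remove_invalid_options(context, search_opts, allowed_for_non_admin):
        if context.is_admin:
            return search_opts
        return {k: v for k, v in search_opts.items()
                if k in allowed_for_non_admin}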

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1737050

Title:
  No way to allow non admins the ability to filter on attributes such as
  host

Status in OpenStack Compute (nova):
  New

Bug description:
  We have a special read_only role in keystone and have given that role
  the ability to list all instances via the policy rule:
  index:get_all_tenants.

  It can't, however, list all instances on a specific host. I'm not sure
  whether a new policy rule should be added or whether this should be
  covered by the existing rule "index:get_all_tenants".

  The offending code is in nova/api/openstack/compute/servers.py in the
  remove_invalid_options method

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1737050/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1736650] [NEW] linuxbridge manages non-linuxbridge ports

2017-12-05 Thread Sam Morrison
Public bug reported:

In our ML2 environment we have 2 mech drivers, linuxbridge and midonet.

We have linuxbridge and midonet networks bound to instances on the same
compute nodes. All works well except the midonet ports get marked as
DOWN. I've traced this back to the linuxbridge agent.

It seems to mark the midonet ports as DOWN. I can see the midonet port
IDs in the linuxbridge logs.

Steps to reproduce:

Config:

[ml2]
type_drivers=flat,midonet,uplink
path_mtu=0
tenant_network_types=midonet
mechanism_drivers=linuxbridge,midonet


Boot an instance with a midonet NIC; you will note the port is DOWN.
Stop the linuxbridge agent and repeat; the port will be ACTIVE.
Start the linuxbridge agent again and existing midonet ports will change to DOWN.


This is running stable/ocata

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1736650

Title:
  linuxbridge manages non-linuxbridge ports

Status in neutron:
  New

Bug description:
  In our ML2 environment we have 2 mech drivers, linuxbridge and
  midonet.

  We have linuxbridge and midonet networks bound to instances on the same
  compute nodes. All works well except the midonet ports get marked as
  DOWN. I've traced this back to the linuxbridge agent.

  It seems to mark the midonet ports as DOWN. I can see the midonet port
  IDs in the linuxbridge logs.

  Steps to reproduce:

  Config:

  [ml2]
  type_drivers=flat,midonet,uplink
  path_mtu=0
  tenant_network_types=midonet
  mechanism_drivers=linuxbridge,midonet

  
  Boot an instance with a midonet NIC; you will note the port is DOWN.
  Stop the linuxbridge agent and repeat; the port will be ACTIVE.
  Start the linuxbridge agent again and existing midonet ports will change to DOWN.

  
  This is running stable/ocata

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1736650/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1733747] Re: No way to find out which instances are using a security group

2017-11-22 Thread Sam Morrison
Have submitted an RFE for this at
https://bugs.launchpad.net/neutron/+bug/1734026

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1733747

Title:
  No way to find out which instances are using a security group

Status in neutron:
  Invalid

Bug description:
  I'm trying to figure out which instances are using a specific security
  group but it doesn't look possible via the API (unless I'm missing
  something).

  The only way to do this is by looking in the database and doing some
  sql on the securitygroupportbindings table.

  Is there another way?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1733747/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1734026] [NEW] [RFE] Add ability to see what devices use a certain security group

2017-11-22 Thread Sam Morrison
Public bug reported:

Given a security group ID I would like an API to determine which devices
(nova instances) use this security group.

Currently the only way to do this is by looking in the database and
doing some SQL on the securitygroupportbindings table.
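
For reference, the DB workaround boils down to something like this
(connection URL is illustrative; the table and column names are the ones
in the neutron schema mentioned above):

    from sqlalchemy import create_engine, text

    engine = create_engine('mysql+pymysql://neutron:secret@dbhost/neutron')
    with engine.connect() as conn:
        rows = conn.execute(
            text('SELECT port_id FROM securitygroupportbindings '
                 'WHERE security_group_id = :sg'),
            {'sg': 'SECURITY_GROUP_ID'})
        port_ids = [row[0] for row in rows]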

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1734026

Title:
  [RFE] Add ability to see what devices use a certain security group

Status in neutron:
  New

Bug description:
  Given a security group ID I would like an API to determine which
  devices (nova instances) use this security group.

  Currently the only way to do this is by looking in the database and
  doing some SQL on the securitygroupportbindings table.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1734026/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1729767] Re: Ocata upgrade, midonet port binding fails in mixed ml2 environment

2017-11-22 Thread Sam Morrison
OK, I've figured it out; very sorry, this is not a bug. In Newton we had
mech_driver set to midonet_ext, and in Ocata this is now just midonet
again, which is why everything was failing.

** Changed in: networking-midonet
   Status: New => Invalid

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1729767

Title:
  Ocata upgrade, midonet port binding fails in mixed ml2 environment

Status in networking-midonet:
  Invalid
Status in neutron:
  Invalid

Bug description:
  We have a mixed ML2 environment where some networks are linuxbridge and
  some are midonet. We can bind the two different port types to instances
  on the same compute nodes (and to the same instance too).

  Testing out Ocata this is now not working due to some extra checks that
  have been introduced. In the logs I get:

  
  2017-11-03 15:50:47.967 25598 DEBUG neutron.plugins.ml2.drivers.mech_agent 
[req-0779656d-919a-4517-9a5c-12b63d77fd02 ac2c53521bd74ab89acb7b705f2b49ff 
2f3e9e705b0b460b9de90d9844e88fd2 - - -] Checking agent: {'binary': 
u'neutron-linuxbridge-agent', 'description': None, 'admin_state_up': True, 
'heartbeat_timestamp': datetime.datetime(2017, 11, 3, 4, 50, 31), 
'availability_zone': None, 'alive': True, 'topic': u'N/A', 'host': u'cn2-qh2', 
'agent_type': u'Linux bridge agent', 'resource_versions': {u'SubPort': u'1.0', 
u'QosPolicy': u'1.3', u'Trunk': u'1.0'}, 'created_at': datetime.datetime(2017, 
8, 9, 3, 43, 15), 'started_at': datetime.datetime(2017, 11, 3, 3, 38, 31), 
'id': u'afba1ad9-b880-4943-aea9-7faae20f787a', 'configurations': 
{u'bridge_mappings': {}, u'interface_mappings': {u'other': u'bond0.3082'}, 
u'extensions': [], u'devices': 9}} bind_port 
/opt/neutron/neutron/plugins/ml2/drivers/mech_agent.py:105
  2017-11-03 15:50:47.967 25598 DEBUG neutron.plugins.ml2.drivers.mech_agent 
[req-0779656d-919a-4517-9a5c-12b63d77fd02 ac2c53521bd74ab89acb7b705f2b49ff 
2f3e9e705b0b460b9de90d9844e88fd2 - - -] Checking segment: {'segmentation_id': 
None, 'physical_network': None, 'id': u'4d716b78-d0b6-4503-b22d-1e42f7d5667a', 
'network_type': u'midonet'} for mappings: {u'other': u'bond0.3082'} with 
network types: ['local', 'flat', 'vlan'] check_segment_for_agent 
/opt/neutron/neutron/plugins/ml2/drivers/mech_agent.py:231
  2017-11-03 15:50:47.968 25598 DEBUG neutron.plugins.ml2.drivers.mech_agent 
[req-0779656d-919a-4517-9a5c-12b63d77fd02 ac2c53521bd74ab89acb7b705f2b49ff 
2f3e9e705b0b460b9de90d9844e88fd2 - - -] Network 
4d716b78-d0b6-4503-b22d-1e42f7d5667a is of type midonet but agent cn2-qh2 or 
mechanism driver only support ['local', 'flat', 'vlan']. 
check_segment_for_agent 
/opt/neutron/neutron/plugins/ml2/drivers/mech_agent.py:242
  2017-11-03 15:50:47.969 25598 ERROR neutron.plugins.ml2.managers 
[req-0779656d-919a-4517-9a5c-12b63d77fd02 ac2c53521bd74ab89acb7b705f2b49ff 
2f3e9e705b0b460b9de90d9844e88fd2 - - -] Failed to bind port 
1c68f72b-1405-43cd-b0d1-ba128f211f51 on host cn2-qh2 for vnic_type normal using 
segments [{'segmentation_id': None, 'physical_network': None, 'id': 
u'4d716b78-d0b6-4503-b22d-1e42f7d5667a', 'network_type': u'midonet'}]

  
  It seems that because there is a linuxbridge agent on the compute node,
  it thinks that is the only type of network that can be bound.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1729767/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1733747] Re: No way to find out which instances are using a security group

2017-11-22 Thread Sam Morrison
Sorry, what you are explaining is the reverse of what I want and doesn't
help. I have a security group ID and I want to know which instances have
that security group applied.

We have thousands of instances and querying each one to see if they have
the security group applied is very inefficient and time-consuming.

** Changed in: neutron
   Status: Opinion => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1733747

Title:
  No way to find out which instances are using a security group

Status in neutron:
  New

Bug description:
  I'm trying to figure out which instances are using a specific security
  group but it doesn't look possible via the API (unless I'm missing
  something).

  The only way to do this is by looking in the database and doing some
  sql on the securitygroupportbindings table.

  Is there another way?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1733747/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1733747] [NEW] No way to find out which instances are using a security group

2017-11-21 Thread Sam Morrison
Public bug reported:

I'm trying to figure out which instances are using a specific security
group but it doesn't look possible via the API (unless I'm missing
something).

The only way to do this is by looking in the database and doing some sql
on the securitygroupportbindings table.

Is there another way?

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1733747

Title:
  No way to find out which instances are using a security group

Status in neutron:
  New

Bug description:
  I'm trying to figure out which instances are using a specific security
  group but it doesn't look possible via the API (unless I'm missing
  something).

  The only way to do this is by looking in the database and doing some
  sql on the securitygroupportbindings table.

  Is there another way?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1733747/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1731361] [NEW] Can't change the domain of a project

2017-11-09 Thread Sam Morrison
Public bug reported:

I'm wanting to change the domain of a project but it doesn't look like
this is supported via the API.

Changing via some SQL in the DB works fine so would be great if this can
be achieved via the API.
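
The call I would expect to work (keystoneclient v3, assuming an existing
keystoneauth session named sess) is something like the one below, but the
domain change is not honoured:

    from keystoneclient.v3 import client

    keystone = client.Client(session=sess)
    keystone.projects.update('PROJECT_ID', domain='NEW_DOMAIN_ID')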

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1731361

Title:
  Can't change the domain of a project

Status in OpenStack Identity (keystone):
  New

Bug description:
  I'm wanting to change the domain of a project but it doesn't look like
  this is supported via the API.

  Changing via some SQL in the DB works fine so would be great if this
  can be achieved via the API.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1731361/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1729449] [NEW] Horizon keystone api doesn't support include_names for role assignment list

2017-11-01 Thread Sam Morrison
Public bug reported:

Since keystone API version 3.6 the role assignment list operation has
supported include_names see https://developer.openstack.org/api-ref/identity/v3/#what-s-new-in-version-3-6

I'm writing a third-party plugin and need to use this option. Instead of
having to replicate all of the keystone API in horizon, it would be good
if it could be supported natively.
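
The keystoneclient side already supports it; something like the following
is what the plugin needs to issue (assuming an existing keystoneauth
session named sess):

    from keystoneclient.v3 import client

    keystone = client.Client(session=sess)
    assignments = keystone.role_assignments.list(project='PROJECT_ID',
                                                 include_names=True)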

** Affects: horizon
 Importance: Undecided
 Assignee: Sam Morrison (sorrison)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1729449

Title:
  Horizon keystone api doesn't support include_names for role assignment
  list

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Since keystone API version 3.6 the role assignment list operation has
  supported include_names see https://developer.openstack.org/api-ref/identity/v3/#what-s-new-in-version-3-6

  I'm writing a third-party plugin and need to use this option. Instead of
  having to replicate all of the keystone API in horizon, it would be good
  if it could be supported natively.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1729449/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1729175] [NEW] Downloading ec2 creds doesn't work if nova is at newton or newer

2017-10-31 Thread Sam Morrison
Public bug reported:

In Newton nova removed the nova-cert process. This means downloading the
ec2 certificates doesn't work anymore. Getting the EC2 credentials from
keystone still works, so we just need to return only the EC2 creds now.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1729175

Title:
  Downloading ec2 creds doesn't work if nova is at newton or newer

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In Newton nova removed the nova-cert process. This means downloading
  the ec2 certificates doesn't work anymore. Getting the EC2 credentials
  from keystone still works, so we just need to return only the EC2 creds now.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1729175/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1687616] Re: KeyError 'options' while doing zero downtime upgrade from N to O

2017-10-30 Thread Sam Morrison
I have just done the N -> O upgrade and have seen this error.

We have done the expand and migrate db syncs.

We have 3 newton keystones and when I added an ocata one I saw this
issue on the ocata one.

It's happening on a POST to /v3/auth/tokens and is affecting about 3% of
requests (we have around 10 requests per second on our keystone).

Happy to provide more information.

Currently I have rolled back but am thinking this might just be an issue
during the transition so could bite the bullet and do it quickly.

** Changed in: keystone
   Status: Expired => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1687616

Title:
  KeyError 'options' while doing zero downtime upgrade from N to O

Status in OpenStack Identity (keystone):
  New

Bug description:
  I am trying to do a zero downtime upgrade from N release to O release
  following [1].

  I have 3 controller nodes running behind a HAProxy. Everytime, when I
  upgraded one of the keystone and bring it back to the cluster, it
  would encounter this error [2] when I tried to update a created user
  for about 5 minutes. After I brought back all the 3 upgraded keystone
  nodes, and 5 or more minutes later, this error will disappear and
  everything works fine.

  I am using the same conf file for both releases as shown in [3].

  [1]. https://docs.openstack.org/keystone/latest/admin/identity-upgrading.html
  [2]. http://paste.openstack.org/show/608557/
  [3]. http://paste.openstack.org/show/608558/

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1687616/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1715542] [NEW] Remote console doesn't work when using nova API V2.1

2017-09-06 Thread Sam Morrison
Public bug reported:

Upgraded horizon to Ocata and now remote consoles don't work.

This is due to horizon switching to using the new server-remote-consoles
API see:

https://developer.openstack.org/api-ref/compute/#server-remote-consoles

Also deprecation about the old way at

https://developer.openstack.org/api-ref/compute/#get-vnc-console-os-getvncconsole-action-deprecated


The error is masked as there is a dangerous catch-all exception (removing
it shows the following):


  File 
"/opt/horizon/openstack_dashboard/dashboards/project/instances/views.py", line 
312, in get_context_data
context = super(DetailView, self).get_context_data(**kwargs)
  File "/opt/horizon/horizon/tabs/views.py", line 56, in get_context_data
exceptions.handle(self.request)
  File "/opt/horizon/horizon/exceptions.py", line 354, in handle
six.reraise(exc_type, exc_value, exc_traceback)
  File "/opt/horizon/horizon/tabs/views.py", line 54, in get_context_data
context["tab_group"].load_tab_data()
  File "/opt/horizon/horizon/tabs/base.py", line 128, in load_tab_data
exceptions.handle(self.request)
  File "/opt/horizon/horizon/exceptions.py", line 354, in handle
six.reraise(exc_type, exc_value, exc_traceback)
  File "/opt/horizon/horizon/tabs/base.py", line 125, in load_tab_data
tab._data = tab.get_context_data(self.request)
  File "/opt/horizon/openstack_dashboard/dashboards/project/instances/tabs.py", 
line 74, in get_context_data
request, console_type, instance)
  File 
"/opt/horizon/openstack_dashboard/dashboards/project/instances/console.py", 
line 53, in get_console
console = api_call(request, instance.id)
  File "/opt/horizon/openstack_dashboard/api/nova.py", line 504, in 
server_vnc_console
instance_id, console_type)['console'])
KeyError: 'console'
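
As far as I can tell the old os-getVNCConsole action wraps the result in
'console' while the newer remote-consoles API wraps it in 'remote_console',
so a tolerant helper would look something like this (sketch only, not the
horizon code):

    def extract_console(body):
        payload = body.get('console') or body.get('remote_console')
        if payload is None:
            raise KeyError('console')
        return payload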

** Affects: horizon
 Importance: Undecided
 Assignee: Sam Morrison (sorrison)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1715542

Title:
  Remote console doesn't work when using nova API V2.1

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Upgraded horizon to Ocata and now remote consoles don't work.

  This is due to horizon switching to using the new server-remote-
  consoles API see:

  https://developer.openstack.org/api-ref/compute/#server-remote-consoles

  Also deprecation about the old way at

  https://developer.openstack.org/api-ref/compute/#get-vnc-console-os-getvncconsole-action-deprecated

  
  The error is masked as there is a dangerous catch-all exception
  (removing it shows the following):

  
File 
"/opt/horizon/openstack_dashboard/dashboards/project/instances/views.py", line 
312, in get_context_data
  context = super(DetailView, self).get_context_data(**kwargs)
File "/opt/horizon/horizon/tabs/views.py", line 56, in get_context_data
  exceptions.handle(self.request)
File "/opt/horizon/horizon/exceptions.py", line 354, in handle
  six.reraise(exc_type, exc_value, exc_traceback)
File "/opt/horizon/horizon/tabs/views.py", line 54, in get_context_data
  context["tab_group"].load_tab_data()
File "/opt/horizon/horizon/tabs/base.py", line 128, in load_tab_data
  exceptions.handle(self.request)
File "/opt/horizon/horizon/exceptions.py", line 354, in handle
  six.reraise(exc_type, exc_value, exc_traceback)
File "/opt/horizon/horizon/tabs/base.py", line 125, in load_tab_data
  tab._data = tab.get_context_data(self.request)
File 
"/opt/horizon/openstack_dashboard/dashboards/project/instances/tabs.py", line 
74, in get_context_data
  request, console_type, instance)
File 
"/opt/horizon/openstack_dashboard/dashboards/project/instances/console.py", 
line 53, in get_console
  console = api_call(request, instance.id)
File "/opt/horizon/openstack_dashboard/api/nova.py", line 504, in 
server_vnc_console
  instance_id, console_type)['console'])
  KeyError: 'console'

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1715542/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1715525] [NEW] v3 openrc file is missing PROJECT_DOMAIN_NAME

2017-09-06 Thread Sam Morrison
Public bug reported:

The generated v3 openrc file doesn't specify either PROJECT_DOMAIN_ID or
PROJECT_DOMAIN_NAME.


If your project isn't in the default domain then this openrc file won't work 
for you.

** Affects: horizon
 Importance: Undecided
 Assignee: Sam Morrison (sorrison)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1715525

Title:
  v3 openrc file is missing PROJECT_DOMAIN_NAME

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The generated v3 openrc file doesn't specify either PROJECT_DOMAIN_ID
  or PROJECT_DOMAIN_NAME.

  
  If your project isn't in the default domain then this openrc file won't work 
for you.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1715525/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1689888] [NEW] /v3/users is disproportionately slow

2017-05-10 Thread Sam Morrison
Public bug reported:

We have 11,000 users, doing a `client.users.list()` takes around 14-20
seconds

We have 14,000 projects and doing a `client.projects.list()` takes
around 7-10 seconds.

So you can see we have more projects however it takes about double the
time to list users.

I should mention we are using mitaka and our keystone is using apache

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1689888

Title:
  /v3/users is disproportionately slow

Status in OpenStack Identity (keystone):
  New

Bug description:
  We have 11,000 users, doing a `client.users.list()` takes around 14-20
  seconds

  We have 14,000 projects and doing a `client.projects.list()` takes
  around 7-10 seconds.

  So you can see we have more projects however it takes about double the
  time to list users.

  I should mention we are using mitaka and our keystone is using apache

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1689888/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1555384] Re: ML2: routers and multiple mechanism drivers

2017-01-31 Thread Sam Morrison
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1555384

Title:
  ML2: routers and multiple mechanism drivers

Status in networking-midonet:
  Fix Released
Status in neutron:
  New

Bug description:
  We have an ML2 environment with linuxbridge and midonet networks. For
  L3 we use the midonet driver.

  If a user tries to bind a linuxbridge port to a midonet router it
  returns the following error:

  {"message":"There is no NeutronPort with ID eddb3d08-97f6-480d-
  a1a4-9dfe3d22e62c.","code":404}

  However the port is created (user can't see it)  and doing a router
  show shows the router as having that interface.

  Ideally this should not be allowed and should error out gracefully.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1555384/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1655504] Re: pagination and href bookmarks are wrong when using https

2017-01-10 Thread Sam Morrison
Also noted this affects nova too, pagination works but href links to
things like flavours are returned as http links not https

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1655504

Title:
  pagination and href bookmarks are wrong when using https

Status in Cinder:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  We have an SSL LB in front of our cinder-api service.

  I have set 
  public_endpoint = https://cinder.rc.nectar.org.au:8776/
  in cinder.conf

  
  When doing a cinder --debug show I can see 

  curl -g -i -X GET
  
https://cinder.rc.nectar.org.au:8776/v2/1/volumes/1f9366e9-8080-4a41-9c94-e4c3a73abbc5

  In the response I see:

  {"href":
  
"http://cinder.rc.nectar.org.au:8776/1/volumes/1f9366e9-8080-4a41-9c94-e4c3a73abbc5;,
  "rel": "bookmark"}

  Note it is http not https.

  This also breaks pagination:

  cinder list --all-tenants
  ERROR: Unable to establish connection to 
http://cinder.rc.nectar.org.au:8776/v2/1/volumes/detail?all_tenants=1=1f9366e9-8080-4a41-9c94-e4c3a73abbc4

  
  Server version is stable/mitaka
  Client version is 1.6.0

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1655504/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1649183] [NEW] image category buttons don't appear in newton

2016-12-11 Thread Sam Morrison
Public bug reported:

Just installed the Newton version of the openstack dashboard and when
listing images the buttons to filter by category no longer appear.


I can see the option IMAGES_LIST_FILTER_TENANTS appears at:
openstack_dashboard/dashboards/project/images/images/tables.py

So it looks as if this is a bug as opposed to the removal of this great
feature?
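
For reference, this is the kind of setting we rely on (tenant id and labels
are illustrative):

    # local_settings.py
    IMAGES_LIST_FILTER_TENANTS = [
        {'text': 'Official', 'tenant': 'PROJECT_ID', 'icon': 'fa-check'},
    ]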

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1649183

Title:
  image category buttons don't appear in newton

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Just installed the Newton version of the openstack dashboard and when
  listing images the buttons to filter by category no longer appear.

  
  I can see the option IMAGES_LIST_FILTER_TENANTS appears at:
  openstack_dashboard/dashboards/project/images/images/tables.py

  So it looks as if this is a bug as opposed to the removal of this
  great feature?

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1649183/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1648643] [NEW] nova-api-metadata ignores firewall driver

2016-12-08 Thread Sam Morrison
Public bug reported:

In my nova.conf I have

firewall_driver = nova.virt.firewall.NoopFirewallDriver

When I start nova-api-metadata it installs some iptables rules (and
blows away what is already there)

I want to make it not manage any iptables rules by using the noop driver
however it has no effect on nova-api-metadata.

I'm using stable/mitaka although a look at the code in master would
indicate this affects master too.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1648643

Title:
  nova-api-metadata ignores firewall driver

Status in OpenStack Compute (nova):
  New

Bug description:
  In my nova.conf I have

  firewall_driver = nova.virt.firewall.NoopFirewallDriver

  When I start nova-api-metadata it installs some iptables rules (and
  blows away what is already there)

  I want to make it not manage any iptables rules by using the noop
  driver however it has no effect on nova-api-metadata.

  I'm using stable/mitaka although a look at the code in master would
  indicate this affects master too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1648643/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585515] Re: Paramiko doesn't work with Nova

2016-11-13 Thread Sam Morrison
This affects mitaka, not sure how to make it say that in launchpad

** Changed in: nova
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1585515

Title:
  Paramiko doesn't work with Nova

Status in OpenStack Compute (nova):
  New

Bug description:
  It looks like Paramiko 2.0.0 again breaks nova which currently has a
  requirement for 'paramiko>=1.16.0 # LGPL'.

  
nova.tests.unit.api.openstack.compute.test_keypairs.KeypairsTestV210.test_keypair_create_duplicate
  
--

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "nova/tests/unit/api/openstack/compute/test_keypairs.py", line 
237, in test_keypair_create_duplicate
  self.controller.create, self.req, body=body)
File 
"/root/upstream/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 485, in assertRaises
  self.assertThat(our_callable, matcher)
File 
"/root/upstream/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 496, in assertThat
  mismatch_error = self._matchHelper(matchee, matcher, message, verbose)
File 
"/root/upstream/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 547, in _matchHelper
  mismatch = matcher.match(matchee)
File 
"/root/upstream/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py",
 line 108, in match
  mismatch = self.exception_matcher.match(exc_info)
File 
"/root/upstream/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_higherorder.py",
 line 62, in match
  mismatch = matcher.match(matchee)
File 
"/root/upstream/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 475, in match
  reraise(*matchee)
File 
"/root/upstream/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py",
 line 101, in match
  result = matchee()
File 
"/root/upstream/nova/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 1049, in __call__
  return self._callable_object(*self._args, **self._kwargs)
File "nova/api/openstack/wsgi.py", line 961, in version_select
  return func.func(self, *args, **kwargs)
File "nova/api/openstack/extensions.py", line 504, in wrapped
  raise webob.exc.HTTPInternalServerError(explanation=msg)
  webob.exc.HTTPInternalServerError: Unexpected API Error. Please report 
this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
  
  

  Captured pythonlogging:
  ~~~
  2016-05-25 09:55:14,571 INFO [nova.api.openstack] Loaded extensions: 
['os-keypairs', 'servers']
  2016-05-25 09:55:16,314 ERROR [nova.api.openstack.extensions] Unexpected 
exception in API method
  Traceback (most recent call last):
File "nova/api/openstack/extensions.py", line 478, in wrapped
  return f(*args, **kwargs)
File "nova/api/validation/__init__.py", line 73, in wrapper
  return func(*args, **kwargs)
File "nova/api/openstack/compute/keypairs.py", line 72, in create
  return self._create(req, body, type=True, user_id=user_id)
File "nova/api/openstack/compute/keypairs.py", line 132, in _create
  context, user_id, name, key_type)
File "nova/exception.py", line 110, in wrapped
  payload)
File 
"/root/upstream/nova/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 221, in __exit__
  self.force_reraise()
File 
"/root/upstream/nova/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 197, in force_reraise
  six.reraise(self.type_, self.value, self.tb)
File "nova/exception.py", line 89, in wrapped
  return f(self, context, *args, **kw)
File "nova/compute/api.py", line 4040, in create_key_pair
  user_id, key_type)
File "nova/compute/api.py", line 4062, in _generate_key_pair
  return crypto.generate_key_pair()
File "nova/crypto.py", line 152, in generate_key_pair
  key = generate_key(bits)
File "nova/crypto.py", line 144, in generate_key
  key = paramiko.RSAKey(vals=(rsa.e, rsa.n))
  TypeError: __init__() got an unexpected keyword argument 'vals'
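
  For what it's worth, under paramiko 2.x a keypair can still be generated
  without the removed constructor arguments, e.g. (sketch only, not
  necessarily how nova should fix it):

      import paramiko

      key = paramiko.RSAKey.generate(2048)
      public_key = '%s %s' % (key.get_name(), key.get_base64())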

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1585515/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1615582] Re: Error: Unable to create the server. Unexpected API Error.

2016-11-13 Thread Sam Morrison
Reopening this bug, going through an upgrade from liberty -> mitaka and
getting this bug

** Changed in: nova
   Status: Expired => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1615582

Title:
  Error: Unable to create the server. Unexpected API Error.

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  After upgrading fresh OpenStack Liberty installation to Mitaka on Trusty with 
Juju, I am no longer able to create Nova instances.

  Steps to reproduce
  ==
  * Install OpenStack Liberty with Juju from the attached "bundle.yaml" file.
  * Upgrade OpenStack Liberty to Mitaka by executing the following commands:

  juju upgrade-charm keystone
  juju upgrade-charm ceilometer
  juju upgrade-charm ceilometer-agent
  juju upgrade-charm ceph
  juju upgrade-charm ceph-osd
  juju upgrade-charm cinder
  juju upgrade-charm glance
  juju upgrade-charm nova-cloud-controller
  juju upgrade-charm nova-compute
  juju upgrade-charm neutron-api
  juju upgrade-charm neutron-gateway
  juju upgrade-charm openstack-dashboard
  juju set-config ceph source="cloud:trusty-mitaka"
  juju set-config ceph-osd source="cloud:trusty-mitaka"
  juju set-config keystone openstack-origin="cloud:trusty-mitaka"
  juju ssh keystone/0 sudo keystone-manage db_sync
  juju set-config ceilometer openstack-origin="cloud:trusty-mitaka"
  juju set-config ceilometer-agent openstack-origin="cloud:trusty-mitaka"
  juju set-config cinder openstack-origin="cloud:trusty-mitaka"
  juju ssh cinder/0 sudo cinder-manage db sync
  juju set-config glance openstack-origin="cloud:trusty-mitaka"
  juju ssh glance/0 sudo glance-manage db_sync
  juju set-config nova-cloud-controller openstack-origin="cloud:trusty-mitaka"
  juju ssh nova-cloud-controller/0 sudo nova-manage db sync
  juju set-config nova-compute openstack-origin="cloud:trusty-mitaka"
  juju set-config neutron-api openstack-origin="cloud:trusty-mitaka"
  juju ssh neutron-api/0 sudo neutron-db-manage upgrade heads
  juju set-config neutron-gateway openstack-origin="cloud:trusty-mitaka"
  juju set-config openstack-dashboard openstack-origin="cloud:trusty-mitaka"
  juju ssh ceph/0 sudo reboot
  juju ssh ceph/1 sudo reboot
  juju ssh ceph/2 sudo reboot
  juju ssh ceph-osd/0 sudo service ceph restart
  juju ssh ceph-osd/1 sudo service ceph restart
  juju ssh ceph-osd/2 sudo service ceph restart
  juju ssh keystone/0 sudo reboot
  juju ssh keystone/1 sudo reboot
  juju ssh keystone/2 sudo reboot
  juju ssh ceilometer/0 sudo reboot
  juju ssh ceilometer/1 sudo reboot
  juju ssh ceilometer/2 sudo reboot
  juju ssh ceilometer-agent/0 sudo service ceilometer-agent-compute restart
  juju ssh ceilometer-agent/1 sudo service ceilometer-agent-compute restart
  juju ssh ceilometer-agent/2 sudo service ceilometer-agent-compute restart
  juju ssh cinder/0 sudo reboot
  juju ssh cinder/1 sudo reboot
  juju ssh cinder/2 sudo reboot
  juju ssh glance/0 sudo reboot
  juju ssh glance/1 sudo reboot
  juju ssh glance/2 sudo reboot
  juju ssh nova-cloud-controller/0 sudo reboot
  juju ssh nova-cloud-controller/1 sudo reboot
  juju ssh nova-cloud-controller/2 sudo reboot
  juju ssh nova-compute/0 sudo service nova-compute restart
  juju ssh nova-compute/1 sudo service nova-compute restart
  juju ssh nova-compute/2 sudo service nova-compute restart
  juju ssh neutron-api/0 sudo reboot
  juju ssh neutron-api/1 sudo reboot
  juju ssh neutron-api/2 sudo reboot
  juju ssh neutron-gateway/0 sudo service neutron-dhcp-agent restart
  juju ssh neutron-gateway/0 sudo service neutron-lbaas-agent restart
  juju ssh neutron-gateway/0 sudo service neutron-metadata-agent restart
  juju ssh neutron-gateway/0 sudo service neutron-metering-agent restart
  juju ssh neutron-gateway/0 sudo service neutron-openvswitch-agent restart
  juju ssh neutron-gateway/0 sudo service neutron-vpn-agent restart
  juju ssh nova-compute/0 sudo service neutron-openvswitch-agent restart
  juju ssh nova-compute/1 sudo service neutron-openvswitch-agent restart
  juju ssh nova-compute/2 sudo service neutron-openvswitch-agent restart
  juju ssh openstack-dashboard/0 sudo reboot
  juju ssh openstack-dashboard/1 sudo reboot
  juju ssh openstack-dashboard/2 sudo reboot

  * Attempt to create Nova instance.

  Expected result
  ===
  Nova instance being created.

  Actual result
  =
  * Nova instance not being created.
  * The following error messages being displayed:
  ** from GUI:

     Error: Unable to create the server.

  ** from CLI:

     Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
      (HTTP 500) (Request-ID: 
req-c035d518-50c0-4cab-913e-9a5263392a2a)

  Environment
  ===
  1. Exact version of OpenStack you are running.

  ubuntu@tkurek-maas:~$ juju ssh 

[Yahoo-eng-team] [Bug 1624973] [NEW] Not possible to allow non "admin" user to change owner on image

2016-09-18 Thread Sam Morrison
Public bug reported:

I want to allow a special role to update the owner attribute of an
image.

It looks as if this action is hard-coded to only allow context
"is_admin" to do this operation.

This should be configurable via policy

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1624973

Title:
  Not possible to allow non "admin" user to change owner on image

Status in Glance:
  New

Bug description:
  I want to allow a special role to update the owner attribute of an
  image.

  It looks as if this action is hard-coded to only allow context
  "is_admin" to do this operation.

  This should be configurable via policy

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1624973/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1607602] [NEW] policy.json ignored for most instance actions

2016-07-28 Thread Sam Morrison
Public bug reported:

I'm trying to allow a certain role to do certain things to any project's
instances through policy.json and it isn't working as expected.

I've set the following policies to allow my role to do a "nova show" but
with no luck; the same goes for any other instance action like start,
reboot, etc.


"compute:get": "rule:default_or_monitoring",
"compute:get_all": "rule:default_or_monitoring",
"compute:get_all_tenants": "rule:admin_or_monitoring",
"os_compute_api:servers:detail:get_all_tenants": "rule:admin_or_monitoring",
"os_compute_api:servers:index:get_all_tenants": "rule:admin_or_monitoring",
"os_compute_api:servers:detail": "rule:default_or_monitoring",
"os_compute_api:servers:index": "rule:default_or_monitoring",
"os_compute_api:servers:show": "rule:default_or_monitoring",

Upon looking in the code I see that in the DB layer the instance_get
function is hard-coded to filter by project if the context isn't admin;
see HEAD (as of writing):

https://github.com/openstack/nova/blob/d0905df10a48212950c0854597a2df923e6ddd0c/nova/db/sqlalchemy/api.py#L1885

If I remove this project=True flag then everything works as expected.

Nova api otherwise just returns a 404
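
To illustrate what I mean (a paraphrase, not the actual nova code): the
project filter is applied unconditionally for non-admin contexts, before
policy is ever consulted, so the 404 is produced regardless of policy.json.

    def scoped_filters(context, filters):
        scoped = dict(filters)
        if not context.is_admin:
            scoped['project_id'] = context.project_id
        return scoped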

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1607602

Title:
  policy.json ignored for most instance actions

Status in OpenStack Compute (nova):
  New

Bug description:
  I'm trying to allow a certain role to do certain things to any
  project's instances through policy.json and it isn't working as
  expected.

  I've set the following policies to allow my role to do a "nova show"
  but with no luck; the same goes for any other instance action like
  start, reboot, etc.

  
  "compute:get": "rule:default_or_monitoring",
  "compute:get_all": "rule:default_or_monitoring",
  "compute:get_all_tenants": "rule:admin_or_monitoring",
  "os_compute_api:servers:detail:get_all_tenants": "rule:admin_or_monitoring",
  "os_compute_api:servers:index:get_all_tenants": "rule:admin_or_monitoring",
  "os_compute_api:servers:detail": "rule:default_or_monitoring",
  "os_compute_api:servers:index": "rule:default_or_monitoring",
  "os_compute_api:servers:show": "rule:default_or_monitoring",

  Upon looking in the code I see that in the DB layer the instance_get
  function is hard-coded to filter by project if the context isn't admin;
  see HEAD (as of writing):

  
https://github.com/openstack/nova/blob/d0905df10a48212950c0854597a2df923e6ddd0c/nova/db/sqlalchemy/api.py#L1885

  If I remove this project=True flag then everything works as expected.

  Nova api otherwise just returns a 404

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1607602/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1606426] [NEW] Upgrading to Mitaka causes significant slowdown on user-list

2016-07-25 Thread Sam Morrison
Public bug reported:

With Kilo doing a user-list on V2 or V3 would take approx. 2-4 seconds

In Mitaka it takes 19-22 seconds. This is a significant slowdown.

We have ~9,000 users

We also changed from running under eventlet to running under Apache wsgi

We have ~10,000 projects and this API (project-list) hasn't slowed down,
so I think this is something specific to the user-list API

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1606426

Title:
  Upgrading to Mitaka causes significant slowdown on user-list

Status in OpenStack Identity (keystone):
  New

Bug description:
  With Kilo doing a user-list on V2 or V3 would take approx. 2-4 seconds

  In Mitaka it takes 19-22 seconds. This is a significant slowdown.

  We have ~9,000 users

  We also changed from running under eventlet to running under Apache wsgi

  We have ~10,000 projects and this API (project-list) hasn't slowed down,
  so I think this is something specific to the user-list API

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1606426/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1579604] [NEW] project delete returns 501 NotImplemented

2016-05-08 Thread Sam Morrison
Public bug reported:

Have upgraded to Mitaka and getting a 501 when deleting a project. This
happens in both v2 and v3 api. The project actually deletes.

Am using stable/mitaka branch and the sql backend


$ keystone tenant-create --name deleteme

+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |                                  |
|   enabled   |               True               |
|      id     | 5fafe2512fb3404ead999c30a23d0107 |
|     name    |             deleteme             |
+-------------+----------------------------------+


$ keystone tenant-delete 5fafe2512fb3404ead999c30a23d0107

The action you have requested has not been implemented. (HTTP 501)
(Request-ID: req-7ad5ee51-539f-4780-a39a-0f4e9ad092dc)


$ keystone tenant-get 5fafe2512fb3404ead999c30a23d0107

No tenant with a name or ID of '5fafe2512fb3404ead999c30a23d0107'
exists.


In logs:

2016-05-09 12:06:40.265 16723 WARNING keystone.common.wsgi 
[req-7ad5ee51-539f-4780-a39a-0f4e9ad092dc c0645ff94b864d3d84c438d9855f9cea 
9427903ca1544f0795ba4117d55ed9b2 - default default] The action you have 
requested has not been implemented.
2016-05-09 12:06:40.269 16723 INFO eventlet.wsgi.server 
[req-7ad5ee51-539f-4780-a39a-0f4e9ad092dc c0645ff94b864d3d84c438d9855f9cea 
9427903ca1544f0795ba4117d55ed9b2 - default default] 128.250.116.173 - - 
[09/May/2016 12:06:40] "DELETE /v2.0/tenants/5fafe2512fb3404ead999c30a23d0107 
HTTP/1.1" 501 354 0.223312

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1579604

Title:
  project delete returns 501 NotImplemented

Status in OpenStack Identity (keystone):
  New

Bug description:
  Have upgraded to Mitaka and getting a 501 when deleting a project.
  This happens in both v2 and v3 api. The project actually deletes.

  Am using stable/mitaka branch and the sql backend


  
  $ keystone tenant-create --name deleteme

  +-------------+----------------------------------+
  |   Property  |              Value               |
  +-------------+----------------------------------+
  | description |                                  |
  |   enabled   |               True               |
  |      id     | 5fafe2512fb3404ead999c30a23d0107 |
  |     name    |             deleteme             |
  +-------------+----------------------------------+

  
  $ keystone tenant-delete 5fafe2512fb3404ead999c30a23d0107

  The action you have requested has not been implemented. (HTTP 501)
  (Request-ID: req-7ad5ee51-539f-4780-a39a-0f4e9ad092dc)

  
  $ keystone tenant-get 5fafe2512fb3404ead999c30a23d0107

  No tenant with a name or ID of '5fafe2512fb3404ead999c30a23d0107'
  exists.



  In logs:

  2016-05-09 12:06:40.265 16723 WARNING keystone.common.wsgi 
[req-7ad5ee51-539f-4780-a39a-0f4e9ad092dc c0645ff94b864d3d84c438d9855f9cea 
9427903ca1544f0795ba4117d55ed9b2 - default default] The action you have 
requested has not been implemented.
  2016-05-09 12:06:40.269 16723 INFO eventlet.wsgi.server 
[req-7ad5ee51-539f-4780-a39a-0f4e9ad092dc c0645ff94b864d3d84c438d9855f9cea 
9427903ca1544f0795ba4117d55ed9b2 - default default] 128.250.116.173 - - 
[09/May/2016 12:06:40] "DELETE /v2.0/tenants/5fafe2512fb3404ead999c30a23d0107 
HTTP/1.1" 501 354 0.223312

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1579604/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1576425] [NEW] Security group changes can trigger thousands of messages

2016-04-28 Thread Sam Morrison
Public bug reported:

There seems to be a scale issue with security groups and large numbers
of agents.

Users spinning up lots of instances in the same security group with
source group rules can trigger orders of magnitude more messages in
rabbit than normal operation.

Possibly this can be done more efficiently.

I'm using Liberty

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1576425

Title:
  Security group changes can trigger thousands of messages

Status in neutron:
  New

Bug description:
  There seems to be a scale issue with security groups and large numbers
  of agents.

  Users spinning up lots of instances in the same security group with
  source group rules can trigger orders of magnitude more messages in
  rabbit than normal operation.

  Possibly this can be done more efficiently.

  I'm using Liberty

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1576425/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1576416] [NEW] Lots of agents all talking to one rabbit doesn't scale

2016-04-28 Thread Sam Morrison
Public bug reported:

Having thousands of agents all talking to the same rabbit is hard to
manage

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1576416

Title:
  Lots of agents all talking to one rabbit doesn't scale

Status in neutron:
  New

Bug description:
  Having thousands of agents all talking to the same rabbit is hard to
  manage

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1576416/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1572341] [NEW] Failed migration 90 -> 91 Can't DROP 'ixu_user_name_domain_id'

2016-04-19 Thread Sam Morrison
Public bug reported:

I get the following error running the DB migration when upgrading from kilo
-> mitaka:

2016-04-20 09:31:37.560 10471 INFO migrate.versioning.api [-] 90 -> 91... 
2016-04-20 09:31:37.822 10471 CRITICAL keystone [-] OperationalError: 
(_mysql_exceptions.OperationalError) (1091, "Can't DROP 
'ixu_user_name_domain_id'; check that column/key exists") [SQL: u'ALTER TABLE 
user DROP INDEX ixu_user_name_domain_id']
2016-04-20 09:31:37.822 10471 ERROR keystone Traceback (most recent call last):
2016-04-20 09:31:37.822 10471 ERROR keystone   File "/usr/bin/keystone-manage", 
line 10, in <module>
2016-04-20 09:31:37.822 10471 ERROR keystone sys.exit(main())
2016-04-20 09:31:37.822 10471 ERROR keystone   File 
"/opt/keystone/keystone/cmd/manage.py", line 47, in main
2016-04-20 09:31:37.822 10471 ERROR keystone cli.main(argv=sys.argv, 
config_files=config_files)
2016-04-20 09:31:37.822 10471 ERROR keystone   File 
"/opt/keystone/keystone/cmd/cli.py", line 992, in main
2016-04-20 09:31:37.822 10471 ERROR keystone CONF.command.cmd_class.main()
2016-04-20 09:31:37.822 10471 ERROR keystone   File 
"/opt/keystone/keystone/cmd/cli.py", line 371, in main
2016-04-20 09:31:37.822 10471 ERROR keystone 
migration_helpers.sync_database_to_version(extension, version)
2016-04-20 09:31:37.822 10471 ERROR keystone   File 
"/opt/keystone/keystone/common/sql/migration_helpers.py", line 210, in 
sync_database_to_version
2016-04-20 09:31:37.822 10471 ERROR keystone _sync_common_repo(version)
2016-04-20 09:31:37.822 10471 ERROR keystone   File 
"/opt/keystone/keystone/common/sql/migration_helpers.py", line 136, in 
_sync_common_repo
2016-04-20 09:31:37.822 10471 ERROR keystone init_version=init_version, 
sanity_check=False)
2016-04-20 09:31:37.822 10471 ERROR keystone   File 
"/opt/mitaka/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/migration.py",
 line 79, in db_sync
2016-04-20 09:31:37.822 10471 ERROR keystone migration = 
versioning_api.upgrade(engine, repository, version)
2016-04-20 09:31:37.822 10471 ERROR keystone   File 
"/opt/mitaka/local/lib/python2.7/site-packages/migrate/versioning/api.py", line 
186, in upgrade
2016-04-20 09:31:37.822 10471 ERROR keystone return _migrate(url, 
repository, version, upgrade=True, err=err, **opts)
2016-04-20 09:31:37.822 10471 ERROR keystone   File "<string>", line 
2, in _migrate
2016-04-20 09:31:37.822 10471 ERROR keystone   File 
"/opt/mitaka/local/lib/python2.7/site-packages/migrate/versioning/util/__init__.py",
 line 160, in with_engine
2016-04-20 09:31:37.822 10471 ERROR keystone return f(*a, **kw)
2016-04-20 09:31:37.822 10471 ERROR keystone   File 
"/opt/mitaka/local/lib/python2.7/site-packages/migrate/versioning/api.py", line 
366, in _migrate
2016-04-20 09:31:37.822 10471 ERROR keystone schema.runchange(ver, change, 
changeset.step)
2016-04-20 09:31:37.822 10471 ERROR keystone   File 
"/opt/mitaka/local/lib/python2.7/site-packages/migrate/versioning/schema.py", 
line 93, in runchange
2016-04-20 09:31:37.822 10471 ERROR keystone change.run(self.engine, step)
2016-04-20 09:31:37.822 10471 ERROR keystone   File 
"/opt/mitaka/local/lib/python2.7/site-packages/migrate/versioning/script/py.py",
 line 148, in run
2016-04-20 09:31:37.822 10471 ERROR keystone script_func(engine)
2016-04-20 09:31:37.822 10471 ERROR keystone   File 
"/opt/keystone/keystone/common/sql/migrate_repo/versions/091_migrate_data_to_local_user_and_password_tables.py",
 line 61, in upgrade
2016-04-20 09:31:37.822 10471 ERROR keystone 
name='ixu_user_name_domain_id').drop()
2016-04-20 09:31:37.822 10471 ERROR keystone   File 
"/opt/mitaka/local/lib/python2.7/site-packages/migrate/changeset/constraint.py",
 line 59, in drop
2016-04-20 09:31:37.822 10471 ERROR keystone 
self.__do_imports('constraintdropper', *a, **kw)
2016-04-20 09:31:37.822 10471 ERROR keystone   File 
"/opt/mitaka/local/lib/python2.7/site-packages/migrate/changeset/constraint.py",
 line 32, in __do_imports
2016-04-20 09:31:37.822 10471 ERROR keystone run_single_visitor(engine, 
visitorcallable, self, *a, **kw)
2016-04-20 09:31:37.822 10471 ERROR keystone   File 
"/opt/mitaka/local/lib/python2.7/site-packages/migrate/changeset/databases/visitor.py",
 line 85, in run_single_visitor
2016-04-20 09:31:37.822 10471 ERROR keystone fn(element, **kwargs)
2016-04-20 09:31:37.822 10471 ERROR keystone   File 
"/opt/mitaka/local/lib/python2.7/site-packages/migrate/changeset/ansisql.py", 
line 294, in visit_migrate_unique_constraint
2016-04-20 09:31:37.822 10471 ERROR keystone self._visit_constraint(*p, **k)
2016-04-20 09:31:37.822 10471 ERROR keystone   File 
"/opt/mitaka/local/lib/python2.7/site-packages/migrate/changeset/ansisql.py", 
line 306, in _visit_constraint
2016-04-20 09:31:37.822 10471 ERROR keystone self.execute()
2016-04-20 09:31:37.822 10471 ERROR keystone   File 
"/opt/mitaka/local/lib/python2.7/site-packages/migrate/changeset/ansisql.py", 
line 44, in execute
2016-04-20 

[Yahoo-eng-team] [Bug 1555384] [NEW] ML2: routers and multiple mechanism drivers

2016-03-09 Thread Sam Morrison
Public bug reported:

We have an ML2 environment with linuxbridge and midonet networks. For L3
we use the midonet driver.

If a user tries to bind a linuxbridge port to a midonet router it
returns the following error:

{"message":"There is no NeutronPort with ID eddb3d08-97f6-480d-
a1a4-9dfe3d22e62c.","code":404}

However the port is created (the user can't see it) and doing a router show
shows the router as having that interface.

Ideally this should not be allowed and should error out gracefully.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1555384

Title:
  ML2: routers and multiple mechanism drivers

Status in neutron:
  New

Bug description:
  We have an ML2 environment with linuxbridge and midonet networks. For
  L3 we use the midonet driver.

  If a user tries to bind a linuxbridge port to a midonet router it
  returns the following error:

  {"message":"There is no NeutronPort with ID eddb3d08-97f6-480d-
  a1a4-9dfe3d22e62c.","code":404}

  However the port is created (the user can't see it) and doing a router
  show shows the router as having that interface.

  Ideally this should not be allowed and should error out gracefully.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1555384/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506701] [NEW] metadata service security-groups doesn't work with neutron

2015-10-15 Thread Sam Morrison
Public bug reported:

Using the metadata to get the security groups for an instance by

curl http://169.254.169.254/latest/meta-data/security-groups

Doesn't work when you are using neutron. This is because the metadata
server is hard coded to look for security groups in the nova DB.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1506701

Title:
  metadata service security-groups doesn't work with neutron

Status in OpenStack Compute (nova):
  New

Bug description:
  Using the metadata to get the security groups for an instance by

  curl http://169.254.169.254/latest/meta-data/security-groups

  Doesn't work when you are using neutron. This is because the metadata
  server is hard coded to look for security groups in the nova DB.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1506701/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501556] [NEW] periodic task for erroring build timeouts tries to set error state on deleted instances

2015-09-30 Thread Sam Morrison
Public bug reported:

In our nova-compute logs we get a ton of these messages over and over

2015-10-01 11:01:54.781 30811 WARNING nova.compute.manager [req-
f61f4f85-72e7-481b-a8a3-90551bdc4b58 - - - - -] [instance: 75f733b5
-842e-4bde-9570-efa2735e6f12] Instance build timed out. Set to error
state.

Upon looking in the DB they are all deleted

select deleted_at, deleted, vm_state, task_state from instances where uuid = 
'75f733b5-842e-4bde-9570-efa2735e6f12';
+---------------------+---------+----------+------------+
| deleted_at          | deleted | vm_state | task_state |
+---------------------+---------+----------+------------+
| 2015-08-17 00:47:18 |   12283 | building | deleting   |
+---------------------+---------+----------+------------+

We have instance_build_timeout = 3600

I think _check_instance_build_time in compute.manager needs to filter on
deleted instances but there may be a reason it checks deleted instances
too.

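Something like the following is the kind of filtering I mean (a rough
sketch with illustrative field names, not the actual nova code):

from datetime import datetime, timedelta

def instances_to_error(instances, timeout_seconds, now=None):
    # 'instances' is a list of dicts with 'deleted_at', 'vm_state' and
    # 'created_at' keys; the real periodic task works on instance rows,
    # but the idea is the same: skip anything already deleted instead of
    # repeatedly trying to flip it to ERROR.
    now = now or datetime.utcnow()
    cutoff = now - timedelta(seconds=timeout_seconds)
    return [inst for inst in instances
            if inst.get('deleted_at') is None
            and inst.get('vm_state') == 'building'
            and inst.get('created_at') is not None
            and inst['created_at'] < cutoff]
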
** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1501556

Title:
  periodic task for erroring build timeouts tries to set error state on
  deleted instances

Status in OpenStack Compute (nova):
  New

Bug description:
  In our nova-compute logs we get a ton of these messages over and over

  2015-10-01 11:01:54.781 30811 WARNING nova.compute.manager [req-
  f61f4f85-72e7-481b-a8a3-90551bdc4b58 - - - - -] [instance: 75f733b5
  -842e-4bde-9570-efa2735e6f12] Instance build timed out. Set to error
  state.

  Upon looking in the DB they are all deleted

  select deleted_at, deleted, vm_state, task_state from instances where uuid = 
'75f733b5-842e-4bde-9570-efa2735e6f12';
  +---------------------+---------+----------+------------+
  | deleted_at          | deleted | vm_state | task_state |
  +---------------------+---------+----------+------------+
  | 2015-08-17 00:47:18 |   12283 | building | deleting   |
  +---------------------+---------+----------+------------+

  We have instance_build_timeout = 3600

  I think _check_instance_build_time in compute.manager needs to filter
  on deleted instances but there may be a reason it checks deleted
  instances too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1501556/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1500658] [NEW] Unhandled 404 when neutron doesn't support floating IPs

2015-09-28 Thread Sam Morrison
Public bug reported:

Getting the following error in nova-api logs when neutron doesn't
support floating IPs

2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/__init__.py", line 125, in 
__call__
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack return 
req.get_response(self.application)
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/request.py", line 1320, in send
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack application, 
catch_exc_info=False)
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/request.py", line 1284, in 
call_application
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack return resp(environ, 
start_response)
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py", 
line 634, in __call__
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack return 
self._call_app(env, start_response)
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/keystonemiddleware/auth_token/__init__.py", 
line 554, in _call_app
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack return self._app(env, 
_fake_start_response)
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack return resp(environ, 
start_response)
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack return resp(environ, 
start_response)
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/routes/middleware.py", line 131, in __call__
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack response = 
self.app(environ, start_response)
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in __call__
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack return resp(environ, 
start_response)
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 130, in __call__
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack return self.func(req, 
*args, **kwargs)
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py", line 756, in 
__call__
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack content_type, body, 
accept)
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py", line 821, in 
_process_stack
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/wsgi.py", line 911, in 
dispatch
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack return 
method(req=request, **action_args)
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/nova/api/openstack/compute/contrib/floating_ips.py",
 line 108, in index
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack floating_ips = 
self.network_api.get_floating_ips_by_project(context)
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py", line 1262, in 
get_floating_ips_by_project
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack fips = 
client.list_floatingips(tenant_id=project_id)['floatingips']
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 102, in 
with_params
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack ret = 
self.function(instance, *args, **kwargs)
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 691, in 
list_floatingips
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack **_params)
2015-09-29 10:04:50.551 6376 TRACE nova.api.openstack   File 
"/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 307, in 
list

[Yahoo-eng-team] [Bug 1484745] [NEW] DB migration juno -> kilo fails: Can't create table nsxv_internal_networks

2015-08-13 Thread Sam Morrison
Public bug reported:

Get the following error when upgrading my juno DB to kilo

neutron-db-manage --config-file /etc/neutron/neutron.conf   --config-file 
/etc/neutron/plugins/ml2/ml2_conf.ini upgrade kilo
INFO  [alembic.migration] Context impl MySQLImpl.
INFO  [alembic.migration] Will assume non-transactional DDL.
INFO  [alembic.migration] Context impl MySQLImpl.
INFO  [alembic.migration] Will assume non-transactional DDL.
INFO  [alembic.migration] Running upgrade 38495dc99731 -> 4dbe243cd84d, nsxv
Traceback (most recent call last):
  File /usr/bin/neutron-db-manage, line 10, in module
sys.exit(main())
  File /usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py, line 
238, in main
CONF.command.func(config, CONF.command.name)
  File /usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py, line 
106, in do_upgrade
do_alembic_command(config, cmd, revision, sql=CONF.command.sql)
  File /usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py, line 72, 
in do_alembic_command
getattr(alembic_command, cmd)(config, *args, **kwargs)
  File /usr/lib/python2.7/dist-packages/alembic/command.py, line 165, in 
upgrade
script.run_env()
  File /usr/lib/python2.7/dist-packages/alembic/script.py, line 382, in 
run_env
util.load_python_file(self.dir, 'env.py')
  File /usr/lib/python2.7/dist-packages/alembic/util.py, line 241, in 
load_python_file
module = load_module_py(module_id, path)
  File /usr/lib/python2.7/dist-packages/alembic/compat.py, line 79, in 
load_module_py
mod = imp.load_source(module_id, path, fp)
  File 
/usr/lib/python2.7/dist-packages/neutron/db/migration/alembic_migrations/env.py,
 line 109, in module
run_migrations_online()
  File 
/usr/lib/python2.7/dist-packages/neutron/db/migration/alembic_migrations/env.py,
 line 100, in run_migrations_online
context.run_migrations()
  File string, line 7, in run_migrations
  File /usr/lib/python2.7/dist-packages/alembic/environment.py, line 742, in 
run_migrations
self.get_context().run_migrations(**kw)
  File /usr/lib/python2.7/dist-packages/alembic/migration.py, line 305, in 
run_migrations
step.migration_fn(**kw)
  File 
/usr/lib/python2.7/dist-packages/neutron/db/migration/alembic_migrations/versions/4dbe243cd84d_nsxv.py,
 line 65, in upgrade
sa.PrimaryKeyConstraint('network_purpose'))
  File string, line 7, in create_table
  File /usr/lib/python2.7/dist-packages/alembic/operations.py, line 936, in 
create_table
self.impl.create_table(table)
  File /usr/lib/python2.7/dist-packages/alembic/ddl/impl.py, line 182, in 
create_table
self._exec(schema.CreateTable(table))
  File /usr/lib/python2.7/dist-packages/alembic/ddl/impl.py, line 106, in 
_exec
return conn.execute(construct, *multiparams, **params)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 729, 
in execute
return meth(self, multiparams, params)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/sql/ddl.py, line 69, in 
_execute_on_connection
return connection._execute_ddl(self, multiparams, params)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 783, 
in _execute_ddl
compiled
  File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 958, 
in _execute_context
context)
  File 
/usr/lib/python2.7/dist-packages/oslo_db/sqlalchemy/compat/handle_error.py, 
line 261, in _handle_dbapi_exception
e, statement, parameters, cursor, context)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 1155, 
in _handle_dbapi_exception
util.raise_from_cause(newraise, exc_info)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/util/compat.py, line 199, 
in raise_from_cause
reraise(type(exception), exception, tb=exc_tb)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 951, 
in _execute_context
context)
  File /usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py, line 
436, in do_execute
cursor.execute(statement, parameters)
  File /usr/lib/python2.7/dist-packages/MySQLdb/cursors.py, line 174, in 
execute
self.errorhandler(self, exc, value)
  File /usr/lib/python2.7/dist-packages/MySQLdb/connections.py, line 36, in 
defaulterrorhandler
raise errorclass, errorvalue
sqlalchemy.exc.OperationalError: (OperationalError) (1005, Can't create table 
'neutron_rctest.nsxv_internal_networks' (errno: 150)) \nCREATE TABLE 
nsxv_internal_networks (\n\tnetwork_purpose ENUM('inter_edge_net') NOT NULL, 
\n\tnetwork_id VARCHAR(36), \n\tPRIMARY KEY (network_purpose), \n\tFOREIGN 
KEY(network_id) REFERENCES networks (id) ON DELETE CASCADE\n)ENGINE=InnoDB\n\n 
()

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1484745

Title:
  DB migration juno -> kilo fails: Can't create table
  nsxv_internal_networks

Status in neutron:
  New

Bug 

[Yahoo-eng-team] [Bug 1484738] [NEW] keyerror when refreshing instance security groups

2015-08-13 Thread Sam Morrison
Public bug reported:

On a clean kilo install using source security groups I am seeing the
following trace on boot and delete


a2413f7] Deallocating network for instance _deallocate_network 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py:2098
2015-08-14 09:46:06.688 11618 ERROR oslo_messaging.rpc.dispatcher 
[req-b8f44d34-96b2-4e40-ac22-15ccc6e44e59 - - - - -] Exception during message 
handling: 'metadata'
2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 142, 
in _dispatch_and_reply
2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 186, 
in _dispatch
2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 130, 
in _do_dispatch
2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher result = 
func(ctxt, **new_args)
2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 6772, in 
refresh_instance_security_rules
2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher return 
self.manager.refresh_instance_security_rules(ctxt, instance)
2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 434, in 
decorated_function
2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher args = 
(_load_instance(args[0]),) + args[1:]
2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 425, in 
_load_instance
2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher 
expected_attrs=metas)
2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/nova/objects/instance.py, line 506, in 
_from_db_object
2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher 
instance['metadata'] = utils.instance_meta(db_inst)
2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/nova/utils.py, line 817, in instance_meta
2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher if 
isinstance(instance['metadata'], dict):
2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher KeyError: 
'metadata'

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1484738

Title:
  keyerror when refreshing instance security groups

Status in OpenStack Compute (nova):
  New

Bug description:
  On a clean kilo install using source security groups I am seeing the
  following trace on boot and delete


  a2413f7] Deallocating network for instance _deallocate_network 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py:2098
  2015-08-14 09:46:06.688 11618 ERROR oslo_messaging.rpc.dispatcher 
[req-b8f44d34-96b2-4e40-ac22-15ccc6e44e59 - - - - -] Exception during message 
handling: 'metadata'
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher Traceback 
(most recent call last):
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 142, 
in _dispatch_and_reply
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher 
executor_callback))
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 186, 
in _dispatch
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher 
executor_callback)
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 130, 
in _do_dispatch
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher result 
= func(ctxt, **new_args)
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 6772, in 
refresh_instance_security_rules
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher return 
self.manager.refresh_instance_security_rules(ctxt, instance)
  2015-08-14 09:46:06.688 11618 TRACE oslo_messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 434, in 
decorated_function
  2015-08-14 

[Yahoo-eng-team] [Bug 1479181] [NEW] Cells: Build instance doesn't work with kilo api, juno compute

2015-07-28 Thread Sam Morrison
Public bug reported:

When a Kilo api cell sends an instance_build to a Juno compute cell it
sends down objects, but Juno is expecting primitives.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1479181

Title:
  Cells: Build instance doesn't work with kilo api, juno compute

Status in OpenStack Compute (nova):
  New

Bug description:
  When a Kilo api cell sends an instance_build to a Juno compute cell it
  sends down objects, but Juno is expecting primitives.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1479181/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1469029] [NEW] Migrations fail going from juno -> kilo

2015-06-26 Thread Sam Morrison
Public bug reported:

Trying to upgrade from Juno -> Kilo

keystone-manage db_version
55

keystone-manage db_sync
2015-06-26 16:52:47.494 6169 CRITICAL keystone [-] ProgrammingError: 
(ProgrammingError) (1146, Table 'keystone_k.identity_provider' doesn't exist) 
'ALTER TABLE identity_provider Engine=InnoDB' ()
2015-06-26 16:52:47.494 6169 TRACE keystone Traceback (most recent call last):
2015-06-26 16:52:47.494 6169 TRACE keystone   File 
/opt/kilo/bin/keystone-manage, line 10, in module
2015-06-26 16:52:47.494 6169 TRACE keystone execfile(__file__)
2015-06-26 16:52:47.494 6169 TRACE keystone   File 
/opt/keystone/bin/keystone-manage, line 44, in module
2015-06-26 16:52:47.494 6169 TRACE keystone cli.main(argv=sys.argv, 
config_files=config_files)
2015-06-26 16:52:47.494 6169 TRACE keystone   File 
/opt/keystone/keystone/cli.py, line 585, in main
2015-06-26 16:52:47.494 6169 TRACE keystone CONF.command.cmd_class.main()
2015-06-26 16:52:47.494 6169 TRACE keystone   File 
/opt/keystone/keystone/cli.py, line 76, in main
2015-06-26 16:52:47.494 6169 TRACE keystone 
migration_helpers.sync_database_to_version(extension, version)
2015-06-26 16:52:47.494 6169 TRACE keystone   File 
/opt/keystone/keystone/common/sql/migration_helpers.py, line 247, in 
sync_database_to_version
2015-06-26 16:52:47.494 6169 TRACE keystone 
_sync_extension_repo(default_extension, version)
2015-06-26 16:52:47.494 6169 TRACE keystone   File 
/opt/keystone/keystone/common/sql/migration_helpers.py, line 232, in 
_sync_extension_repo
2015-06-26 16:52:47.494 6169 TRACE keystone _fix_federation_tables(engine)
2015-06-26 16:52:47.494 6169 TRACE keystone   File 
/opt/keystone/keystone/common/sql/migration_helpers.py, line 167, in 
_fix_federation_tables
2015-06-26 16:52:47.494 6169 TRACE keystone engine.execute(ALTER TABLE 
identity_provider Engine=InnoDB)
2015-06-26 16:52:47.494 6169 TRACE keystone   File 
/opt/kilo/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py, line 
1863, in execute
2015-06-26 16:52:47.494 6169 TRACE keystone return 
connection.execute(statement, *multiparams, **params)
2015-06-26 16:52:47.494 6169 TRACE keystone   File 
/opt/kilo/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py, line 
833, in execute
2015-06-26 16:52:47.494 6169 TRACE keystone return 
self._execute_text(object, multiparams, params)
2015-06-26 16:52:47.494 6169 TRACE keystone   File 
/opt/kilo/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py, line 
982, in _execute_text
2015-06-26 16:52:47.494 6169 TRACE keystone statement, parameters
2015-06-26 16:52:47.494 6169 TRACE keystone   File 
/opt/kilo/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py, line 
1070, in _execute_context
2015-06-26 16:52:47.494 6169 TRACE keystone context)
2015-06-26 16:52:47.494 6169 TRACE keystone   File 
/opt/kilo/local/lib/python2.7/site-packages/oslo_db/sqlalchemy/compat/handle_error.py,
 line 261, in _handle_dbapi_exception
2015-06-26 16:52:47.494 6169 TRACE keystone e, statement, parameters, 
cursor, context)
2015-06-26 16:52:47.494 6169 TRACE keystone   File 
/opt/kilo/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py, line 
1267, in _handle_dbapi_exception
2015-06-26 16:52:47.494 6169 TRACE keystone util.raise_from_cause(newraise, 
exc_info)
2015-06-26 16:52:47.494 6169 TRACE keystone   File 
/opt/kilo/local/lib/python2.7/site-packages/sqlalchemy/util/compat.py, line 
199, in raise_from_cause
2015-06-26 16:52:47.494 6169 TRACE keystone reraise(type(exception), 
exception, tb=exc_tb)
2015-06-26 16:52:47.494 6169 TRACE keystone   File 
/opt/kilo/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py, line 
1063, in _execute_context
2015-06-26 16:52:47.494 6169 TRACE keystone context)
2015-06-26 16:52:47.494 6169 TRACE keystone   File 
/opt/kilo/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py, 
line 442, in do_execute
2015-06-26 16:52:47.494 6169 TRACE keystone cursor.execute(statement, 
parameters)
2015-06-26 16:52:47.494 6169 TRACE keystone   File 
/opt/kilo/local/lib/python2.7/site-packages/MySQLdb/cursors.py, line 205, in 
execute
2015-06-26 16:52:47.494 6169 TRACE keystone self.errorhandler(self, exc, 
value)
2015-06-26 16:52:47.494 6169 TRACE keystone   File 
/opt/kilo/local/lib/python2.7/site-packages/MySQLdb/connections.py, line 36, 
in defaulterrorhandler
2015-06-26 16:52:47.494 6169 TRACE keystone raise errorclass, errorvalue
2015-06-26 16:52:47.494 6169 TRACE keystone ProgrammingError: 
(ProgrammingError) (1146, Table 'keystone_k.identity_provider' doesn't exist) 
'ALTER TABLE identity_provider Engine=InnoDB' ()
2015-06-26 16:52:47.494 6169 TRACE keystone 


If I run the command again I get:

keystone-manage db_version
67

keystone-manage db_sync
2015-06-26 16:53:25.489 6186 CRITICAL keystone [-] ValueError: Tables 
endpoint_group,project_endpoint,project_endpoint_group have non utf8 
collation, please make sure all tables 

[Yahoo-eng-team] [Bug 1466258] [NEW] Metadata agent can't override endpoint_url

2015-06-17 Thread Sam Morrison
Public bug reported:

The metadata agent has no ability to override which url neutron uses. It
relies on neutron being in the keystone catalog.

If neutron isn't in the catalog, the metadata agent will fail.

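A sketch of the kind of override I have in mind, using oslo.config (the
option name, group and helper below are made up for illustration and are
not existing neutron settings; 'session' is assumed to be a keystone auth
session):

from oslo_config import cfg

opts = [
    cfg.StrOpt('endpoint_url',
               help='Neutron endpoint to use instead of looking one up '
                    'in the keystone service catalog.'),
]
cfg.CONF.register_opts(opts, group='metadata_agent')

def neutron_endpoint(session):
    # Prefer the explicit override; only fall back to the catalog when
    # no endpoint_url has been configured.
    if cfg.CONF.metadata_agent.endpoint_url:
        return cfg.CONF.metadata_agent.endpoint_url
    return session.get_endpoint(service_type='network')
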
** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1466258

Title:
  Metadata agent can't override endpoint_url

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The metadata agent has no ability to override which url neutron uses.
  It relies on neutron being in the keystone catalog.

  If neutron isn't in the catalog, the metadata agent will fail.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1466258/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1466305] [NEW] Booting from volume no longer can be bigger than flavor size

2015-06-17 Thread Sam Morrison
Public bug reported:

After upgrading to Juno you can no longer boot from a volume that is bigger
than the flavour's disk size.

There should be no need to take this into account when using a volume.

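The check I would expect is roughly this (a sketch only, parameter names
are illustrative):

def root_disk_too_big(flavor_root_gb, root_size_gb, boot_from_volume):
    # When the root disk is a cinder volume its size has nothing to do
    # with the flavour's disk size, so the flavour check should simply
    # be skipped in that case.
    if boot_from_volume:
        return False
    return root_size_gb > flavor_root_gb
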
** Affects: nova
 Importance: Undecided
 Assignee: Sam Morrison (sorrison)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1466305

Title:
  Booting from volume no longer can be bigger than flavor size

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  After upgrading to Juno you can no longer boot from a volume that is
  bigger than the flavour's disk size.

  There should be no need to take this into account when using a volume.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1466305/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1456381] [NEW] Cells: attach/detach interface not supported

2015-05-18 Thread Sam Morrison
Public bug reported:

Attach and detach interface are not supported when using cells

** Affects: nova
 Importance: Undecided
 Assignee: Sam Morrison (sorrison)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1456381

Title:
  Cells: attach/detach interface not supported

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Attach and detach interface are not supported when using cells

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1456381/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1422049] Re: Security group checking action permissions raise error

2015-05-12 Thread Sam Morrison
** Changed in: horizon/juno
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1422049

Title:
  Security group checking action permissions raise error

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) juno series:
  Confirmed

Bug description:
  When using nova-network, I got the output on horizon:

  [Sun Feb 15 02:48:41.965163 2015] [:error] [pid 21259:tid 140656137611008] 
Error while checking action permissions.
  [Sun Feb 15 02:48:41.965184 2015] [:error] [pid 21259:tid 140656137611008] 
Traceback (most recent call last):
  [Sun Feb 15 02:48:41.965193 2015] [:error] [pid 21259:tid 140656137611008]   
File 
/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tables/base.py, 
line 1260, in _filter_action
  [Sun Feb 15 02:48:41.965199 2015] [:error] [pid 21259:tid 140656137611008]
 return action._allowed(request, datum) and row_matched
  [Sun Feb 15 02:48:41.965205 2015] [:error] [pid 21259:tid 140656137611008]   
File 
/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tables/actions.py, 
line 137, in _allowed
  [Sun Feb 15 02:48:41.965211 2015] [:error] [pid 21259:tid 140656137611008]
 return self.allowed(request, datum)
  [Sun Feb 15 02:48:41.965440 2015] [:error] [pid 21259:tid 140656137611008]   
File 
/opt/stack/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/access_and_security/security_groups/tables.py,
 line 83, in allowed
  [Sun Feb 15 02:48:41.965457 2015] [:error] [pid 21259:tid 140656137611008]
 if usages['security_groups']['available'] <= 0:
  [Sun Feb 15 02:48:41.965466 2015] [:error] [pid 21259:tid 140656137611008] 
KeyError: 'available'
  [Sun Feb 15 02:48:41.986480 2015] [:error] [pid 21259:tid 140656137611008] 
Error while checking action permissions.
  [Sun Feb 15 02:48:41.986533 2015] [:error] [pid 21259:tid 140656137611008] 
Traceback (most recent call last):
  [Sun Feb 15 02:48:41.986569 2015] [:error] [pid 21259:tid 140656137611008]   
File 
/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tables/base.py, 
line 1260, in _filter_action
  [Sun Feb 15 02:48:41.986765 2015] [:error] [pid 21259:tid 140656137611008]
 return action._allowed(request, datum) and row_matched
  [Sun Feb 15 02:48:41.986806 2015] [:error] [pid 21259:tid 140656137611008]   
File 
/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tables/actions.py, 
line 137, in _allowed
  [Sun Feb 15 02:48:41.986841 2015] [:error] [pid 21259:tid 140656137611008]
 return self.allowed(request, datum)
  [Sun Feb 15 02:48:41.987010 2015] [:error] [pid 21259:tid 140656137611008]   
File 
/opt/stack/horizon/openstack_dashboard/wsgi/../../openstack_dashboard/dashboards/project/access_and_security/security_groups/tables.py,
 line 83, in allowed
  [Sun Feb 15 02:48:41.987051 2015] [:error] [pid 21259:tid 140656137611008]
 if usages['security_groups']['available'] <= 0:
  [Sun Feb 15 02:48:41.987088 2015] [:error] [pid 21259:tid 140656137611008] 
KeyError: 'available'

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1422049/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453629] [NEW] Creating neutron ports uses incorrect instance availability zone

2015-05-10 Thread Sam Morrison
Public bug reported:

The AZ of an instance is calculated incorrectly when creating neutron ports.
It uses the requested instance AZ as opposed to the actual AZ of the
instance.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1453629

Title:
  Creating neutron ports uses incorrect instance availability zone

Status in OpenStack Compute (Nova):
  New

Bug description:
  The AZ of an instance is calculated incorrectly when creating neutron
  It uses the requested instance AZ as opposed to the actual AZ of the
  instance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1453629/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450318] [NEW] Create security group button gone

2015-04-29 Thread Sam Morrison
Public bug reported:

We upgraded our dashboard to juno and now the Create Security Group
button has disappeared.

I've tracked this down to a KeyError in the allowed method of class
CreateGroup(tables.LinkAction):

if usages['security_groups']['available'] <= 0:

KeyError: ('available',)

pp usages['security_groups']
{'quota': 10}

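A defensive version of that check along these lines would keep the button
visible when the quota data only contains 'quota' (a sketch only, not the
actual horizon code):

def create_allowed(usages):
    # With nova-network the usages dict can be just {'quota': 10} and has
    # no 'available' key, which is what raises the KeyError above.
    sg = usages.get('security_groups', {})
    available = sg.get('available')
    if available is None:
        return True     # can't tell the usage, so don't hide the button
    return available > 0

print(create_allowed({'security_groups': {'quota': 10}}))      # True
print(create_allowed({'security_groups': {'available': 0}}))   # False
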
** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1450318

Title:
  Create security group button gone

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  We upgraded our dashboard to juno and now the Create Security Group
  button has disappeared.

  I've tracked this down to a KeyError in the allowed method of class
  CreateGroup(tables.LinkAction):

  if usages['security_groups']['available'] <= 0:

  KeyError: ('available',)

  pp usages['security_groups']
  {'quota': 10}

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1450318/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362766] Re: ConnectionFailed: Connection to XXXXXX failed: 'HTTPSConnectionPool' object has no attribute 'insecure'

2015-03-31 Thread Sam Morrison
Fixed in 0.14.2

** Changed in: python-glanceclient
   Status: New => Fix Released

** Also affects: python-glanceclient (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362766

Title:
  ConnectionFailed: Connection to XX failed: 'HTTPSConnectionPool'
  object has no attribute 'insecure'

Status in OpenStack Identity  (Keystone) Middleware:
  Incomplete
Status in OpenStack Compute (Nova):
  Incomplete
Status in Python client library for Glance:
  Fix Released
Status in Python client library for Neutron:
  Incomplete
Status in python-glanceclient package in Ubuntu:
  New

Bug description:
  While compute manager was trying to authenticate with neutronclient,
  we see the following:

  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager Traceback 
(most recent call last):
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager   File 
/usr/lib/python2.7/site-packages/powervc_nova/compute/manager.py, line 672, 
in _populate_admin_context
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager 
nclient.authenticate()
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager   File 
/usr/lib/python2.7/site-packages/neutronclient/client.py, line 231, in 
authenticate
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager 
self._authenticate_keystone()
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager   File 
/usr/lib/python2.7/site-packages/neutronclient/client.py, line 209, in 
_authenticate_keystone
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager 
allow_redirects=True)
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager   File 
/usr/lib/python2.7/site-packages/neutronclient/client.py, line 113, in 
_cs_request
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager raise 
exceptions.ConnectionFailed(reason=e)
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager 
ConnectionFailed: Connection to neutron failed: 'HTTPSConnectionPool' object 
has no attribute 'insecure'

  Setting a pdb breakpoint and stepping into the code, I see that the
  requests library is getting a connection object from a pool.  The
  interesting thing is that the connection object is actually from
  glanceclient.common.https.HTTPSConnectionPool.  It seems odd to me
  that neutronclient is using a connection object from glanceclient
  pool, but I do not know this requests code.  Here is the stack just
  before failure:

/usr/lib/python2.7/site-packages/neutronclient/client.py(234)authenticate()
  - self._authenticate_keystone()

/usr/lib/python2.7/site-packages/neutronclient/client.py(212)_authenticate_keystone()
  - allow_redirects=True)
/usr/lib/python2.7/site-packages/neutronclient/client.py(106)_cs_request()
  - resp, body = self.request(*args, **kargs)
/usr/lib/python2.7/site-packages/neutronclient/client.py(151)request()
  - **kwargs)
/usr/lib/python2.7/site-packages/requests/api.py(44)request()
  - return session.request(method=method, url=url, **kwargs)
/usr/lib/python2.7/site-packages/requests/sessions.py(335)request()
  - resp = self.send(prep, **send_kwargs)
/usr/lib/python2.7/site-packages/requests/sessions.py(438)send()
  - r = adapter.send(request, **kwargs)
/usr/lib/python2.7/site-packages/requests/adapters.py(292)send()
  - timeout=timeout
/usr/lib/python2.7/site-packages/urllib3/connectionpool.py(454)urlopen()
  - conn = self._get_conn(timeout=pool_timeout)
/usr/lib/python2.7/site-packages/urllib3/connectionpool.py(272)_get_conn()
  - return conn or self._new_conn()
   
/usr/lib/python2.7/site-packages/glanceclient/common/https.py(100)_new_conn()
  - return VerifiedHTTPSConnection(host=self.host,

  The code about to run there is this:

  class HTTPSConnectionPool(connectionpool.HTTPSConnectionPool):
  """
  HTTPSConnectionPool will be instantiated when a new
  connection is requested to the HTTPSAdapter.This
  implementation overwrites the _new_conn method and
  returns an instances of glanceclient's VerifiedHTTPSConnection
  which handles no compression.

  ssl_compression is hard-coded to False because this will
  be used just when the user sets --no-ssl-compression.
  """

  scheme = 'https'

  def _new_conn(self):
  self.num_connections += 1
  return VerifiedHTTPSConnection(host=self.host,
 port=self.port,
 key_file=self.key_file,
 cert_file=self.cert_file,
 cacert=self.ca_certs,
 insecure=self.insecure,
 ssl_compression=False)

  Note the self.insecure, which 

[Yahoo-eng-team] [Bug 1362766] Re: ConnectionFailed: Connection to neutron failed: 'HTTPSConnectionPool' object has no attribute 'insecure'

2015-03-30 Thread Sam Morrison
** Also affects: keystonemiddleware
   Importance: Undecided
   Status: New

** Summary changed:

- ConnectionFailed: Connection to neutron failed: 'HTTPSConnectionPool' object 
has no attribute 'insecure'
+ ConnectionFailed: Connection to XX failed: 'HTTPSConnectionPool' object 
has no attribute 'insecure'

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362766

Title:
  ConnectionFailed: Connection to XX failed: 'HTTPSConnectionPool'
  object has no attribute 'insecure'

Status in OpenStack Identity  (Keystone) Middleware:
  New
Status in OpenStack Compute (Nova):
  New
Status in Python client library for Glance:
  Fix Released
Status in Python client library for Neutron:
  Incomplete

Bug description:
  While compute manager was trying to authenticate with neutronclient,
  we see the following:

  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager Traceback 
(most recent call last):
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager   File 
/usr/lib/python2.7/site-packages/powervc_nova/compute/manager.py, line 672, 
in _populate_admin_context
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager 
nclient.authenticate()
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager   File 
/usr/lib/python2.7/site-packages/neutronclient/client.py, line 231, in 
authenticate
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager 
self._authenticate_keystone()
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager   File 
/usr/lib/python2.7/site-packages/neutronclient/client.py, line 209, in 
_authenticate_keystone
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager 
allow_redirects=True)
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager   File 
/usr/lib/python2.7/site-packages/neutronclient/client.py, line 113, in 
_cs_request
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager raise 
exceptions.ConnectionFailed(reason=e)
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager 
ConnectionFailed: Connection to neutron failed: 'HTTPSConnectionPool' object 
has no attribute 'insecure'

  Setting a pdb breakpoint and stepping into the code, I see that the
  requests library is getting a connection object from a pool.  The
  interesting thing is that the connection object is actually from
  glanceclient.common.https.HTTPSConnectionPool.  It seems odd to me
  that neutronclient is using a connection object from glanceclient
  pool, but I do not know this requests code.  Here is the stack just
  before failure:

/usr/lib/python2.7/site-packages/neutronclient/client.py(234)authenticate()
  - self._authenticate_keystone()

/usr/lib/python2.7/site-packages/neutronclient/client.py(212)_authenticate_keystone()
  - allow_redirects=True)
/usr/lib/python2.7/site-packages/neutronclient/client.py(106)_cs_request()
  - resp, body = self.request(*args, **kargs)
/usr/lib/python2.7/site-packages/neutronclient/client.py(151)request()
  - **kwargs)
/usr/lib/python2.7/site-packages/requests/api.py(44)request()
  - return session.request(method=method, url=url, **kwargs)
/usr/lib/python2.7/site-packages/requests/sessions.py(335)request()
  - resp = self.send(prep, **send_kwargs)
/usr/lib/python2.7/site-packages/requests/sessions.py(438)send()
  - r = adapter.send(request, **kwargs)
/usr/lib/python2.7/site-packages/requests/adapters.py(292)send()
  - timeout=timeout
/usr/lib/python2.7/site-packages/urllib3/connectionpool.py(454)urlopen()
  - conn = self._get_conn(timeout=pool_timeout)
/usr/lib/python2.7/site-packages/urllib3/connectionpool.py(272)_get_conn()
  - return conn or self._new_conn()
   
/usr/lib/python2.7/site-packages/glanceclient/common/https.py(100)_new_conn()
  - return VerifiedHTTPSConnection(host=self.host,

  The code about to run there is this:

  class HTTPSConnectionPool(connectionpool.HTTPSConnectionPool):
  """
  HTTPSConnectionPool will be instantiated when a new
  connection is requested to the HTTPSAdapter.This
  implementation overwrites the _new_conn method and
  returns an instances of glanceclient's VerifiedHTTPSConnection
  which handles no compression.

  ssl_compression is hard-coded to False because this will
  be used just when the user sets --no-ssl-compression.
  """

  scheme = 'https'

  def _new_conn(self):
  self.num_connections += 1
  return VerifiedHTTPSConnection(host=self.host,
 port=self.port,
 key_file=self.key_file,
 cert_file=self.cert_file,
 cacert=self.ca_certs,
 insecure=self.insecure,
 

[Yahoo-eng-team] [Bug 1362766] Re: ConnectionFailed: Connection to neutron failed: 'HTTPSConnectionPool' object has no attribute 'insecure'

2015-03-26 Thread Sam Morrison
I'm getting this in nova-api too, we don't use neutron.

I get it when I do a nova list or nova show. Restarting nova-api fixed
it for a while but then it comes back again.

==> /var/log/nova/nova-api.log <==
2015-03-27 13:56:06.649 10962 WARNING keystonemiddleware.auth_token [-] 
Retrying on HTTP connection exception: 'HTTPSConnectionPool' object has no 
attribute 'insecure'
2015-03-27 13:56:07.153 10962 WARNING keystonemiddleware.auth_token [-] 
Retrying on HTTP connection exception: 'HTTPSConnectionPool' object has no 
attribute 'insecure'
2015-03-27 13:56:08.156 10962 WARNING keystonemiddleware.auth_token [-] 
Retrying on HTTP connection exception: 'HTTPSConnectionPool' object has no 
attribute 'insecure'
2015-03-27 13:56:10.159 10962 ERROR keystonemiddleware.auth_token [-] HTTP 
connection exception: 'HTTPSConnectionPool' object has no attribute 'insecure'
2015-03-27 13:56:10.161 10962 WARNING keystonemiddleware.auth_token [-] 
Authorization failed for token
2015-03-27 13:56:10.162 10962 INFO nova.osapi_compute.wsgi.server [-] 
128.250.116.173,172.26.9.144 GET 
/v1.1/0bdf024c921848c4b74d9e69af9edf08/servers/detail HTTP/1.1 status: 401 
len: 282 time: 3.5197592


** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362766

Title:
  ConnectionFailed: Connection to neutron failed: 'HTTPSConnectionPool'
  object has no attribute 'insecure'

Status in OpenStack Compute (Nova):
  New
Status in Python client library for Glance:
  Fix Released
Status in Python client library for Neutron:
  Incomplete

Bug description:
  While compute manager was trying to authenticate with neutronclient,
  we see the following:

  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager Traceback 
(most recent call last):
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager   File 
/usr/lib/python2.7/site-packages/powervc_nova/compute/manager.py, line 672, 
in _populate_admin_context
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager 
nclient.authenticate()
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager   File 
/usr/lib/python2.7/site-packages/neutronclient/client.py, line 231, in 
authenticate
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager 
self._authenticate_keystone()
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager   File 
/usr/lib/python2.7/site-packages/neutronclient/client.py, line 209, in 
_authenticate_keystone
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager 
allow_redirects=True)
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager   File 
/usr/lib/python2.7/site-packages/neutronclient/client.py, line 113, in 
_cs_request
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager raise 
exceptions.ConnectionFailed(reason=e)
  2014-08-28 05:03:33.052 29982 TRACE powervc_nova.compute.manager 
ConnectionFailed: Connection to neutron failed: 'HTTPSConnectionPool' object 
has no attribute 'insecure'

  Setting a pdb breakpoint and stepping into the code, I see that the
  requests library is getting a connection object from a pool.  The
  interesting thing is that the connection object is actually from
  glanceclient.common.https.HTTPSConnectionPool.  It seems odd to me
  that neutronclient is using a connection object from glanceclient
  pool, but I do not know this requests code.  Here is the stack just
  before failure:

/usr/lib/python2.7/site-packages/neutronclient/client.py(234)authenticate()
  - self._authenticate_keystone()

/usr/lib/python2.7/site-packages/neutronclient/client.py(212)_authenticate_keystone()
  - allow_redirects=True)
/usr/lib/python2.7/site-packages/neutronclient/client.py(106)_cs_request()
  - resp, body = self.request(*args, **kargs)
/usr/lib/python2.7/site-packages/neutronclient/client.py(151)request()
  - **kwargs)
/usr/lib/python2.7/site-packages/requests/api.py(44)request()
  - return session.request(method=method, url=url, **kwargs)
/usr/lib/python2.7/site-packages/requests/sessions.py(335)request()
  - resp = self.send(prep, **send_kwargs)
/usr/lib/python2.7/site-packages/requests/sessions.py(438)send()
  - r = adapter.send(request, **kwargs)
/usr/lib/python2.7/site-packages/requests/adapters.py(292)send()
  - timeout=timeout
/usr/lib/python2.7/site-packages/urllib3/connectionpool.py(454)urlopen()
  - conn = self._get_conn(timeout=pool_timeout)
/usr/lib/python2.7/site-packages/urllib3/connectionpool.py(272)_get_conn()
  - return conn or self._new_conn()
   
/usr/lib/python2.7/site-packages/glanceclient/common/https.py(100)_new_conn()
  - return VerifiedHTTPSConnection(host=self.host,

  The code about to run there is this:

  class HTTPSConnectionPool(connectionpool.HTTPSConnectionPool):
  
  

[Yahoo-eng-team] [Bug 1437126] [NEW] conductor workers count doesn't respect config

2015-03-26 Thread Sam Morrison
Public bug reported:

If I have

[conductor]
workers = 0

I get 1 conductor process

Increasing the value I get the following

workers = 1 => 1 process
workers = 2 => 3 processes
workers = 3 => 4 processes

Looks like if workers > 1 processes = workers + 1

workers < 2 => processes = 1

This is in Juno and has changed (again) from Icehouse.

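My guess is that the extra process is just the parent sticking around to
supervise its forked workers; a toy model of the behaviour I'm seeing (not
the nova code):

import os
import time

def worker_loop():
    time.sleep(0.1)          # stand-in for the real conductor work

def launch(workers):
    if workers <= 1:
        worker_loop()        # runs in this process: 1 process total
        return
    children = []
    for _ in range(workers):
        pid = os.fork()
        if pid == 0:
            worker_loop()
            os._exit(0)
        children.append(pid)
    for pid in children:     # parent supervises: workers + 1 processes
        os.waitpid(pid, 0)

if __name__ == '__main__':
    launch(2)                # shows up as 3 processes while running
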
** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1437126

Title:
  conductor workers count doesn't respect config

Status in OpenStack Compute (Nova):
  New

Bug description:
  If I have

  [conductor]
  workers = 0

  I get 1 conductor process

  Increasing the value I get the following

  workers = 1 => 1 process
  workers = 2 => 3 processes
  workers = 3 => 4 processes

  Looks like if workers > 1, processes = workers + 1

  workers < 2 => processes = 1

  This is in Juno and has changed (again) from Icehouse.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1437126/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1413087] [NEW] aggregate_multitenancy_isolation doesn't work with multiple tenants

2015-01-20 Thread Sam Morrison
Public bug reported:

Even though the aggregate_multitenancy_isolation filter says it can filter on
multiple tenants, it currently doesn't.
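
A hedged illustration (not the actual nova filter code) of the handling a
multi-tenant value needs: the aggregate metadata value has to be split into
individual tenant IDs before the membership check, otherwise only a single
exact match can ever pass. The function name below is just for the sketch:

def passes_tenant_isolation(aggregate_metadata, tenant_id):
    raw = aggregate_metadata.get('filter_tenant_id')
    if not raw:
        # Aggregate is not tenant-restricted; any tenant may land here.
        return True
    allowed = {t.strip() for t in raw.split(',')}
    return tenant_id in allowed

# e.g. passes_tenant_isolation({'filter_tenant_id': 'tenant-a,tenant-b'}, 'tenant-b') -> True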

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1413087

Title:
  aggregate_multitenancy_isolation doesn't work with multiple tenants

Status in OpenStack Compute (Nova):
  New

Bug description:
  Even though the aggregate_multitenancy_isolation filter says it can filter on
  multiple tenants, it currently doesn't.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1413087/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1411489] [NEW] ValueError: Tables task_info, tasks have non utf8 collation, please make sure all tables are CHARSET=utf8

2015-01-15 Thread Sam Morrison
Public bug reported:

When upgrading to Juno and running DB migrations I get the following
error:


glance-manage db version
34

glance-manage db sync

2015-01-16 13:42:08.647 6746 CRITICAL glance [-] ValueError: Tables 
task_info,tasks have non utf8 collation, please make sure all tables are 
CHARSET=utf8
2015-01-16 13:42:08.647 6746 TRACE glance Traceback (most recent call last):
2015-01-16 13:42:08.647 6746 TRACE glance   File /usr/bin/glance-manage, line 
10, in <module>
2015-01-16 13:42:08.647 6746 TRACE glance sys.exit(main())
2015-01-16 13:42:08.647 6746 TRACE glance   File 
/usr/lib/python2.7/dist-packages/glance/cmd/manage.py, line 290, in main
2015-01-16 13:42:08.647 6746 TRACE glance return 
CONF.command.action_fn(*func_args, **func_kwargs)
2015-01-16 13:42:08.647 6746 TRACE glance   File 
/usr/lib/python2.7/dist-packages/glance/cmd/manage.py, line 115, in sync
2015-01-16 13:42:08.647 6746 TRACE glance version)
2015-01-16 13:42:08.647 6746 TRACE glance   File 
/usr/lib/python2.7/dist-packages/oslo/db/sqlalchemy/migration.py, line 77, in 
db_sync
2015-01-16 13:42:08.647 6746 TRACE glance _db_schema_sanity_check(engine)
2015-01-16 13:42:08.647 6746 TRACE glance   File 
/usr/lib/python2.7/dist-packages/oslo/db/sqlalchemy/migration.py, line 110, 
in _db_schema_sanity_check
2015-01-16 13:42:08.647 6746 TRACE glance ) % ','.join(table_names))
2015-01-16 13:42:08.647 6746 TRACE glance ValueError: Tables task_info,tasks 
have non utf8 collation, please make sure all tables are CHARSET=utf8
2015-01-16 13:42:08.647 6746 TRACE glance
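
A hedged sketch of one way to fix the two tables before re-running the
migration (table names come from the error above; the connection URL is a
placeholder for the glance database, and take a backup first):

from sqlalchemy import create_engine, text

engine = create_engine('mysql://glance:secret@localhost/glance')  # placeholder URL

with engine.begin() as conn:
    for table in ('task_info', 'tasks'):
        # Convert the table default charset and existing columns to utf8.
        conn.execute(text(
            'ALTER TABLE %s CONVERT TO CHARACTER SET utf8 '
            'COLLATE utf8_general_ci' % table))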

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1411489

Title:
  ValueError: Tables task_info,tasks have non utf8 collation, please
  make sure all tables are CHARSET=utf8

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  When upgrading to Juno and running DB migrations I get the following
  error:

  
  glance-manage db version
  34

  glance-manage db sync

  2015-01-16 13:42:08.647 6746 CRITICAL glance [-] ValueError: Tables 
task_info,tasks have non utf8 collation, please make sure all tables are 
CHARSET=utf8
  2015-01-16 13:42:08.647 6746 TRACE glance Traceback (most recent call last):
  2015-01-16 13:42:08.647 6746 TRACE glance   File /usr/bin/glance-manage, 
line 10, in <module>
  2015-01-16 13:42:08.647 6746 TRACE glance sys.exit(main())
  2015-01-16 13:42:08.647 6746 TRACE glance   File 
/usr/lib/python2.7/dist-packages/glance/cmd/manage.py, line 290, in main
  2015-01-16 13:42:08.647 6746 TRACE glance return 
CONF.command.action_fn(*func_args, **func_kwargs)
  2015-01-16 13:42:08.647 6746 TRACE glance   File 
/usr/lib/python2.7/dist-packages/glance/cmd/manage.py, line 115, in sync
  2015-01-16 13:42:08.647 6746 TRACE glance version)
  2015-01-16 13:42:08.647 6746 TRACE glance   File 
/usr/lib/python2.7/dist-packages/oslo/db/sqlalchemy/migration.py, line 77, in 
db_sync
  2015-01-16 13:42:08.647 6746 TRACE glance _db_schema_sanity_check(engine)
  2015-01-16 13:42:08.647 6746 TRACE glance   File 
/usr/lib/python2.7/dist-packages/oslo/db/sqlalchemy/migration.py, line 110, 
in _db_schema_sanity_check
  2015-01-16 13:42:08.647 6746 TRACE glance ) % ','.join(table_names))
  2015-01-16 13:42:08.647 6746 TRACE glance ValueError: Tables 
task_info,tasks have non utf8 collation, please make sure all tables are 
CHARSET=utf8
  2015-01-16 13:42:08.647 6746 TRACE glance

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1411489/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1408498] [NEW] Can't delete when control Juno and compute icehouse

2015-01-07 Thread Sam Morrison
Public bug reported:

When I have Juno control and Icehouse compute and icehouse network
deleting an instance doesn't work.

This is due to the Fixed IP object having an embedded version of the
network object that is too new for Icehouse. This causes and infinite
loop

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1408498

Title:
  Can't delete when control Juno and compute icehouse

Status in OpenStack Compute (Nova):
  New

Bug description:
  When I have Juno control and Icehouse compute and icehouse network
  deleting an instance doesn't work.

  This is due to the Fixed IP object having an embedded version of the
  network object that is too new for Icehouse. This causes and infinite
  loop

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1408498/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1408496] [NEW] Juno with icehouse compute service object error

2015-01-07 Thread Sam Morrison
Public bug reported:

When running Juno with Icehouse computes on starting nova-compute you
get a RuntimeError: maximum recursion depth exceeded while calling a
Python object due to it trying to backport the service object.

This is caused by the Juno conductor, when it sends back the service
object it includes an embedded compute node object at a version too new
for Icehouse.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1408496

Title:
  Juno with icehouse compute service object error

Status in OpenStack Compute (Nova):
  New

Bug description:
  When running Juno with Icehouse computes on starting nova-compute you
  get a RuntimeError: maximum recursion depth exceeded while calling a
  Python object due to it trying to backport the service object.

  This is caused by the Juno conductor, when it sends back the service
  object it includes an embedded compute node object at a version too
  new for Icehouse.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1408496/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1377012] [NEW] Can't delete an image in deleted status

2014-10-02 Thread Sam Morrison
Public bug reported:

I'm trying to delete an image that has a status of deleted

It's not actually deleted: an image-show still returns it, I can see it in
image_locations, and it still exists in the backend, which for us is swift.

glance image-show 17c6077c-99f0-41c7-9bd2-175216330990
+-------------------+--------------------------------------+
| Property          | Value                                |
+-------------------+--------------------------------------+
| checksum          | c9ef771d317595fd3654ca69a4be5f31     |
| container_format  | bare                                 |
| created_at        | 2014-05-22T07:58:23                  |
| deleted           | True                                 |
| deleted_at        | 2014-05-23T02:16:53                  |
| disk_format       | raw                                  |
| id                | 17c6077c-99f0-41c7-9bd2-175216330990 |
| is_public         | True                                 |
| min_disk          | 10                                   |
| min_ram           | 0                                    |
| name              | XX                                   |
| owner             | X                                    |
| protected         | False                                |
| size              | 10737418240                          |
| status            | deleted                              |
| updated_at        | 2014-05-23T02:16:53                  |
+-------------------+--------------------------------------+

glance image-delete 17c6077c-99f0-41c7-9bd2-175216330990
Request returned failure status.
404 Not Found
Image 17c6077c-99f0-41c7-9bd2-175216330990 not found.
(HTTP 404): Unable to delete image 17c6077c-99f0-41c7-9bd2-175216330990

** Affects: glance
 Importance: Undecided
 Status: New

** Description changed:

  I'm trying to delete an image that has a status of deleted
  
  It's not deleted as I can do an image-show and it returns plus I can see
  it in image_locations and it exists in the backend which for us is swift
- 
  
  glance image-show 17c6077c-99f0-41c7-9bd2-175216330990
  
+---+--+
  | Property  | Value   
 |
  
+---+--+
  | checksum  | 
c9ef771d317595fd3654ca69a4be5f31 |
  | container_format  | bare
 |
  | created_at| 2014-05-22T07:58:23 
 |
  | deleted   | True
 |
  | deleted_at| 2014-05-23T02:16:53 
 |
  | disk_format   | raw 
 |
  | id| 
17c6077c-99f0-41c7-9bd2-175216330990 |
  | is_public | True
 |
  | min_disk  | 10  
 |
  | min_ram   | 0   
 |
  | name  | XX|
  | owner | X |
  | protected | False   
 |
  | size  | 10737418240 
 |
  | status| deleted 
 |
  | updated_at| 2014-05-23T02:16:53 
 |
  
+---+--+
- sam@cloudboy:~/cloud-init$ glance image-delete 
17c6077c-99f0-41c7-9bd2-175216330990
+ 
+ glance image-delete 17c6077c-99f0-41c7-9bd2-175216330990
  Request returned failure status.
  404 Not Found
  Image 17c6077c-99f0-41c7-9bd2-175216330990 not found.
- (HTTP 404): Unable to delete image 17c6077c-99f0-41c7-9bd2-175216330990
+ (HTTP 404): Unable to delete image 17c6077c-99f0-41c7-9bd2-175216330990

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1377012

Title:
  Can't delete an image in deleted status

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug 

[Yahoo-eng-team] [Bug 1376096] [NEW] nova show policy doesn't work

2014-09-30 Thread Sam Morrison
Public bug reported:

I want to allow a user with a certain role to be able to do a nova show.

I set the policy.json file to allow this by setting

==
"context_is_admin":  "role:admin",
"admin_or_owner":  "is_admin:True or project_id:%(project_id)s",
"default": "rule:admin_or_owner",
"monitoring": "role:monitoring",
"monitoring_or_default":  "rule:default or role:monitoring",

...

   "compute:get": "rule:monitoring_or_default",

=

I still get the following:

ERROR: No server with a name or ID of 'XXX' exists.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1376096

Title:
  nova show policy doesn't work

Status in OpenStack Compute (Nova):
  New

Bug description:
  I want to allow a user with a certain role to be able to do a nova
  show.

  I set the policy.json file to allow this by setting

  ==
  "context_is_admin":  "role:admin",
  "admin_or_owner":  "is_admin:True or project_id:%(project_id)s",
  "default": "rule:admin_or_owner",
  "monitoring": "role:monitoring",
  "monitoring_or_default":  "rule:default or role:monitoring",

  ...

 "compute:get": "rule:monitoring_or_default",

  =

  I still get the following:

  ERROR: No server with a name or ID of 'XXX' exists.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1376096/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1376098] [NEW] nova list filtering only works when you have the Admin role

2014-09-30 Thread Sam Morrison
Public bug reported:

I'm trying to allow a non admin to be able to do a

nova list --all-tenants --tenant XX

I have set my policy.json file to allow this user who has a role called
monitoring to do this:

   "context_is_admin":  "role:admin",
"admin_or_owner":  "is_admin:True or project_id:%(project_id)s",
"default": "rule:admin_or_owner",
"monitoring": "role:monitoring",
"monitoring_or_default":  "rule:default or role:monitoring",

"compute:get_all": "rule:monitoring_or_default",
"compute:get_all_tenants": "rule:admin_api or rule:monitoring",

This allows them to do a nova list --all-tenants.

But if they filter by anything it just returns all and disregards the
filter

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1376098

Title:
  nova list filtering only works when you have the Admin role

Status in OpenStack Compute (Nova):
  New

Bug description:
  I'm trying to allow a non admin to be able to do a

  nova list --all-tenants --tenant XX

  I have set my policy.json file to allow this user who has a role
  called monitoring to do this:

 "context_is_admin":  "role:admin",
  "admin_or_owner":  "is_admin:True or project_id:%(project_id)s",
  "default": "rule:admin_or_owner",
  "monitoring": "role:monitoring",
  "monitoring_or_default":  "rule:default or role:monitoring",

  "compute:get_all": "rule:monitoring_or_default",
  "compute:get_all_tenants": "rule:admin_api or rule:monitoring",

  This allows them to do a nova list --all-tenants.

  But if they filter by anything it just returns all and disregards the
  filter

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1376098/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1372218] [NEW] servers.list, filtering on metadata doesn't work. unicode error

2014-09-21 Thread Sam Morrison
Public bug reported:

I'm trying to list servers by filtering on system_metadata or metadata.

I should be able to do something like (looking into the code)

nclient.servers.list(search_opts={'system_metadata': {some_value:
some_key}, 'all_tenants': 1})

But this dictionary gets turned into a unicode string. I get a 500 back
from nova with the below stack trace in nova-api.

The offending code is in exact_filter in the db api. It expects a
list of dicts or a single dict when the system_metadata or metadata
key is used in a search. It looks like this used to work, but now
something higher up is turning this into a string.
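
A hedged illustration (simplified, not the actual db api code) of the shape
exact_filter expects versus what it is actually handed:

def normalize_metadata_filter(filters):
    meta = filters.get('system_metadata')
    if isinstance(meta, dict):
        return [meta]                      # single dict -> list of dicts
    if isinstance(meta, list) and all(isinstance(m, dict) for m in meta):
        return meta
    # What actually arrives is the string repr of the dict passed in above,
    # which is what blows up further down, as in the traceback below.
    raise TypeError('metadata filters must be dicts, got %r' % type(meta))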


2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack Traceback (most recent 
call last):
2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack   File 
/opt/nova/nova/api/openstack/__init__.py, line 125, in __call__
2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack return 
req.get_response(self.application)
2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/request.py, line 1320, in send
2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack application, 
catch_exc_info=False)
2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/request.py, line 1284, in 
call_application
2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack return resp(environ, 
start_response)
2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/keystoneclient/middleware/auth_token.py, 
line 582, in __call__
2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack return self.app(env, 
start_response)
2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack return resp(environ, 
start_response)
2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack return resp(environ, 
start_response)
2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack   File 
/usr/local/lib/python2.7/dist-packages/routes/middleware.py, line 131, in 
__call__
2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack response = 
self.app(environ, start_response)
2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack return resp(environ, 
start_response)
2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 130, in __call__
2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack   File 
/usr/lib/python2.7/dist-packages/webob/dec.py, line 195, in call_func
2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack return 
self.func(req, *args, **kwargs)
2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack   File 
/opt/nova/nova/api/openstack/wsgi.py, line 917, in __call__
2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack content_type, body, 
accept)
2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack   File 
/opt/nova/nova/api/openstack/wsgi.py, line 983, in _process_stack
2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack   File 
/opt/nova/nova/api/openstack/wsgi.py, line 1070, in dispatch
2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack return 
method(req=request, **action_args)
2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack   File 
/opt/nova/nova/api/openstack/compute/servers.py, line 520, in detail
2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack servers = 
self._get_servers(req, is_detail=True)
2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack   File 
/opt/nova/nova/api/openstack/compute/servers.py, line 603, in _get_servers
2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack want_objects=True)
2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack   File 
/opt/nova/nova/compute/api.py, line 1887, in get_all
2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack 
expected_attrs=expected_attrs)
2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack   File 
/opt/nova/nova/compute/cells_api.py, line 224, in _get_instances_by_filters
2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack limit=limit, 
marker=marker, expected_attrs=fields)
2014-09-22 11:31:28.916 20196 TRACE nova.api.openstack   File 

[Yahoo-eng-team] [Bug 1362863] [NEW] reply queues fill up with unacked messages

2014-08-28 Thread Sam Morrison
Public bug reported:

Since upgrading to icehouse we consistently get reply_x queues
filling up with unacked messages. To fix this I have to restart the
service. This seems to happen when something is wrong for a short period
of time and it doesn't clean up after itself.

So far I've seen the issue with nova-api, nova-compute, nova-network,
nova-api-metadata, cinder-api but I'm sure there are others.

** Affects: cinder
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New

** Also affects: cinder
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362863

Title:
  reply queues fill up with unacked messages

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  Since upgrading to icehouse we consistently get reply_x queues
  filling up with unacked messages. To fix this I have to restart the
  service. This seems to happen when something is wrong for a short
  period of time and it doesn't clean up after itself.

  So far I've seen the issue with nova-api, nova-compute, nova-network,
  nova-api-metadata, cinder-api but I'm sure there are others.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1362863/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1326998] [NEW] UnboundLocalError: local variable 'domain' referenced before assignment

2014-06-05 Thread Sam Morrison
Public bug reported:

Get this sometimes when an instance fails to build


2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330] Traceback (most recent call last):
2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330]   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1311, in 
_build_instance
2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330] set_access_ip=set_access_ip)
2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330]   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 399, in 
decorated_function
2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330] return function(self, context, *args, 
**kwargs)
2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330]   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1723, in _spawn
2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330] LOG.exception(_('Instance failed to 
spawn'), instance=instance)
2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330]   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py, line 68, 
in __exit__
2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330] six.reraise(self.type_, self.value, 
self.tb)
2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330]   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1720, in _spawn
2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330] block_device_info)
2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330]   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 2253, in 
spawn
2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330] block_device_info)
2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330]   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 3651, in 
_create_domain_and_network
2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330] domain.destroy()
2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330] UnboundLocalError: local variable 
'domain' referenced before assignment
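
A hedged sketch of the failure pattern (not the actual libvirt driver code):
'domain' is only bound inside the try block, but the error-handling path
references it, so when the spawn fails before the domain is created the
handler raises UnboundLocalError and masks the real error:

def create_domain_and_network(create_domain, plug_vifs):
    try:
        plug_vifs()                  # may fail before 'domain' exists
        domain = create_domain()
    except Exception:
        domain.destroy()             # UnboundLocalError if create_domain never ran
        raise

# A safer shape initialises the name first and guards the cleanup:
def create_domain_and_network_fixed(create_domain, plug_vifs):
    domain = None
    try:
        plug_vifs()
        domain = create_domain()
    except Exception:
        if domain is not None:
            domain.destroy()
        raise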

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1326998

Title:
  UnboundLocalError: local variable 'domain' referenced before
  assignment

Status in OpenStack Compute (Nova):
  New

Bug description:
  Get this sometimes when an instance fails to build


  2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330] Traceback (most recent call last):
  2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330]   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1311, in 
_build_instance
  2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330] set_access_ip=set_access_ip)
  2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330]   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 399, in 
decorated_function
  2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330] return function(self, context, *args, 
**kwargs)
  2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330]   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1723, in _spawn
  2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330] LOG.exception(_('Instance failed to 
spawn'), instance=instance)
  2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330]   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py, line 68, 
in __exit__
  2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330] six.reraise(self.type_, self.value, 
self.tb)
  2014-06-06 09:43:39.055 19328 TRACE nova.compute.utils [instance: 
d46aad17-c75d-4734-abdb-9b768a40c330]   File 

[Yahoo-eng-team] [Bug 1316373] [NEW] Can't force delete an errored instance with no info cache

2014-05-05 Thread Sam Morrison
Public bug reported:

Sometimes when an instance fails to launch for some reason, deleting it
using nova delete or nova force-delete doesn't work and gives the
following error:

This is when using cells but I think it possibly isn't cells related.
Deleting is expecting an info cache no matter what. Ideally force delete
should ignore all errors and delete the instance.


2014-05-06 10:48:58.368 21210 ERROR nova.cells.messaging 
[req-a74c59d3-dc58-4318-87e8-0da15ca2a78d d1fa8867e42444cf8724e65fef1da549 
094ae1e2c08f4eddb444a9d9db71ab40] Error processing message locally: Info cache 
for instance bb07522b-d705-4fc8-8045-e12de2affe2e could not be found.
2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging Traceback (most recent 
call last):
2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
/opt/nova/nova/cells/messaging.py, line 200, in _process_locally
2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging resp_value = 
self.msg_runner._process_message_locally(self)
2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
/opt/nova/nova/cells/messaging.py, line 1532, in _process_message_locally
2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging return fn(message, 
**message.method_kwargs)
2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
/opt/nova/nova/cells/messaging.py, line 894, in terminate_instance
2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging 
self._call_compute_api_with_obj(message.ctxt, instance, 'delete')
2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
/opt/nova/nova/cells/messaging.py, line 855, in _call_compute_api_with_obj
2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging 
instance.refresh(ctxt)
2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
/opt/nova/nova/objects/base.py, line 151, in wrapper
2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging return fn(self, 
ctxt, *args, **kwargs)
2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
/opt/nova/nova/objects/instance.py, line 500, in refresh
2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging 
self.info_cache.refresh()
2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
/opt/nova/nova/objects/base.py, line 151, in wrapper
2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging return fn(self, 
ctxt, *args, **kwargs)
2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
/opt/nova/nova/objects/instance_info_cache.py, line 103, in refresh
2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging self.instance_uuid)
2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
/opt/nova/nova/objects/base.py, line 112, in wrapper
2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging result = fn(cls, 
context, *args, **kwargs)
2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
/opt/nova/nova/objects/instance_info_cache.py, line 70, in 
get_by_instance_uuid
2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging 
instance_uuid=instance_uuid)
2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging 
InstanceInfoCacheNotFound: Info cache for instance 
bb07522b-d705-4fc8-8045-e12de2affe2e could not be found.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1316373

Title:
  Can't force delete an errored instance with no info cache

Status in OpenStack Compute (Nova):
  New

Bug description:
  Sometimes when an instance fails to launch for some reason, deleting it
  using nova delete or nova force-delete doesn't work and gives the
  following error:

  This is when using cells but I think it possibly isn't cells related.
  Deleting is expecting an info cache no matter what. Ideally force
  delete should ignore all errors and delete the instance.

  
  2014-05-06 10:48:58.368 21210 ERROR nova.cells.messaging 
[req-a74c59d3-dc58-4318-87e8-0da15ca2a78d d1fa8867e42444cf8724e65fef1da549 
094ae1e2c08f4eddb444a9d9db71ab40] Error processing message locally: Info cache 
for instance bb07522b-d705-4fc8-8045-e12de2affe2e could not be found.
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging Traceback (most 
recent call last):
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
/opt/nova/nova/cells/messaging.py, line 200, in _process_locally
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging resp_value = 
self.msg_runner._process_message_locally(self)
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
/opt/nova/nova/cells/messaging.py, line 1532, in _process_message_locally
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging return 
fn(message, **message.method_kwargs)
  2014-05-06 10:48:58.368 21210 TRACE nova.cells.messaging   File 
/opt/nova/nova/cells/messaging.py, line 894, in 

[Yahoo-eng-team] [Bug 1314490] [NEW] EC2 and metadata API get instance AZ is using duplicated methods

2014-04-30 Thread Sam Morrison
Public bug reported:

Can clean up the code path here to make it act like the rest of nova

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1314490

Title:
  EC2 and metadata API get instance AZ is using duplicated methods

Status in OpenStack Compute (Nova):
  New

Bug description:
  Can clean up the code path here to make it act like the rest of nova

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1314490/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1312468] [NEW] Cells: Broadcast call messages fail if a child cell goes down

2014-04-24 Thread Sam Morrison
Public bug reported:

If a child cell stops functioning we still include it when we send down 
broadcast messages that require a response.
This causes things like listing hosts, hypervisor-stats etc. to fail if one of 
your compute cells is down.

We know if the cell is mute so we shouldn't send messages to it and
expect replies while it's in this state.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1312468

Title:
  Cells: Broadcast call messages fail if a child cell goes down

Status in OpenStack Compute (Nova):
  New

Bug description:
  If a child cell stops functioning we still include it when we send down 
broadcast messages that require a response.
  This causes things like listing hosts, hypervisor-stats etc. to fail if one 
of your compute cells is down.

  We know if the cell is mute so we shouldn't send messages to it and
  expect replies while it's in this state.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1312468/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1310931] [NEW] nova migration-list fails with unicode error

2014-04-22 Thread Sam Morrison
Public bug reported:

Doing a nova migration-list I get the following error in the nova-api
logs:

ERROR Exception handling resource: 'unicode' object does not support item 
deletion
TRACE nova.api.openstack.wsgi Traceback (most recent call last):
TRACE nova.api.openstack.wsgi   File /opt/nova/nova/api/openstack/wsgi.py, 
line 983, in _process_stack
TRACE nova.api.openstack.wsgi action_result = self.dispatch(meth, request, 
action_args)
TRACE nova.api.openstack.wsgi   File /opt/nova/nova/api/openstack/wsgi.py, 
line 1070, in dispatch
TRACE nova.api.openstack.wsgi return method(req=request, **action_args)
TRACE nova.api.openstack.wsgi   File 
/opt/nova/nova/api/openstack/compute/contrib/migrations.py, line 74, in index
TRACE nova.api.openstack.wsgi return {'migrations': output(migrations)}
TRACE nova.api.openstack.wsgi   File 
/opt/nova/nova/api/openstack/compute/contrib/migrations.py, line 37, in output
TRACE nova.api.openstack.wsgi del obj['deleted']
TRACE nova.api.openstack.wsgi TypeError: 'unicode' object does not support item 
deletion
Returning 400 to user: The server could not comply with the request since it is 
either malformed or otherwise incorrect. __call__ 
/opt/nova/nova/api/openstack/wsgi.py:1215
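
A hedged illustration of why this turns into a 400: del only works on a
mapping, so output() in contrib/migrations.py is evidently iterating over
something that yields strings rather than migration dicts:

migration = {'id': 1, 'deleted': 0, 'status': 'finished'}
del migration['deleted']             # fine on a dict

key = u'deleted'
try:
    del key['deleted']               # same TypeError as in the trace above
except TypeError as exc:
    print(exc)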

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

  Doing a nova migration-list I get the following error in the nova-api
  logs:
  
- 2014-04-22 16:15:23.431 9325 ERROR nova.api.openstack.wsgi 
[req-7e992c9e-d5c1-4d0f-a536-98950af4c2f0 c0645ff94b864d3d84c438d9855f9cea 
9427903ca1544f0795ba4117d55ed9b2] Exception handling resource: 'unicode' object 
does not support item deletion
- 2014-04-22 16:15:23.431 9325 TRACE nova.api.openstack.wsgi Traceback (most 
recent call last):
- 2014-04-22 16:15:23.431 9325 TRACE nova.api.openstack.wsgi   File 
/opt/nova/nova/api/openstack/wsgi.py, line 983, in _process_stack
- 2014-04-22 16:15:23.431 9325 TRACE nova.api.openstack.wsgi action_result 
= self.dispatch(meth, request, action_args)
- 2014-04-22 16:15:23.431 9325 TRACE nova.api.openstack.wsgi   File 
/opt/nova/nova/api/openstack/wsgi.py, line 1070, in dispatch
- 2014-04-22 16:15:23.431 9325 TRACE nova.api.openstack.wsgi return 
method(req=request, **action_args)
- 2014-04-22 16:15:23.431 9325 TRACE nova.api.openstack.wsgi   File 
/opt/nova/nova/api/openstack/compute/contrib/migrations.py, line 74, in index
- 2014-04-22 16:15:23.431 9325 TRACE nova.api.openstack.wsgi return 
{'migrations': output(migrations)}
- 2014-04-22 16:15:23.431 9325 TRACE nova.api.openstack.wsgi   File 
/opt/nova/nova/api/openstack/compute/contrib/migrations.py, line 37, in output
- 2014-04-22 16:15:23.431 9325 TRACE nova.api.openstack.wsgi del 
obj['deleted']
- 2014-04-22 16:15:23.431 9325 TRACE nova.api.openstack.wsgi TypeError: 
'unicode' object does not support item deletion
- 2014-04-22 16:15:23.431 9325 TRACE nova.api.openstack.wsgi 
- 2014-04-22 16:15:23.434 9325 DEBUG nova.api.openstack.wsgi 
[req-7e992c9e-d5c1-4d0f-a536-98950af4c2f0 c0645ff94b864d3d84c438d9855f9cea 
9427903ca1544f0795ba4117d55ed9b2] Returning 400 to user: The server could not 
comply with the request since it is either malformed or otherwise incorrect. 
__call__ /opt/nova/nova/api/openstack/wsgi.py:1215
+ ERROR Exception handling resource: 'unicode' object does not support item 
deletion
+ TRACE nova.api.openstack.wsgi Traceback (most recent call last):
+ TRACE nova.api.openstack.wsgi   File /opt/nova/nova/api/openstack/wsgi.py, 
line 983, in _process_stack
+ TRACE nova.api.openstack.wsgi action_result = self.dispatch(meth, 
request, action_args)
+ TRACE nova.api.openstack.wsgi   File /opt/nova/nova/api/openstack/wsgi.py, 
line 1070, in dispatch
+ TRACE nova.api.openstack.wsgi return method(req=request, **action_args)
+ TRACE nova.api.openstack.wsgi   File 
/opt/nova/nova/api/openstack/compute/contrib/migrations.py, line 74, in index
+ TRACE nova.api.openstack.wsgi return {'migrations': output(migrations)}
+ TRACE nova.api.openstack.wsgi   File 
/opt/nova/nova/api/openstack/compute/contrib/migrations.py, line 37, in output
+ TRACE nova.api.openstack.wsgi del obj['deleted']
+ TRACE nova.api.openstack.wsgi TypeError: 'unicode' object does not support 
item deletion
+ Returning 400 to user: The server could not comply with the request since it 
is either malformed or otherwise incorrect. __call__ 
/opt/nova/nova/api/openstack/wsgi.py:1215

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1310931

Title:
  nova migration-list fails with unicode error

Status in OpenStack Compute (Nova):
  New

Bug description:
  Doing a nova migration-list I get the following error in the nova-
  api logs:

  ERROR Exception handling resource: 'unicode' object does not support item 
deletion
  TRACE nova.api.openstack.wsgi Traceback (most recent call last):
  TRACE 

[Yahoo-eng-team] [Bug 1308805] [NEW] object backport doesn't work

2014-04-16 Thread Sam Morrison
Public bug reported:

The code to backport an object doesn't work at all. This code is only
called in one place.

In nova/objects/base.py in _process_object

If the version is incompatible it tries to backport it:

def _process_object(self, context, objprim):
try:
objinst = NovaObject.obj_from_primitive(objprim, context=context)
except exception.IncompatibleObjectVersion as e:
objinst = self.conductor.object_backport(context, objprim,
 e.kwargs['supported'])
return objinst


You'll note here the object_backport is being passed in a primitive and a 
supported version and expecting back an object.

However the object_backport method does:

def object_backport(self, context, objinst, target_version):
return objinst.obj_to_primitive(target_version=target_version)


You'll see here it is expecting to be passed in an object and will return a 
primitive. Exactly the opposite of the only code in nova that calls this code.
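
A hedged sketch (not the actual nova code) of one way the two could be
reconciled so the conductor side matches what _process_object asks for --
take the primitive, downgrade it, and hand an object back. It assumes this
lives where NovaObject (from the snippet above) is already defined:

def object_backport(self, context, objprim, target_version):
    # The conductor runs the newer code, so it can deserialize the too-new
    # primitive, then downgrade it to something the caller understands.
    objinst = NovaObject.obj_from_primitive(objprim, context=context)
    downgraded = objinst.obj_to_primitive(target_version=target_version)
    return NovaObject.obj_from_primitive(downgraded, context=context)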

This means we can't have Icehouse and Havana working together when
trying to upgrade.

Any ideas?

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1308805

Title:
  object backport doesn't work

Status in OpenStack Compute (Nova):
  New

Bug description:
  The code to backport an object doesn't work at all. This code is only
  called in one place.

  In nova/objects/base.py in _process_object

  If the version is incompatible it tries to backport it:

  def _process_object(self, context, objprim):
  try:
  objinst = NovaObject.obj_from_primitive(objprim, context=context)
  except exception.IncompatibleObjectVersion as e:
  objinst = self.conductor.object_backport(context, objprim,
   e.kwargs['supported'])
  return objinst

  
  You'll note here the object_backport is being passed in a primitive and a 
supported version and expecting back an object.

  However the object_backport method does:

  def object_backport(self, context, objinst, target_version):
  return objinst.obj_to_primitive(target_version=target_version)

  
  You'll see here it is expecting to be passed in an object and will return a 
primitive. Exactly the opposite of the only code in nova that calls this code.

  This means we can't have Icehouse and Havana working together
  when trying to upgrade.

  Any ideas?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1308805/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1308811] [NEW] nova.objects.base imports conductor wrong

2014-04-16 Thread Sam Morrison
Public bug reported:

in nova.objects.base it imports conductor

from nova.conductor import api as conductor_api
self._conductor = conductor_api.API()

This bypasses the logic to determine whether to use the conductor RPC service
or not.

Should do

from nova import conductor
self._conductor = conductor.API()

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1308811

Title:
  nova.objects.base imports conductor wrong

Status in OpenStack Compute (Nova):
  New

Bug description:
  in nova.objects.base it imports conductor

  from nova.conductor import api as conductor_api
  self._conductor = conductor_api.API()

  This bypasses the logic to determine whether to use the conductor RPC
  service or not.

  Should do

  from nova import conductor
  self._conductor = conductor.API()

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1308811/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1308846] [NEW] attach_volume doesn't work in cells when api is icehouse and compute is havana

2014-04-16 Thread Sam Morrison
Public bug reported:

This affects Havana not Icehouse

The method signature of attach_volume changed from Havana -> Icehouse

-def attach_volume(self, context, instance, volume_id, device=None):
+def attach_volume(self, context, instance, volume_id, device=None,
+  disk_bus=None, device_type=None):

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1308846

Title:
  attach_volume doesn't work in cells when api is icehouse and compute
  is havana

Status in OpenStack Compute (Nova):
  New

Bug description:
  This affects Havana not Icehouse

  The method signature of attach_volume changed from Havana -> Icehouse

  -def attach_volume(self, context, instance, volume_id, device=None):
  +def attach_volume(self, context, instance, volume_id, device=None,
  +  disk_bus=None, device_type=None):

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1308846/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1302351] [NEW] v2 API can't create image - Attribute 'file' is reserved.

2014-04-04 Thread Sam Morrison
Public bug reported:

Trying to create an image with V2 API and get the following error:

glance --os-image-api-version 2 --os-image-url http://glance-icehouse:9292/ 
image-create --container-format bare --disk-format raw --name trusty2 --file 
trusty-server-cloudimg-amd64-disk1.img 
Request returned failure status.
403 Forbidden
Attribute 'file' is reserved.
(HTTP 403)

It works fine if I do  --os-image-api-version 1

** Affects: glance
 Importance: Undecided
 Status: New

** Summary changed:

- v2 API can't create image
+ v2 API can't create image - Attribute 'file' is reserved.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1302351

Title:
  v2 API can't create image - Attribute 'file' is reserved.

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Trying to create an image with V2 API and get the following error:

  glance --os-image-api-version 2 --os-image-url http://glance-icehouse:9292/ 
image-create --container-format bare --disk-format raw --name trusty2 --file 
trusty-server-cloudimg-amd64-disk1.img 
  Request returned failure status.
  403 Forbidden
  Attribute 'file' is reserved.
  (HTTP 403)

  It works fine if I do  --os-image-api-version 1

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1302351/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1302345] [NEW] glance registry v2 doesn't work

2014-04-03 Thread Sam Morrison
Public bug reported:

Glance v2 registry API still doesn't work in Icehouse. This time
thankfully the fix is pretty simple

Basically this is because the configure_registry_client() method in 
registry/client/v2/api.py isn't called anywhere in the code.
Adding this call to the get_registry_client method fixes everything.
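
A hedged sketch of the idea (not the exact Icehouse code; _get_client below
stands in for however the module actually builds the client): make
get_registry_client trigger configure_registry_client(), which reads
registry_host/registry_port from config, before handing a client out.
Otherwise the host stays None, which matches the http://None:9191/rpc URL
in the log below:

_CONFIGURED = False

def get_registry_client(context):
    global _CONFIGURED
    if not _CONFIGURED:
        configure_registry_client()   # populates the module-level host/port
        _CONFIGURED = True
    return _get_client(context)
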

This is a pretty critical bug so I'm hoping this can be fixed for when
Icehouse is released?


2014-04-04 16:33:25.536 14975 DEBUG routes.middleware 
[7b235057-41cb-46a3-b397-fef7b250bb58 c0645ff94b864d3d84c438d9855f9cea 
9427903ca1544f0795ba4117d55ed9b2 - - -] Matched GET /images __call__ 
/usr/lib/python2.7/dist-packages/routes/middleware.py:100
2014-04-04 16:33:25.537 14975 DEBUG routes.middleware 
[7b235057-41cb-46a3-b397-fef7b250bb58 c0645ff94b864d3d84c438d9855f9cea 
9427903ca1544f0795ba4117d55ed9b2 - - -] Route path: '/images', defaults: 
{'action': u'index', 'controller': glance.common.wsgi.Resource object at 
0x7f76e4d6d390} __call__ 
/usr/lib/python2.7/dist-packages/routes/middleware.py:102
2014-04-04 16:33:25.539 14975 DEBUG routes.middleware 
[7b235057-41cb-46a3-b397-fef7b250bb58 c0645ff94b864d3d84c438d9855f9cea 
9427903ca1544f0795ba4117d55ed9b2 - - -] Match dict: {'action': u'index', 
'controller': glance.common.wsgi.Resource object at 0x7f76e4d6d390} __call__ 
/usr/lib/python2.7/dist-packages/routes/middleware.py:103
2014-04-04 16:33:25.541 14975 DEBUG glance.common.client 
[7b235057-41cb-46a3-b397-fef7b250bb58 c0645ff94b864d3d84c438d9855f9cea 
9427903ca1544f0795ba4117d55ed9b2 - - -] Constructed URL: http://None:9191/rpc 
_construct_url /usr/lib/python2.7/dist-packages/glance/common/client.py:411
2014-04-04 16:33:25.561 14975 INFO glance.wsgi.server 
[7b235057-41cb-46a3-b397-fef7b250bb58 c0645ff94b864d3d84c438d9855f9cea 
9427903ca1544f0795ba4117d55ed9b2 - - -] Traceback (most recent call last):
  File /usr/lib/python2.7/dist-packages/eventlet/wsgi.py, line 384, in 
handle_one_response
result = self.application(self.environ, start_response)
  File /usr/lib/python2.7/dist-packages/webob/dec.py, line 130, in __call__
resp = self.call_func(req, *args, **self.kwargs)
  File /usr/lib/python2.7/dist-packages/webob/dec.py, line 195, in call_func
return self.func(req, *args, **kwargs)
  File /usr/lib/python2.7/dist-packages/glance/common/wsgi.py, line 378, in 
__call__
response = req.get_response(self.application)
  File /usr/lib/python2.7/dist-packages/webob/request.py, line 1320, in send
application, catch_exc_info=False)
  File /usr/lib/python2.7/dist-packages/webob/request.py, line 1284, in 
call_application
app_iter = application(self.environ, start_response)
  File 
/usr/lib/python2.7/dist-packages/keystoneclient/middleware/auth_token.py, 
line 582, in __call__
return self.app(env, start_response)
  File /usr/lib/python2.7/dist-packages/webob/dec.py, line 130, in __call__
resp = self.call_func(req, *args, **self.kwargs)
  File /usr/lib/python2.7/dist-packages/webob/dec.py, line 195, in call_func
return self.func(req, *args, **kwargs)
  File /usr/lib/python2.7/dist-packages/glance/common/wsgi.py, line 378, in 
__call__
response = req.get_response(self.application)
  File /usr/lib/python2.7/dist-packages/webob/request.py, line 1320, in send
application, catch_exc_info=False)
  File /usr/lib/python2.7/dist-packages/webob/request.py, line 1284, in 
call_application
app_iter = application(self.environ, start_response)
  File /usr/lib/python2.7/dist-packages/paste/urlmap.py, line 206, in __call__
return app(environ, start_response)
  File /usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
return resp(environ, start_response)
  File /usr/lib/python2.7/dist-packages/routes/middleware.py, line 131, in 
__call__
response = self.app(environ, start_response)
  File /usr/lib/python2.7/dist-packages/webob/dec.py, line 144, in __call__
return resp(environ, start_response)
  File /usr/lib/python2.7/dist-packages/webob/dec.py, line 130, in __call__
resp = self.call_func(req, *args, **self.kwargs)
  File /usr/lib/python2.7/dist-packages/webob/dec.py, line 195, in call_func
return self.func(req, *args, **kwargs)
  File /usr/lib/python2.7/dist-packages/glance/common/wsgi.py, line 644, in 
__call__
request, **action_args)
  File /usr/lib/python2.7/dist-packages/glance/common/wsgi.py, line 668, in 
dispatch
return method(*args, **kwargs)
  File /usr/lib/python2.7/dist-packages/glance/api/v2/images.py, line 91, in 
index
member_status=member_status)
  File /usr/lib/python2.7/dist-packages/glance/api/authorization.py, line 99, 
in list
images = self.image_repo.list(*args, **kwargs)
  File /usr/lib/python2.7/dist-packages/glance/domain/proxy.py, line 89, in 
list
items = self.base.list(*args, **kwargs)
  File /usr/lib/python2.7/dist-packages/glance/api/policy.py, line 183, in 
list
return super(ImageRepoProxy, self).list(*args, **kwargs)
  File 

[Yahoo-eng-team] [Bug 1297635] [NEW] Race condition when deleting iscsi devices

2014-03-26 Thread Sam Morrison
Public bug reported:

If you have two instances on the same compute node that each have a
volume attached (using iscsi backend)

If you delete both of them, triggering a disconnect_volume call, the following
happens:

First request will delete the device
echo 1 > /sys/block/sdr/device/delete

The second request triggers an iscsi_rescan which then rediscovers the
device.

The volume is then deleted from the backend cinder.

Now you have a device which is pointing back to a deleted volume.

This is using a NetApp device where all the devices are in the same IQN
and using multipath on stable/havana
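
A hedged sketch of one mitigation (not the actual nova code): serialise the
connect/disconnect paths on a per-host lock so one instance's rescan can't
resurrect a device another instance is mid-way through deleting. nova would
use its own synchronized decorator; plain Python is shown for illustration:

import threading

_volume_ops_lock = threading.Lock()

def connect_volume(connection_info):
    with _volume_ops_lock:
        pass   # iscsiadm login + rescan would happen here

def disconnect_volume(connection_info, device):
    with _volume_ops_lock:
        pass   # flush the multipath map, delete the device, log out if unused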

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1297635

Title:
  Race condition when deleting iscsi devices

Status in OpenStack Compute (Nova):
  New

Bug description:
  If you have two instances on the same compute node that each have a
  volume attached (using iscsi backend)

  If you delete both of them, triggering a disconnect_volume call, the
  following happens:

  First request will delete the device
  echo 1 > /sys/block/sdr/device/delete

  The second request triggers an iscsi_rescan which then rediscovers the
  device.

  The volume is then deleted from the backend cinder.

  Now you have a device which is pointing back to a deleted volume.

  This is using a NetApp device where all the devices are in the same
  IQN and using multipath on stable/havana

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1297635/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1277316] Re: Disconnecting a volume with multipath generates excessive multipath calls

2014-02-08 Thread Sam Morrison
This is due to inefficient code in nova.virt.libvirt.volume

The massive number of multipath calls comes from the code that figures out which
other IQNs are attached to the compute node.
There is basically a nested for loop; this code can be changed to make it more
efficient.
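
A hedged sketch (not the actual nova code) of the difference between the
nested-loop behaviour described above and the obvious improvement of calling
multipath once and reusing its output:

import subprocess

def multipath_ll(device=None):
    cmd = ['multipath', '-ll'] + ([device] if device else [])
    return subprocess.check_output(cmd).decode()

def other_devices_slow(volumes, paths_per_volume):
    # One external call per (volume, path) pair -- hundreds of calls, as reported.
    return [multipath_ll(dev) for vol in volumes for dev in paths_per_volume[vol]]

def other_devices_fast(volumes, paths_per_volume):
    # A single external call; parse the combined output instead.
    output = multipath_ll()
    return [dev for vol in volumes
            for dev in paths_per_volume[vol] if dev in output]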


** Project changed: cinder => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1277316

Title:
  Disconnecting a volume with multipath generates excessive multipath
  calls

Status in OpenStack Compute (Nova):
  New

Bug description:
  I have a compute node with 20 volumes attached using iscsi and multipath.
  Each multipath device has 4 iscsi devices.

  When I disconnect a volume it generates 779 multipath -ll calls.

  
  iscsiadm -m node --rescan
  iscsiadm -m session --rescan
  multipath -r

  multipath -ll /dev/sdch
  multipath -ll /dev/sdcg
  multipath -ll /dev/sdcf
  multipath -ll /dev/sdce
  multipath -ll /dev/sdcd
  multipath -ll /dev/sdcc
  multipath -ll /dev/sdcb
  multipath -ll /dev/sdca
  multipath -ll /dev/sdbz
  multipath -ll /dev/sdby
  multipath -ll /dev/sdbx
  multipath -ll /dev/sdbw
  multipath -ll /dev/sdbv
  multipath -ll /dev/sdbu
  multipath -ll /dev/sdbt
  multipath -ll /dev/sdbs
  multipath -ll /dev/sdbr
  multipath -ll /dev/sdbq
  multipath -ll /dev/sdbp
  multipath -ll /dev/sdbo
  multipath -ll /dev/sdbn
  multipath -ll /dev/sdbm
  multipath -ll /dev/sdbl
  multipath -ll /dev/sdbk
  multipath -ll /dev/sdbj
  multipath -ll /dev/sdbi
  multipath -ll /dev/sdbh
  multipath -ll /dev/sdbg
  multipath -ll /dev/sdbf
  multipath -ll /dev/sdbe
  multipath -ll /dev/sdbd
  multipath -ll /dev/sdbc
  multipath -ll /dev/sdbb
  multipath -ll /dev/sdba
  
  .. And so on for 779 times
  cp /dev/stdin /sys/block/sdcd/device/delete
  cp /dev/stdin /sys/block/sdcc/device/delete
  cp /dev/stdin /sys/block/sdcb/device/delete
  cp /dev/stdin /sys/block/sdca/device/delete
  multipath -r

  
  

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1277316/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1277316] [NEW] Disconnecting a volume with multipath generates excessive multipath calls

2014-02-06 Thread Sam Morrison
Public bug reported:

I have a compute node with 20 volumes attached using iscsi and multipath.
Each multipath device has 4 iscsi devices.

When I disconnect a volume it generates 779 multipath -ll calls.


iscsiadm -m node --rescan
iscsiadm -m session --rescan
multipath -r

multipath -ll /dev/sdch
multipath -ll /dev/sdcg
multipath -ll /dev/sdcf
multipath -ll /dev/sdce
multipath -ll /dev/sdcd
multipath -ll /dev/sdcc
multipath -ll /dev/sdcb
multipath -ll /dev/sdca
multipath -ll /dev/sdbz
multipath -ll /dev/sdby
multipath -ll /dev/sdbx
multipath -ll /dev/sdbw
multipath -ll /dev/sdbv
multipath -ll /dev/sdbu
multipath -ll /dev/sdbt
multipath -ll /dev/sdbs
multipath -ll /dev/sdbr
multipath -ll /dev/sdbq
multipath -ll /dev/sdbp
multipath -ll /dev/sdbo
multipath -ll /dev/sdbn
multipath -ll /dev/sdbm
multipath -ll /dev/sdbl
multipath -ll /dev/sdbk
multipath -ll /dev/sdbj
multipath -ll /dev/sdbi
multipath -ll /dev/sdbh
multipath -ll /dev/sdbg
multipath -ll /dev/sdbf
multipath -ll /dev/sdbe
multipath -ll /dev/sdbd
multipath -ll /dev/sdbc
multipath -ll /dev/sdbb
multipath -ll /dev/sdba

.. And so on for 779 times
cp /dev/stdin /sys/block/sdcd/device/delete
cp /dev/stdin /sys/block/sdcc/device/delete
cp /dev/stdin /sys/block/sdcb/device/delete
cp /dev/stdin /sys/block/sdca/device/delete
multipath -r




** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1277316

Title:
  Disconnecting a volume with multipath generates excessive multipath
  calls

Status in OpenStack Compute (Nova):
  New

Bug description:
  I have a compute node with 20 volumes attached using iscsi and multipath.
  Each multipath device has 4 iscsi devices.

  When I disconnect a volume it generates 779 multipath -ll calls.

  
  iscsiadm -m node --rescan
  iscsiadm -m session --rescan
  multipath -r

  multipath -ll /dev/sdch
  multipath -ll /dev/sdcg
  multipath -ll /dev/sdcf
  multipath -ll /dev/sdce
  multipath -ll /dev/sdcd
  multipath -ll /dev/sdcc
  multipath -ll /dev/sdcb
  multipath -ll /dev/sdca
  multipath -ll /dev/sdbz
  multipath -ll /dev/sdby
  multipath -ll /dev/sdbx
  multipath -ll /dev/sdbw
  multipath -ll /dev/sdbv
  multipath -ll /dev/sdbu
  multipath -ll /dev/sdbt
  multipath -ll /dev/sdbs
  multipath -ll /dev/sdbr
  multipath -ll /dev/sdbq
  multipath -ll /dev/sdbp
  multipath -ll /dev/sdbo
  multipath -ll /dev/sdbn
  multipath -ll /dev/sdbm
  multipath -ll /dev/sdbl
  multipath -ll /dev/sdbk
  multipath -ll /dev/sdbj
  multipath -ll /dev/sdbi
  multipath -ll /dev/sdbh
  multipath -ll /dev/sdbg
  multipath -ll /dev/sdbf
  multipath -ll /dev/sdbe
  multipath -ll /dev/sdbd
  multipath -ll /dev/sdbc
  multipath -ll /dev/sdbb
  multipath -ll /dev/sdba
  
  .. And so on for 779 times
  cp /dev/stdin /sys/block/sdcd/device/delete
  cp /dev/stdin /sys/block/sdcc/device/delete
  cp /dev/stdin /sys/block/sdcb/device/delete
  cp /dev/stdin /sys/block/sdca/device/delete
  multipath -r

  
  

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1277316/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1269990] [NEW] LXC volume issues

2014-01-16 Thread Sam Morrison
Public bug reported:

A few issues with LXC and volumes, all relating to the same code.

* Hard rebooting an instance will make attached volumes disappear from
the libvirt XML

* Booting an instance specifying an extra volume (passing in
block_device_mappings on server.create) will result in the volume not
being in the libvirt XML

This is due to 2 places in the code where LXC is treated differently (see the sketch below):

1. nova.virt.libvirt.blockinfo  get_disk_mapping
2. nova.virt.libvirt.driver get_guest_storage_config
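
A rough, hypothetical sketch of the kind of special-casing meant above (not
the actual nova code): if the mapping code returns early for LXC, volumes
passed in via block_device_info never reach the generated libvirt XML, which
matches both symptoms.

    def get_disk_mapping(virt_type, block_device_info):
        # hypothetical simplification, loosely modelled on the two call
        # sites named above
        mapping = {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk'}}
        if virt_type == 'lxc':
            # early return: volumes in block_device_info never make it
            # into the mapping, so hard reboot / boot-with-volume rebuilds
            # the libvirt XML without them
            return mapping
        for vol in block_device_info.get('block_device_mapping', []):
            dev = vol['mount_device'].rpartition('/')[2]
            mapping[dev] = {'bus': 'virtio', 'dev': dev, 'type': 'disk'}
        return mapping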

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1269990

Title:
  LXC volume issues

Status in OpenStack Compute (Nova):
  New

Bug description:
  A few issues with LXC and volumes, all relating to the same code.

  * Hard rebooting an instance will make attached volumes disappear from
  the libvirt XML

  * Booting an instance specifying an extra volume (passing in
  block_device_mappings on server.create) will result in the volume not
  being in the libvirt XML

  This is due to 2 places in the code where LXC is treated differently

  1. nova.virt.libvirt.blockinfo  get_disk_mapping
  2. nova.virt.libvirt.driver get_guest_storage_config

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1269990/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1257532] [NEW] Shelving fails with KeyError: 'metadata'

2013-12-03 Thread Sam Morrison
Public bug reported:

When I try to shelve an instance I get the following error on the
compute node:

2013-12-04 10:39:59.716 18800 ERROR nova.openstack.common.rpc.amqp [req-d87825e7-9c2f-4735-94e2-4c470ee0edab d9646718471b46aeb5fd94c702336ca9 0bdf024c921848c4b74d9e69af9edf08] Exception during message handling
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp Traceback (most recent call last):
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp   File /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py, line 461, in _process_data
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp **args)
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp   File /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py, line 172, in dispatch
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp result = getattr(proxyobj, method)(ctxt, **kwargs)
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp   File /usr/lib/python2.7/dist-packages/nova/exception.py, line 90, in wrapped
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp payload)
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp   File /usr/lib/python2.7/dist-packages/nova/exception.py, line 73, in wrapped
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp return f(self, context, *args, **kw)
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp   File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 243, in decorated_function
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp pass
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp   File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 229, in decorated_function
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp return function(self, context, *args, **kwargs)
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp   File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 294, in decorated_function
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp function(self, context, *args, **kwargs)
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp   File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 271, in decorated_function
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp e, sys.exc_info())
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp   File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 258, in decorated_function
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp return function(self, context, *args, **kwargs)
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp   File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 3336, in shelve_instance
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp current_period=True)
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp   File /usr/lib/python2.7/dist-packages/nova/conductor/api.py, line 292, in notify_usage_exists
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp system_metadata, extra_usage_info)
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp   File /usr/lib/python2.7/dist-packages/nova/utils.py, line 1094, in wrapper
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp return func(*args, **kwargs)
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp   File /usr/lib/python2.7/dist-packages/nova/conductor/manager.py, line 486, in notify_usage_exists
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp system_metadata, extra_usage_info)
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp   File /usr/lib/python2.7/dist-packages/nova/compute/utils.py, line 295, in notify_usage_exists
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp system_metadata=system_metadata, extra_usage_info=extra_info)
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp   File /usr/lib/python2.7/dist-packages/nova/compute/utils.py, line 316, in notify_about_instance_usage
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp network_info, system_metadata, **extra_usage_info)
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp   File /usr/lib/python2.7/dist-packages/nova/notifications.py, line 420, in info_from_instance
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp instance_info['metadata'] = utils.instance_meta(instance_ref)
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp   File /usr/lib/python2.7/dist-packages/nova/utils.py, line 1044, in instance_meta
2013-12-04 10:39:59.716 18800 TRACE nova.openstack.common.rpc.amqp if

[Yahoo-eng-team] [Bug 1257545] [NEW] Unshelving an offloaded instance doesn't set host, hypervisor_name

2013-12-03 Thread Sam Morrison
Public bug reported:

When you unshelve an instance that has been offloaded it doesn't set:

OS-EXT-SRV-ATTR:host
OS-EXT-SRV-ATTR:hypervisor_hostname
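
A minimal, hypothetical sketch of the kind of update the report implies is
missing (not the actual nova.compute.manager code): once the unshelved
instance lands on a compute node, the host and node need to be written back
so these extended attributes are populated again.

    def record_unshelve_target(instance, host, nodename):
        # hypothetical helper: persist where the instance was unshelved so
        # the API can report the extended server attributes again
        instance['host'] = host        # backs OS-EXT-SRV-ATTR:host
        instance['node'] = nodename    # backs OS-EXT-SRV-ATTR:hypervisor_hostname
        return instance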

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1257545

Title:
  Unshelving an offloaded instance doesn't set host, hypervisor_name

Status in OpenStack Compute (Nova):
  New

Bug description:
  When you unshelve an instance that has been offloaded it doesn't set:

  OS-EXT-SRV-ATTR:host
  OS-EXT-SRV-ATTR:hypervisor_hostname

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1257545/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1250309] Re: Missing import in exception.py

2013-11-12 Thread Sam Morrison
The `_` function is installed at keystone's entry point, so the explicit import in exception.py is not needed.
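
For illustration, this is roughly the mechanism (a minimal stdlib sketch, not
keystone's exact startup code): installing the translation function into
builtins makes _() available everywhere without a per-module import.

    import gettext

    # installs _ into Python builtins for the whole process
    gettext.install('keystone')

    # any module can now call _() without importing it
    print(_('no explicit import of _ is required'))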


** Changed in: keystone
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1250309

Title:
  Missing import in exception.py

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  Missing import for _ in exception

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1250309/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp