[Yahoo-eng-team] [Bug 2002577] [NEW] The neutron-keepalived-state-change.log log is not rotated and grows without bound until disk is full

2023-01-11 Thread Mike Lowe
Public bug reported:

There is no facility to rotate neutron-keepalived-state-change.log which
means it grows without bound. This leads to full disks and the failure of
OpenStack components.
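
A minimal logrotate sketch can cap the growth until neutron gains native
rotation for this file; the log path and rotation policy below are assumptions
to adapt to the deployment, not shipped configuration:

    /var/log/neutron/neutron-keepalived-state-change.log {
        weekly
        rotate 4
        compress
        missingok
        notifempty
        copytruncate
    }

copytruncate is used here because the monitor process keeps its log file open
and is not guaranteed to reopen it on a signal.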

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2002577

Title:
  The neutron-keepalived-state-change.log log is not rotated and grows
  without bound until disk is full

Status in neutron:
  New

Bug description:
  There is no facility to rotate neutron-keepalived-state-change.log
  which means it grows without bound. This leads to full disks and the
  failure of OpenStack components.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2002577/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1921953] [NEW] The upload of image is slow

2021-03-30 Thread Mike Durnosvistov
Public bug reported:

The original code of the `add`[1] method (upload) is inefficient and makes
image uploads take longer than necessary.
Upload time improves significantly after switching to the `python-swiftclient`
library API.
Additionally, the `add` method becomes simpler.

[1]
https://github.com/openstack/glance_store/blob/stable/victoria/glance_store/_drivers/swift/store.py#L915
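
For reference, a minimal sketch of a SwiftService-based upload of the kind the
improved `add` could perform; the option values and names below are
illustrative assumptions, not the actual glance_store patch:

    from swiftclient.service import SwiftService, SwiftUploadObject

    # Illustrative options; the real driver derives these from glance_store config.
    options = {
        "auth_version": "3",
        "os_auth_url": "https://keystone.example:5000/v3",
        "segment_size": 200 * 1024 * 1024,  # split large images into segments
    }

    def upload_image(container, object_name, image_file):
        """Upload a file-like image object via SwiftService, raising on failure."""
        with SwiftService(options=options) as swift:
            target = SwiftUploadObject(image_file, object_name=object_name)
            for result in swift.upload(container, [target]):
                if not result["success"]:
                    raise RuntimeError("upload failed: %s" % result.get("error"))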

# The original implementation
> time glance image-create --protected=False --name=some-large-image --progress 
> --disk-format=vmdk --visibility=private --file some-large-image 
> --container-format=bare
[=>] 100%
+------------------+----------------------------------------------------------------------------------------------
| Property         | Value
+------------------+----------------------------------------------------------------------------------------------
| checksum         | aa7399679400faa31fb9f01bbae6758f
| container_format | bare
| created_at       | 2021-03-30T20:00:26Z
| direct_url       | swift+https://objectstore-3.cloud:443/v1/AUTH_e9141fb24eee4b3e9f27ae
|                  | 69cda31132/glance_7d0607a8-80c1-465f-aecc-0c9129366a0e/7d0607a8-80c1-465f-aecc-0c9129366a0e
| disk_format      | vmdk
| id               | 7d0607a8-80c1-465f-aecc-0c9129366a0e
| min_disk         | 0
| min_ram          | 0
| name             | some-large-image
| os_hash_algo     | sha512
| os_hash_value    | e9d5babf23f24643f06a2bbd2e48f7a9a04109e176d46c6930ae0a3f76c571440367d6a400d45fd4
|                  | f07f0ed5e36d422825b33ae00d287ffad07a2ea16800f6cf
| os_hidden        | False
| owner            | e9141fb24eee4b3e9f25ae69cda31132
| protected        | False
| size             | 7980414976
| status           | active
| tags             | []
| updated_at       | 2021-03-30T20:04:33Z
| virtual_size     | 64424509440
| visibility       | private
+------------------+----------------------------------------------------------------------------------------------

real    4m9.821s
user    0m18.227s
sys     0m15.272s


# The improved version using `SwiftService` from `python-swiftclient`
> time glance image-create --protected=False --name=some-large-image --progress 
> --disk-format=vmdk --visibility=private --file some-large-image 
> --container-format=bare
[=>] 100%
+------------------+----------------------------------------------------------------------------------------------
| Property         | Value
+------------------+----------------------------------------------------------------------------------------------
| checksum         | 818971a4539213d56f1d5b5f37efeab6
| container_format | bare
| created_at       | 2021-03-30T19:53:27Z
| direct_url       | swift+https://objectstore-3.cloud:443/v1/AUTH_e9141fb24eee4b3e9f27ae
|                  | 69cda31132/glance_b6f49875-7769-45d3-8b07-e7c0118f4a02/b6f49875-7769-45d3-8b07-e7c0118f4a02
| disk_format      | vmdk
| id               | b6f49875-7769-45d3-8b07-e7c0118f4a02
| min_disk         | 0

[Yahoo-eng-team] [Bug 1884949] [NEW] ds-identify fails on nocloud datasource when /var is a separate filesystem

2020-06-24 Thread Mike Drangula
Public bug reported:

I'm running CentOS 7.8.2003 with cloud-init 18.5. At least in CentOS,
systemd is running the generators, including the cloud-init generator,
before any secondary filesystems are mounted.

Thus, in ds-identify/dscheck_NoCloud/check_seed_dir, this line of code
is invalid:

local dir="${PATH_VAR_LIB_CLOUD}/seed/$name"
[ -d "$dir" ] || return 1

because PATH_VAR_LIB_CLOUD is set to /var/lib/cloud and when this code
is running /var is not yet mounted.

I'm sorry but I am not set up to test using a more recent version of
cloud-init.

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1884949

Title:
  ds-identify fails on nocloud datasource when /var is a separate
  filesystem

Status in cloud-init:
  New

Bug description:
  I'm running CentOS 7.8.2003 with cloud-init 18.5. At least in CentOS,
  systemd is running the generators, including the cloud-init generator,
  before any secondary filesystems are mounted.

  Thus, in ds-identify/dscheck_NoCloud/check_seed_dir, this line of code
  is invalid:

  local dir="${PATH_VAR_LIB_CLOUD}/seed/$name"
  [ -d "$dir" ] || return 1

  because PATH_VAR_LIB_CLOUD is set to /var/lib/cloud and when this code
  is running /var is not yet mounted.

  I'm sorry but I am not set up to test using a more recent version of
  cloud-init.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1884949/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1827453] [NEW] Nova scheduler attempts to re-assign currently in-use SR-IOV VF to new VM

2019-05-02 Thread Mike Joseph
Public bug reported:

Running a small cluster with 16 compute nodes and 3 controller nodes on
OpenStack Queens using SR-IOV VFs.  From time to time, it appears that
the Nova scheduler loses track of some of the PCI devices (VFs) that are
actively mapped into servers.  We don't know exactly when this occurs
and we cannot trigger it on demand, but it occurs on a number of the
compute nodes over time.  Restarting the given compute node resolves the
issue.

The problem is manifest with the following errors:

/var/log/nova/nova-conductor.log:2019-05-03 01:35:27.309 13073 ERROR
nova.scheduler.utils [req-8418eb3a-4118-4505-97e3-fffbaae7aae6
2469493ff8b546ff9a6f4e339cc50ac2 33bb32d9463340bca0bb72a8c36579a9 -
default default] [instance: b2b4dbf2-d381-4416-95c9-b410aa6d8377] Error
from last host: node05 (node {REDACTED}): [u'Traceback (most recent call
last):\n', u'  File "/usr/lib/python2.7/dist-
packages/nova/compute/manager.py", line 1828, in
_do_build_and_run_instance\nfilter_properties, request_spec)\n', u'
File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line
2108, in _build_and_run_instance\ninstance_uuid=instance.uuid,
reason=six.text_type(e))\n', u'RescheduledException: Build of instance
b2b4dbf2-d381-4416-95c9-b410aa6d8377 was re-scheduled: Requested
operation is not valid: PCI device :04:01.3 is in use by driver
QEMU, domain instance-1466\n']

The compute nodes in question are configured with the following PCI
whitelist:

[pci]
passthrough_whitelist = [{"vendor_id": "15b3", "product_id": "1004"}]

Note that, despite similar bugs, there haven't been changes to the
whitelist that would likely cause this to occur.  It just seems to
develop over time.

= Versions =

Compute nodes:

ii  nova-common   2:17.0.6-0ubuntu1 
 all  OpenStack Compute - common files
ii  nova-compute  2:17.0.6-0ubuntu1 
 all  OpenStack Compute - compute node base
ii  nova-compute-kvm  2:17.0.6-0ubuntu1 
 all  OpenStack Compute - compute node (KVM)
ii  nova-compute-libvirt  2:17.0.6-0ubuntu1 
 all  OpenStack Compute - compute node libvirt support

Controller nodes:

ii  nova-api  2:17.0.9-0ubuntu1 
  all  OpenStack Compute - API frontend
ii  nova-common   2:17.0.9-0ubuntu1 
  all  OpenStack Compute - common files
ii  nova-compute  2:17.0.9-0ubuntu1 
  all  OpenStack Compute - compute node base
ii  nova-compute-kvm  2:17.0.9-0ubuntu1 
  all  OpenStack Compute - compute node (KVM)
ii  nova-compute-libvirt  2:17.0.9-0ubuntu1 
  all  OpenStack Compute - compute node libvirt support
ii  nova-conductor2:17.0.9-0ubuntu1 
  all  OpenStack Compute - conductor service
ii  nova-consoleauth  2:17.0.9-0ubuntu1 
  all  OpenStack Compute - Console Authenticator
ii  nova-novncproxy   2:17.0.9-0ubuntu1 
  all  OpenStack Compute - NoVNC proxy
ii  nova-placement-api2:17.0.9-0ubuntu1 
  all  OpenStack Compute - placement API frontend
ii  nova-scheduler2:17.0.9-0ubuntu1 
  all  OpenStack Compute - virtual machine scheduler
ii  nova-serialproxy  2:17.0.9-0ubuntu1 
  all  OpenStack Compute - serial proxy
ii  nova-xvpvncproxy  2:17.0.9-0ubuntu1 
  all  OpenStack Compute - XVP VNC proxy

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1827453

Title:
  Nova scheduler attempts to re-assign currently in-use SR-IOV VF to new
  VM

Status in OpenStack Compute (nova):
  New

Bug description:
  Running a small cluster with 16 compute nodes and 3 controller nodes
  on OpenStack Queens using SR-IOV VFs.  From time to time, it appears
  that the Nova scheduler loses track of some of the PCI devices (VFs)
  that are actively mapped into servers.  We don't know exactly when
  this occurs and we cannot trigger it on demand, but it occurs on a
  number of the compute nodes over time.  Restarting the given compute
  node resolves the issue.

  The problem is manifest with the following errors:

  /var/log/nova/nova-conductor.log:2019-05-03 01:35:27.309 13073 ERROR
  nova.scheduler.utils 

[Yahoo-eng-team] [Bug 1808917] [NEW] RetryRequest shouldn't log stack trace by default, or it should be configurable by the exception

2018-12-17 Thread Mike Kolesnik
Public bug reported:

I see the following littering the logs and it strikes me as wrong:

2018-12-18 01:01:46.259 34 DEBUG neutron.plugins.ml2.managers 
[req-196ce43f-2408-48f4-9c7e-bb90f66c9c14 - - - - -] DB exception raised by 
Mechanism driver 'opendaylight_v2' in update_port_precommit _call_on_drivers 
/usr/lib/python2.7/site-packages/neutron/plugins/ml2/managers.py:434
2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers Traceback (most 
recent call last):
2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/managers.py", line 427, 
in _call_on_drivers
2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers 
getattr(driver.obj, method_name)(context)
2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/site-packages/oslo_log/helpers.py", line 67, in wrapper
2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers return 
method(*args, **kwargs)
2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/site-packages/networking_odl/ml2/mech_driver_v2.py", line 
117, in update_port_precommit
2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers context, 
odl_const.ODL_PORT, odl_const.ODL_UPDATE)
2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/site-packages/networking_odl/ml2/mech_driver_v2.py", line 
87, in _record_in_journal
2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers 
ml2_context=context)
2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/site-packages/networking_odl/journal/journal.py", line 123, 
in record
2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers raise 
exception.RetryRequest(e)
2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers RetryRequest
2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers 

Since this is an explicit request to retry the operation, and not some
unexpected behavior, it shouldn't log the stack trace.
If you really want more fine-grained control (rather than never logging the
trace at all), a flag can be added to the exception to determine whether its
log entry should contain the stack trace or not.

The code in question is here (also on master but this rocky url is simpler):
https://github.com/openstack/neutron/blob/stable/rocky/neutron/plugins/ml2/managers.py#L433
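
A minimal sketch of the suggested flag; the class below is an illustrative
stand-in for oslo_db's RetryRequest, not the actual API:

    import logging

    LOG = logging.getLogger(__name__)

    class RetryRequest(Exception):
        """Stand-in for oslo_db.exception.RetryRequest with an opt-in traceback."""
        def __init__(self, inner_exc, log_traceback=False):
            super(RetryRequest, self).__init__(str(inner_exc))
            self.inner_exc = inner_exc
            # Proposed flag: callers opt in to a full traceback when it helps.
            self.log_traceback = log_traceback

    def call_on_driver(driver_call):
        try:
            driver_call()
        except RetryRequest as e:
            if e.log_traceback:
                LOG.exception("DB exception raised by mechanism driver, retrying")
            else:
                LOG.debug("Mechanism driver requested a retry: %s", e)
            raise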

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1808917

Title:
  RetryRequest shouldn't log stack trace by default, or it should be
  configurable by the exception

Status in neutron:
  New

Bug description:
  I see the following littering the logs and it strikes me as wrong:

  2018-12-18 01:01:46.259 34 DEBUG neutron.plugins.ml2.managers 
[req-196ce43f-2408-48f4-9c7e-bb90f66c9c14 - - - - -] DB exception raised by 
Mechanism driver 'opendaylight_v2' in update_port_precommit _call_on_drivers 
/usr/lib/python2.7/site-packages/neutron/plugins/ml2/managers.py:434
  2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers Traceback (most 
recent call last):
  2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/managers.py", line 427, 
in _call_on_drivers
  2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers 
getattr(driver.obj, method_name)(context)
  2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/site-packages/oslo_log/helpers.py", line 67, in wrapper
  2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers return 
method(*args, **kwargs)
  2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/site-packages/networking_odl/ml2/mech_driver_v2.py", line 
117, in update_port_precommit
  2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers context, 
odl_const.ODL_PORT, odl_const.ODL_UPDATE)
  2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/site-packages/networking_odl/ml2/mech_driver_v2.py", line 
87, in _record_in_journal
  2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers 
ml2_context=context)
  2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/site-packages/networking_odl/journal/journal.py", line 123, 
in record
  2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers raise 
exception.RetryRequest(e)
  2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers RetryRequest
  2018-12-18 01:01:46.259 34 ERROR neutron.plugins.ml2.managers 

  Since this is an explicit request to retry the operation, and not some
unexpected behavior, it shouldn't log the stack trace.
  If you really want more fine-grained control (over not 

[Yahoo-eng-team] [Bug 1799332] Re: Apache WSGI config shipping with Keystone is incompatible with Horizon

2018-10-24 Thread Mike Joseph
I believe that this should be reopened, since the issue remains for the
following reasons:

* All installation guide docs refer to Keystone running on port 5000
(OS_AUTH_URL=http://controller:5000/v3).  If that's no longer the
recommended deployment model, then the docs should be updated
accordingly.

* The file in question still contains endpoints on both :5000 and
/identity.  If the Keystone project believes that :5000 is deprecated in
favor of /identity, then the WSGI config should be updated in the file
to remove :5000.  But having both seems broken.

* For some reason that I haven't worked out yet, the /identity endpoint
*is* interfering with the /horizon endpoint.  If /identity is going to
remain, we should try to figure out why that is.

-MJ

** Changed in: keystone
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1799332

Title:
  Apache WSGI config shipping with Keystone is incompatible with Horizon

Status in OpenStack Identity (keystone):
  New

Bug description:
  In keystone/httpd/wsgi-keystone.conf, the following configuration is
  present:

  Alias /identity /usr/local/bin/keystone-wsgi-public
  <Location /identity>
  SetHandler wsgi-script
  Options +ExecCGI

  WSGIProcessGroup keystone-public
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
  </Location>

  However, it is both harmful and unnecessary.  The operative WSGI
  configuration for Keystone comes from the <VirtualHost>...</VirtualHost>
  section.  In fact, the commit which added the /identity endpoint described
  it as a documentation example:

  "Apache Httpd can be configured to accept keystone requests on all
  sorts of interfaces. The sample config file is updated to show
  how to configure Apache Httpd to also send requests on /identity
  and /identity_admin to keystone."

  Leaving it in place, however, causes conflicts when Horizon is
  concurrently installed:

  AH01630: client denied by server configuration: /usr/bin/keystone-
  wsgi-public

  ...in responses to Horizon URL's referencing '/identity'.  Therefore,
  I believe keeping this configuration snippet in the shipped WSGI
  configuration (as opposed to actual documentation) is a defect.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1799332/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1799332] [NEW] Apache WSGI config shipping with Keystone is incompatible with Horizon

2018-10-22 Thread Mike Joseph
Public bug reported:

In keystone/httpd/wsgi-keystone.conf, the following configuration is
present:

Alias /identity /usr/local/bin/keystone-wsgi-public
<Location /identity>
SetHandler wsgi-script
Options +ExecCGI

WSGIProcessGroup keystone-public
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
</Location>

However, it is both harmful and unnecessary.  The operative WSGI
configuration for Keystone comes from the <VirtualHost>...</VirtualHost>
section.  In fact, the commit which added the /identity endpoint described it
as a documentation example:

"Apache Httpd can be configured to accept keystone requests on all
sorts of interfaces. The sample config file is updated to show
how to configure Apache Httpd to also send requests on /identity
and /identity_admin to keystone."

Leaving it in place, however, causes conflicts when Horizon is
concurrently installed:

AH01630: client denied by server configuration: /usr/bin/keystone-wsgi-
public

...in responses to Horizon URL's referencing '/identity'.  Therefore, I
believe keeping this configuration snippet in the shipped WSGI
configuration (as opposed to actual documentation) is a defect.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1799332

Title:
  Apache WSGI config shipping with Keystone is incompatible with Horizon

Status in OpenStack Identity (keystone):
  New

Bug description:
  In keystone/httpd/wsgi-keystone.conf, the following configuration is
  present:

  Alias /identity /usr/local/bin/keystone-wsgi-public
  <Location /identity>
  SetHandler wsgi-script
  Options +ExecCGI

  WSGIProcessGroup keystone-public
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
  </Location>

  However, it is both harmful and unnecessary.  The operative WSGI
  configuration for Keystone comes from the <VirtualHost>...</VirtualHost>
  section.  In fact, the commit which added the /identity endpoint described
  it as a documentation example:

  "Apache Httpd can be configured to accept keystone requests on all
  sorts of interfaces. The sample config file is updated to show
  how to configure Apache Httpd to also send requests on /identity
  and /identity_admin to keystone."

  Leaving it in place, however, causes conflicts when Horizon is
  concurrently installed:

  AH01630: client denied by server configuration: /usr/bin/keystone-
  wsgi-public

  ...in responses to Horizon URL's referencing '/identity'.  Therefore,
  I believe keeping this configuration snippet in the shipped WSGI
  configuration (as opposed to actual documentation) is a defect.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1799332/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1793816] [NEW] Verify operation in keystone

2018-09-21 Thread Mike Frisch
Public bug reported:


This bug tracker is for errors with the documentation, use the following
as a template and remove or add fields as you see fit. Convert [ ] into
[x] to check boxes:

- [X] This doc is inaccurate in this way: the reference URL contains port 35357
instead of 5000
- [ ] This is a doc addition request.
- [ ] I have a fix to the document that I can paste below including example: 
input and output. 

If you have a troubleshooting or support issue, use the following
resources:

 - Ask OpenStack: http://ask.openstack.org
 - The mailing list: http://lists.openstack.org
 - IRC: 'openstack' channel on Freenode

---
Release:  on 2018-09-10 22:19
SHA: c5930abc5aa06881f28baa697d8d43a1f25157b8
Source: 
https://git.openstack.org/cgit/openstack/keystone/tree/doc/source/install/keystone-verify-rdo.rst
URL: https://docs.openstack.org/keystone/rocky/install/keystone-verify-rdo.html

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: doc

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1793816

Title:
  Verify operation in keystone

Status in OpenStack Identity (keystone):
  New

Bug description:

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [X] This doc is inaccurate in this way: the reference URL contains port
35357 instead of 5000
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: 
input and output. 

  If you have a troubleshooting or support issue, use the following
  resources:

   - Ask OpenStack: http://ask.openstack.org
   - The mailing list: http://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release:  on 2018-09-10 22:19
  SHA: c5930abc5aa06881f28baa697d8d43a1f25157b8
  Source: 
https://git.openstack.org/cgit/openstack/keystone/tree/doc/source/install/keystone-verify-rdo.rst
  URL: 
https://docs.openstack.org/keystone/rocky/install/keystone-verify-rdo.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1793816/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1785656] [NEW] test_internal_dns.InternalDNSTest fails even though dns-integration extension isn't loaded

2018-08-06 Thread Mike Kolesnik
Public bug reported:

We're seeing this on the Networking-ODL CI [1].

The test
neutron_tempest_plugin.scenario.test_internal_dns.InternalDNSTest is
being executed even though there's a decorator to prevent it from
running [2]

Either the checker isn't working or something is missing, since other
DNS tests are being skipped automatically due to the extension not being
loaded.

[1] 
http://logs.openstack.org/91/584591/5/check/networking-odl-tempest-oxygen/df17c02/
[2] 
http://git.openstack.org/cgit/openstack/neutron-tempest-plugin/tree/neutron_tempest_plugin/scenario/test_internal_dns.py#n28
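
For context, a self-contained sketch of how a skip-by-extension guard like the
one referenced in [2] typically works; this is an illustration, not the actual
tempest or neutron-tempest-plugin code:

    import functools
    import unittest

    # Illustrative stand-in for the network extensions tempest discovers.
    ENABLED_NETWORK_EXTENSIONS = {"standard-attr-description", "router"}

    def requires_network_ext(extension):
        """Skip the decorated test unless the named extension is enabled."""
        def decorator(func):
            @functools.wraps(func)
            def wrapper(self, *args, **kwargs):
                if extension not in ENABLED_NETWORK_EXTENSIONS:
                    raise unittest.SkipTest("%s extension not enabled" % extension)
                return func(self, *args, **kwargs)
            return wrapper
        return decorator

    class InternalDNSTest(unittest.TestCase):

        @requires_network_ext("dns-integration")
        def test_dns_name_assignment(self):
            # With dns-integration absent from the enabled set this body never
            # runs; the bug is that the real test ran against the backend anyway.
            self.assertTrue(True)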

** Affects: networking-odl
 Importance: Critical
 Status: Confirmed

** Affects: neutron
 Importance: High
 Status: Confirmed

** Also affects: networking-odl
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1785656

Title:
  test_internal_dns.InternalDNSTest fails even though dns-integration
  extension isn't loaded

Status in networking-odl:
  Confirmed
Status in neutron:
  Confirmed

Bug description:
  We're seeing this on the Networking-ODL CI [1].

  The test
  neutron_tempest_plugin.scenario.test_internal_dns.InternalDNSTest is
  being executed even though there's a decorator to prevent it from
  running [2]

  Either the checker isn't working or something is missing, since other
  DNS tests are being skipped automatically due to the extension not
  being loaded.

  [1] 
http://logs.openstack.org/91/584591/5/check/networking-odl-tempest-oxygen/df17c02/
  [2] 
http://git.openstack.org/cgit/openstack/neutron-tempest-plugin/tree/neutron_tempest_plugin/scenario/test_internal_dns.py#n28

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-odl/+bug/1785656/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1784155] Re: nova_placement service start not coordinated with api db sync on multiple controllers

2018-07-29 Thread Mike Bayer
** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

** Package changed: nova (Ubuntu) => ubuntu

** Package changed: ubuntu => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1784155

Title:
  nova_placement service start not coordinated with api db sync on
  multiple controllers

Status in OpenStack Compute (nova):
  New
Status in tripleo:
  New

Bug description:
  On a loaded HA / galera environment using VMs I can fairly
  consistently reproduce a race condition where the nova_placement
  service is started on controllers where the database is not yet
  available.   The nova_placement service itself does not seem to be
  able to tolerate this condition upon startup and it then fails to
  recover.   Mitigation here can either involve synchronizing these
  conditions or getting nova-placement to be more resilient.
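
  A minimal sketch of the "more resilient" option, purely illustrative and not
  nova's actual startup code, where the schema-dependent step is retried until
  the API database has been synced:

      import logging
      import time

      LOG = logging.getLogger(__name__)

      def wait_for_db_sync(sync_step, attempts=30, delay=2.0):
          """Retry a schema-dependent startup step until the DB is ready.

          sync_step stands in for whatever needs the 'traits' table to exist
          at WSGI init time.
          """
          for attempt in range(1, attempts + 1):
              try:
                  return sync_step()
              except Exception as exc:  # e.g. ProgrammingError: table doesn't exist
                  LOG.warning("DB not ready (attempt %d/%d): %s",
                              attempt, attempts, exc)
                  time.sleep(delay)
          raise RuntimeError("database never became ready; giving up")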

  The symptoms of overcloud deploy failure look like two out of three
  controllers having the nova_placement container in an unhealthy state:

  TASK [Debug output for task which failed: Check for unhealthy containers 
after step 3] ***
  Saturday 28 July 2018  10:19:29 + (0:00:00.663)   0:30:26.152 
* 
  fatal: [stack2-overcloud-controller-2]: FAILED! => {
  "failed_when_result": true, 
  
"outputs.stdout_lines|default([])|union(outputs.stderr_lines|default([]))": [
  "3597b92e9714
192.168.25.1:8787/tripleomaster/centos-binary-nova-placement-api:959e1d7f755ee681b6f23b498d262a9e4dd6326f_4cbb1814
   \"kolla_start\"   2 minutes ago   Up 2 minutes (unhealthy)   
nova_placement"
  ]
  }
  fatal: [stack2-overcloud-controller-1]: FAILED! => {
  "failed_when_result": true, 
  
"outputs.stdout_lines|default([])|union(outputs.stderr_lines|default([]))": [
  "322c5ea53895
192.168.25.1:8787/tripleomaster/centos-binary-nova-placement-api:959e1d7f755ee681b6f23b498d262a9e4dd6326f_4cbb1814
   \"kolla_start\"   2 minutes ago   Up 2 minutes (unhealthy)   
nova_placement"
  ]
  }
  ok: [stack2-overcloud-controller-0] => {
  "failed_when_result": false, 
  
"outputs.stdout_lines|default([])|union(outputs.stderr_lines|default([]))": []
  }
  ok: [stack2-overcloud-compute-0] => {
  "failed_when_result": false, 
  
"outputs.stdout_lines|default([])|union(outputs.stderr_lines|default([]))": []
  }

  NO MORE HOSTS LEFT
  *

  
  Inspecting placement_wsgi_error.log shows a stack trace indicating that the
nova_placement database is missing the "traits" table:

  [Sat Jul 28 10:17:06.525018 2018] [:error] [pid 14] [remote 10.1.20.15:0] 
mod_wsgi (pid=14): Target WSGI script 
'/var/www/cgi-bin/nova/nova-placement-api' cannot be loaded as Python module.
  [Sat Jul 28 10:17:06.525067 2018] [:error] [pid 14] [remote 10.1.20.15:0] 
mod_wsgi (pid=14): Exception occurred processing WSGI script 
'/var/www/cgi-bin/nova/nova-placement-api'.
  [Sat Jul 28 10:17:06.525101 2018] [:error] [pid 14] [remote 10.1.20.15:0] 
Traceback (most recent call last):
  [Sat Jul 28 10:17:06.525124 2018] [:error] [pid 14] [remote 10.1.20.15:0]   
File "/var/www/cgi-bin/nova/nova-placement-api", line 54, in 
  [Sat Jul 28 10:17:06.525165 2018] [:error] [pid 14] [remote 10.1.20.15:0] 
application = init_application()
  [Sat Jul 28 10:17:06.525174 2018] [:error] [pid 14] [remote 10.1.20.15:0]   
File "/usr/lib/python2.7/site-packages/nova/api/openstack/placement/wsgi.py", 
line 88, in init_application
  [Sat Jul 28 10:17:06.525198 2018] [:error] [pid 14] [remote 10.1.20.15:0] 
return deploy.loadapp(conf.CONF)
  [Sat Jul 28 10:17:06.525205 2018] [:error] [pid 14] [remote 10.1.20.15:0]   
File "/usr/lib/python2.7/site-packages/nova/api/openstack/placement/deploy.py", 
line 111, in loadapp
  [Sat Jul 28 10:17:06.525300 2018] [:error] [pid 14] [remote 10.1.20.15:0] 
update_database()
  [Sat Jul 28 10:17:06.525310 2018] [:error] [pid 14] [remote 10.1.20.15:0]   
File "/usr/lib/python2.7/site-packages/nova/api/openstack/placement/deploy.py", 
line 92, in update_database
  [Sat Jul 28 10:17:06.525329 2018] [:error] [pid 14] [remote 10.1.20.15:0] 
resource_provider.ensure_trait_sync(ctx)
  [Sat Jul 28 10:17:06.525337 2018] [:error] [pid 14] [remote 10.1.20.15:0]   
File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/placement/objects/resource_provider.py",
 line 146, in ensure_trait_sync
  [Sat Jul 28 10:17:06.526277 2018] [:error] [pid 14] [remote 10.1.20.15:0] 
_trait_sync(ctx)

  ...

  [Sat Jul 28 10:17:06.531950 2018] [:error] [pid 14] [remote 10.1.20.15:0] 
raise errorclass(errno, errval)
  [Sat Jul 28 10:17:06.532049 2018] [:error] [pid 14] [remote 10.1.20.15:0] 
ProgrammingError: (pymysql.err.ProgrammingError) (1146, u"Table 
'nova_placement.traits' doesn't exist") 

[Yahoo-eng-team] [Bug 1779880] [NEW] Option needed to create image in erasure coded ceph pool

2018-07-03 Thread Mike Lowe
Public bug reported:

Erasure coded pools for rbd have been supported since the ceph
luminous release. To use this feature the erasure coded pool where
the data will be held needs to be specified during image creation.
The rbd_pool option still needs to point to a replicated pool where
a small amount of metadata will be stored in OMAP. Currently the
driver implicitly uses the default value of None.
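
A minimal sketch of what the driver would need to do, assuming a new option
(here called RBD_DATA_POOL, name illustrative) passed through to librbd's
data_pool argument:

    import rados
    import rbd

    # Illustrative values; the real driver would read these from glance-api.conf.
    CEPH_CONF = "/etc/ceph/ceph.conf"
    RBD_POOL = "images"          # replicated pool holding rbd metadata/OMAP
    RBD_DATA_POOL = "images-ec"  # erasure coded pool holding the image data

    def create_image(name, size_bytes):
        """Create an rbd image whose data objects land in the EC pool."""
        with rados.Rados(conffile=CEPH_CONF) as cluster:
            with cluster.open_ioctx(RBD_POOL) as ioctx:
                # data_pool requires ceph luminous or newer.
                rbd.RBD().create(ioctx, name, size_bytes, data_pool=RBD_DATA_POOL)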

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1779880

Title:
  Option needed to create image in erasure coded ceph pool

Status in Glance:
  New

Bug description:
  Erasure coded pools for rbd have been supported since the ceph
  luminous release. To use this feature the erasure coded pool where
  the data will be held needs to be specified during image creation.
  The rbd_pool option still needs to point to a replicated pool where
  a small amount of metadata will be stored in OMAP. Currently the
  driver implicitly uses the default value of None.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1779880/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1550278] Re: tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.* tempest tests are failing repeatedly in the gate for networking-ovn

2018-04-30 Thread Mike Kolesnik
Currently marking Invalid; if this resurfaces please reopen

** Changed in: networking-odl
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1550278

Title:
  tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.* tempest tests
  are failing repeatedly in the gate for networking-ovn

Status in networking-odl:
  Invalid
Status in networking-ovn:
  Fix Released
Status in neutron:
  Incomplete

Bug description:
  We are seeing a lot of tempest failures for the tests 
tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.* 
  with the below error.

  Either we should fix the error or at least disable these tests
  temporarily.

  
  t156.9: 
tempest.api.network.test_dhcp_ipv6.NetworksTestDHCPv6.test_dhcpv6_stateless_no_ra[id-ae2f4a5d-03ff-4c42-a3b0-ce2fcb7ea832]_StringException:
 Empty attachments:
stderr
stdout

  pythonlogging:'': {{{
  2016-02-26 07:29:46,168 4673 INFO [tempest.lib.common.rest_client] 
Request (NetworksTestDHCPv6:test_dhcpv6_stateless_no_ra): 404 POST 
http://127.0.0.1:9696/v2.0/subnets 0.370s
  2016-02-26 07:29:46,169 4673 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json', 'X-Auth-Token': ''}
  Body: {"subnet": {"cidr": "2003::/64", "ip_version": 6, "network_id": 
"4c7de56a-b059-4239-a5a0-94a53ba4929c", "gateway_ip": "2003::1", 
"ipv6_address_mode": "slaac"}}
  Response - Headers: {'content-length': '132', 'status': '404', 'date': 
'Fri, 26 Feb 2016 07:29:46 GMT', 'connection': 'close', 'content-type': 
'application/json; charset=UTF-8', 'x-openstack-request-id': 
'req-e21f771f-1a16-452a-9429-8a01f0409ae3'}
  Body: {"NeutronError": {"message": "Port 
598c23eb-1ae4-4010-a263-39f86240fd86 could not be found.", "type": 
"PortNotFound", "detail": ""}}
  2016-02-26 07:29:46,196 4673 INFO [tempest.lib.common.rest_client] 
Request (NetworksTestDHCPv6:tearDown): 200 GET http://127.0.0.1:9696/v2.0/ports 
0.024s
  2016-02-26 07:29:46,197 4673 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json', 'X-Auth-Token': ''}
  Body: None
  Response - Headers: {'content-location': 
'http://127.0.0.1:9696/v2.0/ports', 'content-length': '13', 'status': '200', 
'date': 'Fri, 26 Feb 2016 07:29:46 GMT', 'connection': 'close', 'content-type': 
'application/json; charset=UTF-8', 'x-openstack-request-id': 
'req-f0966c23-c72f-4a6f-b113-5d88a6dd5912'}
  Body: {"ports": []}
  2016-02-26 07:29:46,250 4673 INFO [tempest.lib.common.rest_client] 
Request (NetworksTestDHCPv6:tearDown): 200 GET 
http://127.0.0.1:9696/v2.0/subnets 0.052s
  2016-02-26 07:29:46,251 4673 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json', 'X-Auth-Token': ''}
  Body: None
  Response - Headers: {'content-location': 
'http://127.0.0.1:9696/v2.0/subnets', 'content-length': '457', 'status': '200', 
'date': 'Fri, 26 Feb 2016 07:29:46 GMT', 'connection': 'close', 'content-type': 
'application/json; charset=UTF-8', 'x-openstack-request-id': 
'req-3b29ba53-9ae0-4c0f-8c18-ec12db7a6bde'}
  Body: {"subnets": [{"name": "", "enable_dhcp": true, "network_id": 
"4c7de56a-b059-4239-a5a0-94a53ba4929c", "tenant_id": 
"631f9cb1391d41b6aba109afe06bc51b", "dns_nameservers": [], "gateway_ip": 
"2003::1", "ipv6_ra_mode": null, "allocation_pools": [{"start": "2003::2", 
"end": "2003:::::"}], "host_routes": [], "ip_version": 6, 
"ipv6_address_mode": "slaac", "cidr": "2003::/64", "id": 
"6bc2602c-2584-44cc-a6cd-b8af444f6403", "subnetpool_id": null}]}
  2016-02-26 07:29:46,293 4673 INFO [tempest.lib.common.rest_client] 
Request (NetworksTestDHCPv6:tearDown): 200 GET 
http://127.0.0.1:9696/v2.0/routers 0.041s
  2016-02-26 07:29:46,293 4673 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json', 'X-Auth-Token': ''}
  Body: None
  Response - Headers: {'content-location': 
'http://127.0.0.1:9696/v2.0/routers', 'content-length': '15', 'status': '200', 
'date': 'Fri, 26 Feb 2016 07:29:46 GMT', 'connection': 'close', 'content-type': 
'application/json; charset=UTF-8', 'x-openstack-request-id': 
'req-2b883ce9-b10f-4a49-a854-450c341f9cd9'}
  Body: {"routers": []}
  }}}

  Traceback (most recent call last):
File "tempest/api/network/test_dhcp_ipv6.py", line 129, in 
test_dhcpv6_stateless_no_ra
  real_ip, eui_ip = self._get_ips_from_subnet(**kwargs)
File "tempest/api/network/test_dhcp_ipv6.py", line 91, in 
_get_ips_from_subnet
  subnet = self.create_subnet(self.network, **kwargs)
File "tempest/api/network/base.py", line 196, in create_subnet
  **kwargs)
File 

[Yahoo-eng-team] [Bug 1765801] [NEW] network should be optionally reconfigured on every boot

2018-04-20 Thread Mike Gerdts
Public bug reported:

LP#1571004 made it so that networking is applied on the first boot of an
instance.  This makes sense in some cases, but not in others.  In the
case of Joyent's cloud, there is a need to support network
reconfiguration on reboot.

The proposed approach is to add a new metadata key 'maintain_network'.
This will default to False.  When set to True, network settings will be
applied PER_ALWAYS.  SmartOS will begin to support the
sdc:maintain_network key in its metadata service.

The SmartOS change is being tracked at
https://smartos.org/bugview/OS-6902 .
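
A minimal sketch of the kind of toggle this implies, assuming cloud-init's
update_events / EventType mechanism; this is illustrative, not the actual
SmartOS patch:

    from cloudinit.event import EventType

    # cloud-init's default: network config is applied only on the first boot
    # of an instance (BOOT_NEW_INSTANCE).
    update_events = {'network': {EventType.BOOT_NEW_INSTANCE}}

    def apply_maintain_network(maintain_network):
        """Widen network config application to every boot when requested."""
        if maintain_network:
            update_events['network'].add(EventType.BOOT)
        return update_events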

** Affects: cloud-init
 Importance: Undecided
 Assignee: Mike Gerdts (mgerdts)
 Status: Confirmed

** Changed in: cloud-init
   Status: New => Confirmed

** Changed in: cloud-init
 Assignee: (unassigned) => Mike Gerdts (mgerdts)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1765801

Title:
  network should be optionally reconfigured on every boot

Status in cloud-init:
  Confirmed

Bug description:
  LP#1571004 made it so that networking is applied on the first boot of
  an instance.  This makes sense in some cases, but not in others.  In
  the case of Joyent's cloud, there is a need to support network
  reconfiguration on reboot.

  The proposed approach is to add a new metadata key 'maintain_network'.
  This will default to False.  When set to True, network settings will
  be applied PER_ALWAYS.  SmartOS will begin to support the
  sdc:maintain_network key in its metadata service.

  The SmartOS change is being tracked at
  https://smartos.org/bugview/OS-6902 .

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1765801/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1765085] [NEW] DataSourceSmartOS ignores sdc:hostname

2018-04-18 Thread Mike Gerdts
Public bug reported:

In SmartOS, vmadm(1M) documents the hostname property as the way to set
the VM's hostname.  This property is available in the guest via the
sdc:hostname metadata property.  DataSourceSmartOS does not use this
value.  It currently sets the hostname from the following properties;
the first one wins.

1. hostname
2. sdc:uuid

The order should be:

1. sdc:hostname
2. hostname
3. sdc:uuid
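
A minimal sketch of that lookup order; the get() callable stands in for the
metadata client DataSourceSmartOS already uses, and the names are illustrative:

    def pick_hostname(get, instance_uuid):
        """Return the first non-empty hostname source in the proposed order."""
        for key in ('sdc:hostname', 'hostname'):
            value = get(key)
            if value:
                return value
        # Fall back to the instance UUID when neither hostname key is set.
        return instance_uuid

    # Example with a dict standing in for the metadata client:
    metadata = {'sdc:hostname': 'db01', 'hostname': 'ignored', 'sdc:uuid': 'some-uuid'}
    print(pick_hostname(metadata.get, metadata['sdc:uuid']))  # -> db01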

** Affects: cloud-init
 Importance: Undecided
 Assignee: Mike Gerdts (mgerdts)
 Status: New

** Changed in: cloud-init
 Assignee: (unassigned) => Mike Gerdts (mgerdts)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1765085

Title:
  DataSourceSmartOS ignores sdc:hostname

Status in cloud-init:
  New

Bug description:
  In SmartOS, vmadm(1M) documents the hostname property as the way to
  set the VM's hostname.  This property is available in the guest via
  the sdc:hostname metadata property.  DataSourceSmartOS does not use
  this value.  It currently sets the hostname from the following
  properties; the first one wins.

  1. hostname
  2. sdc:uuid

  The order should be:

  1. sdc:hostname
  2. hostname
  3. sdc:uuid

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1765085/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1763512] [NEW] DataSourceSmartOS ignores sdc:routes

2018-04-12 Thread Mike Gerdts
Public bug reported:

As of OS-6178 (https://smartos.org/bugview/OS-6178), HVMs can use
sdc:routes to get their static routes metadata.  The support of
sdc:routes in DataSourceSmartOS is insufficient to cause the configured
routes to be effective. Documentation on the sdc:routes can be found at
https://eng.joyent.com/mdata/datadict.html#sdcroutes.
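
A minimal sketch of consuming that metadata, assuming sdc:routes is a JSON list
of objects with dst and gateway fields (see the data dictionary link above for
the authoritative format) and targeting cloud-init's v1 network-config route
entries:

    import json

    def routes_to_netconfig(sdc_routes_json):
        """Convert an sdc:routes JSON string into v1 network-config routes."""
        routes = []
        for entry in json.loads(sdc_routes_json or '[]'):
            routes.append({
                'type': 'route',
                'destination': entry['dst'],
                'gateway': entry['gateway'],
            })
        return routes

    print(routes_to_netconfig('[{"dst": "10.0.0.0/8", "gateway": "192.168.1.1"}]'))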

** Affects: cloud-init
 Importance: Undecided
 Assignee: Mike Gerdts (mgerdts)
 Status: New

** Changed in: cloud-init
 Assignee: (unassigned) => Mike Gerdts (mgerdts)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1763512

Title:
  DataSourceSmartOS ignores sdc:routes

Status in cloud-init:
  New

Bug description:
  As of OS-6178 (https://smartos.org/bugview/OS-6178), HVMs can use
  sdc:routes to get their static routes metadata.  The support of
  sdc:routes in DataSourceSmartOS is insufficient to cause the
  configured routes to be effective. Documentation on the sdc:routes can
  be found at https://eng.joyent.com/mdata/datadict.html#sdcroutes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1763512/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1763480] [NEW] DataSourceSmartOS list() should always return a list

2018-04-12 Thread Mike Gerdts
Public bug reported:

If customer_metadata is empty, a stack trace is seen on the console:

2018-04-12 16:01:18,302 - DataSourceSmartOS.py[DEBUG]: Writing "V2 13 d8094091 
1055c865 KEYS
" to metadata transport.
2018-04-12 16:01:18,382 - DataSourceSmartOS.py[DEBUG]: Read "V2 16 b23eb5d0 
1055c865 SUCCESS" from metadata transport.
2018-04-12 16:01:18,382 - DataSourceSmartOS.py[DEBUG]: No value found.
2018-04-12 16:01:18,382 - handlers.py[DEBUG]: finish: 
init-local/search-SmartOS: FAIL: no local data found from DataSourceSmartOS
2018-04-12 16:01:18,382 - util.py[WARNING]: Getting data from  failed
2018-04-12 16:01:18,382 - util.py[DEBUG]: Getting data from  failed
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/cloudinit/sources/__init__.py", line 
447, in find_source
if s.get_data():
  File "/usr/lib/python3/dist-packages/cloudinit/sources/__init__.py", line 
121, in get_data
return_value = self._get_data()
  File "/usr/lib/python3/dist-packages/cloudinit/sources/DataSourceSmartOS.py", 
line 238, in _get_data
md[ci_noun] = self.md_client.get(smartos_noun, strip=strip)
  File "/usr/lib/python3/dist-packages/cloudinit/sources/DataSourceSmartOS.py", 
line 640, in get
if self.is_b64_encoded(key):
  File "/usr/lib/python3/dist-packages/cloudinit/sources/DataSourceSmartOS.py", 
line 628, in is_b64_encoded
self._init_base64_keys(reset=reset)
  File "/usr/lib/python3/dist-packages/cloudinit/sources/DataSourceSmartOS.py", 
line 594, in _init_base64_keys
if 'base64_all' in keys:
TypeError: argument of type 'NoneType' is not iterable
2018-04-12 16:01:18,391 - main.py[DEBUG]: No local datasource found


To reproduce:

# vmadm create < Mike Gerdts (mgerdts)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1763480

Title:
  DataSourceSmartOS list() should always return a list

Status in cloud-init:
  New

Bug description:
  If customer_metadata is empty, a stack trace is seen on the console:

  2018-04-12 16:01:18,302 - DataSourceSmartOS.py[DEBUG]: Writing "V2 13 
d8094091 1055c865 KEYS
  " to metadata transport.
  2018-04-12 16:01:18,382 - DataSourceSmartOS.py[DEBUG]: Read "V2 16 b23eb5d0 
1055c865 SUCCESS" from metadata transport.
  2018-04-12 16:01:18,382 - DataSourceSmartOS.py[DEBUG]: No value found.
  2018-04-12 16:01:18,382 - handlers.py[DEBUG]: finish: 
init-local/search-SmartOS: FAIL: no local data found from DataSourceSmartOS
  2018-04-12 16:01:18,382 - util.py[WARNING]: Getting data from  failed
  2018-04-12 16:01:18,382 - util.py[DEBUG]: Getting data from  failed
  Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/cloudinit/sources/__init__.py", line 
447, in find_source
  if s.get_data():
File "/usr/lib/python3/dist-packages/cloudinit/sources/__init__.py", line 
121, in get_data
  return_value = self._get_data()
File 
"/usr/lib/python3/dist-packages/cloudinit/sources/DataSourceSmartOS.py", line 
238, in _get_data
  md[ci_noun] = self.md_client.get(smartos_noun, strip=strip)
File 
"/usr/lib/python3/dist-packages/cloudinit/sources/DataSourceSmartOS.py", line 
640, in get
  if self.is_b64_encoded(key):
File 
"/usr/lib/python3/dist-packages/cloudinit/sources/DataSourceSmartOS.py", line 
628, in is_b64_encoded
  self._init_base64_keys(reset=reset)
File 
"/usr/lib/python3/dist-packages/cloudinit/sources/DataSourceSmartOS.py", line 
594, in _init_base64_keys
  if 'base64_all' in keys:
  TypeError: argument of type 'NoneType' is not iterable
  2018-04-12 16:01:18,391 - main.py[DEBUG]: No local datasource found

  
  To reproduce:

  # vmadm create <

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1763480/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1759924] [NEW] Port device owner isn't updated with new host availability zone during unshelve

2018-03-29 Thread Mike Lowe
Public bug reported:

During an unshelve, the host for an instance, and therefore its
availability zone, may change, but this does not seem to be updated in the
port's device_owner, causing problems with the "add fixed ip" server action,
for example.

In nova/network/neutronv2/api.py _update_port_binding_for_instance should 
probably update the port's device_owner the same way that 
_update_ports_for_instance does.
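
A minimal sketch of the suggested change; the helper below is illustrative and
follows the device_owner format visible in the port output, not nova's actual
code:

    def port_update_body(host, availability_zone):
        """Build the neutron port update sent when an instance lands on a new host."""
        return {
            'port': {
                'binding:host_id': host,
                # Keep 'compute:<availability zone>' in step with the new host,
                # as _update_ports_for_instance does on create.
                'device_owner': 'compute:%s' % availability_zone,
            }
        }

    # e.g. after unshelving onto host r02c4b15 in zone-r2:
    # port_update_body('r02c4b15', 'zone-r2')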
 

+---+--+
| Field | Value|
+---+--+
| admin_state_up| UP   |
| allowed_address_pairs |  |
| binding_host_id   | r02c4b15 |
| binding_profile   |  |
| binding_vif_details   | port_filter='True'   |
| binding_vif_type  | bridge   |
| binding_vnic_type | normal   |
| created_at| 2018-03-05T13:25:48Z |
| data_plane_status | None |
| description   |  |
| device_id | 53f04bf3-eb1f-4c64-a70f-fd16d6c1a5af |
| device_owner  | compute:zone-r7  |
| dns_assignment|  |
| dns_name  | instance-w-volume-shelving-test  |
| extra_dhcp_opts   |  |
| fixed_ips |  |
| id| 327b891f-1820-4aa9-bbc3-fe9cc619eac3 |
| ip_address| None |
| mac_address   | fa:16:3e:14:21:d1|
| name  |  |
| network_id| e73b1699-0129-4c12-b722-e6ce52604824 |
| option_name   | None |
| option_value  | None |
| port_security_enabled | False|
| project_id| ecf32b152563403bbde297f58f4637d4 |
| qos_policy_id | None |
| revision_number   | 19   |
| security_group_ids| bb25a73a-a62e-4015-9595-16add6b7d3a0 |
| status| ACTIVE   |
| subnet_id | None |
| tags  |  |
| trunk_details | None |
| updated_at| 2018-03-28T20:03:23Z |
+---+--+

nova show 53f04bf3-eb1f-4c64-a70f-fd16d6c1a5af
+--------------------------------------+----------+
| Property                             | Value    |
+--------------------------------------+----------+
| OS-DCF:diskConfig                    | MANUAL   |
| OS-EXT-AZ:availability_zone          | zone-r2  |
| OS-EXT-SRV-ATTR:host                 | r02c4b15 |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | r02c4b15 |

** Affects: nova
 Importance: Medium
 Status: Triaged


** Tags: neutron shelve

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1759924

Title:
  Port device owner isn't updated with new host availability zone during
  unshelve

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  

[Yahoo-eng-team] [Bug 1758919] Re: Static routes are not per-interface, which breaks some deployments

2018-03-26 Thread Mike Pontillo
IMHO, this should also be fixed in cloud-init. If the input netplan
contains "global" routes, the renderer (or whatever can pre-process the
Netplan before renderering) should intelligently determine which
interfaces have an on-link gateway that matches the global route, and
automatically render the route at interface scope instead of "global".

Arguably, if the route's gateway address doesn't match an on-link
prefix, it should not be installed anyway (the kernel will reject it
unless the `onlink` flag is supplied, which instructs the kernel
to assume the address is on-link even if it doesn't appear to be). But
the only useful scenario I can see for supporting the `onlink` flag is
if we're installing a route on an interface that will get its IP address
via DHCP.

** Also affects: cloud-init
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1758919

Title:
  Static routes are not per-interface, which breaks some deployments

Status in cloud-init:
  New
Status in MAAS:
  In Progress
Status in MAAS 2.3 series:
  Triaged

Bug description:
  When juju tries to deploy an lxd container on a maas-managed machine,
  it loses all static routes, because ifdown/ifup is issued and e/n/i
  has no saved data about the original state.

  Machine with no lxd container deployed:
  root@4-compute-4:~# ip r
  default via 100.68.4.254 dev bond2 onlink 
  100.68.4.0/24 dev bond2  proto kernel  scope link  src 100.68.4.1 
  100.68.5.0/24 via 100.68.4.254 dev bond2 
  100.68.6.0/24 via 100.68.4.254 dev bond2 
  100.84.4.0/24 dev bond1  proto kernel  scope link  src 100.84.4.2 
  100.84.5.0/24 via 100.84.4.254 dev bond1 
  100.84.6.0/24 via 100.84.4.254 dev bond1 
  100.99.4.0/24 dev bond0  proto kernel  scope link  src 100.99.4.101 
  100.99.5.0/24 via 100.99.4.254 dev bond0 
  100.99.6.0/24 via 100.99.4.254 dev bond0 
  100.107.0.0/24 via 100.99.4.254 dev bond0 

  After juju deploys a container, routes are disappearing:
  root@4-management-1:~# ip r
  default via 100.68.100.254 dev bond2 onlink 
  10.177.144.0/24 dev lxdbr0  proto kernel  scope link  src 10.177.144.1 
  100.68.100.0/24 dev bond2  proto kernel  scope link  src 100.68.100.26 
  100.84.4.0/24 dev br-bond1  proto kernel  scope link  src 100.84.4.1 
  100.99.4.0/24 dev br-bond0  proto kernel  scope link  src 100.99.4.3 

  After host reboot, the routes are NOT getting back in place, they are still 
gone:
  root@4-management-1:~# ip r s
  default via 100.68.100.254 dev bond2 onlink 
  100.68.100.0/24 dev bond2  proto kernel  scope link  src 100.68.100.26 
  100.84.4.0/24 dev br-bond1  proto kernel  scope link  src 100.84.4.1 
  100.84.5.0/24 via 100.84.4.254 dev br-bond1 
  100.84.6.0/24 via 100.84.4.254 dev br-bond1 
  100.99.4.0/24 dev br-bond0  proto kernel  scope link  src 100.99.4.3

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1758919/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1667863] Re: if a subnet has multiple static routes, the network interfaces file is invalid

2018-03-23 Thread Mike Pontillo
Adding cloud-init. This looks like an issue with how the netplan gets
rendered.

** Also affects: cloud-init
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1667863

Title:
  if a subnet has multiple static routes, the network interfaces file is
  invalid

Status in cloud-init:
  New
Status in curtin:
  New
Status in MAAS:
  Incomplete

Bug description:
  I have multiple subnets, each has an additional custom static route.

  those subnets are used by different bridges on the same node.

  example:
  brAdm (on interface enp9s0) - subnet 172.30.72.128/25 - static route 
172.30.72.0/21 gw 172.30.72.129
  brPublic (on interface ens9.2002) - subnet 172.30.80.128/25 - static route 
172.30.80.0/21 gw 172.30.80.129

  the resulting pre-up and post-up lines in /etc/network/interfaces are
  malformed, which then creates the wrong routing table.

  It seems the pre-down of one route and the post-up of the next route
  are not separated by a newline.

  See below:

  post-up route add -net 172.30.80.0 netmask 255.255.248.0 gw 172.30.80.129 
metric 0 || true
  pre-down route del -net 172.30.80.0 netmask 255.255.248.0 gw 172.30.80.129 
metric 0 || truepost-up route add -net 172.30.72.0 netmask 255.255.248.0 gw 
172.30.72.129 metric 0 || true
  pre-down route del -net 172.30.72.0 netmask 255.255.248.0 gw 172.30.72.129 
metric 0 || true

  
  Here's the entire resulting network configuration for reference.
  Note that a bunch of other bridge interfaces are created, but not used on
  this machine, so not configured.

  
  cat /etc/network/interfaces
  auto lo
  iface lo inet loopback
  dns-nameservers 172.30.72.130
  dns-search r16maas.os maas

  auto enp9s0
  iface enp9s0 inet manual
  mtu 9000

  auto ens9
  iface ens9 inet manual
  mtu 9000

  auto brAdm
  iface brAdm inet static
  address 172.30.72.132/25
  hwaddress ether 08:9e:01:ab:fc:f6
  bridge_ports enp9s0
  bridge_fd 15
  mtu 9000

  auto brData
  iface brData inet manual
  hwaddress ether 00:02:c9:ce:7c:16
  bridge_ports ens9.0
  bridge_fd 15
  mtu 9000

  auto brExt
  iface brExt inet manual
  hwaddress ether 00:02:c9:ce:7c:16
  bridge_ports ens9.0
  bridge_fd 15
  mtu 9000

  auto brInt
  iface brInt inet manual
  hwaddress ether 00:02:c9:ce:7c:16
  bridge_ports ens9.0
  bridge_fd 15
  mtu 9000

  auto brPublic
  iface brPublic inet static
  address 172.30.80.132/25
  gateway 172.30.80.129
  hwaddress ether 00:02:c9:ce:7c:16
  bridge_ports ens9.0
  bridge_fd 15
  mtu 9000

  auto brStoClu
  iface brStoClu inet manual
  hwaddress ether 00:02:c9:ce:7c:16
  bridge_ports ens9.0
  bridge_fd 15
  mtu 9000

  auto brStoData
  iface brStoData inet manual
  hwaddress ether 00:02:c9:ce:7c:16
  bridge_ports ens9.0
  bridge_fd 15
  mtu 9000

  auto brAdm.52
  iface brAdm.52 inet manual
  vlan_id 52
  mtu 1500
  vlan-raw-device brAdm

  auto ens9.0
  iface ens9.0 inet manual
  mtu 9000
  vlan-raw-device ens9
  post-up route add -net 172.30.80.0 netmask 255.255.248.0 gw 172.30.80.129 
metric 0 || true
  pre-down route del -net 172.30.80.0 netmask 255.255.248.0 gw 172.30.80.129 
metric 0 || truepost-up route add -net 172.30.72.0 netmask 255.255.248.0 gw 
172.30.72.129 metric 0 || true
  pre-down route del -net 172.30.72.0 netmask 255.255.248.0 gw 172.30.72.129 
metric 0 || true
  source /etc/network/interfaces.d/*.cfg

  
  route
  Kernel IP routing table
  Destination Gateway Genmask Flags Metric RefUse Iface
  172.30.72.128   *   255.255.255.128 U 0  00 brAdm
  172.30.80.128   *   255.255.255.128 U 0  00 
brPublic

  
  
  ifconfig
  brAdm Link encap:Ethernet  HWaddr 08:9e:01:ab:fc:f6
inet addr:172.30.72.132  Bcast:172.30.72.255  Mask:255.255.255.128
inet6 addr: fe80::a9e:1ff:feab:fcf6/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
RX packets:15029 errors:0 dropped:0 overruns:0 frame:0
TX packets:1447 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:7393978 (7.3 MB)  TX bytes:182411 (182.4 KB)

  brAdm.52  Link encap:Ethernet  HWaddr 08:9e:01:ab:fc:f6
inet6 addr: fe80::a9e:1ff:feab:fcf6/64 Scope:Link
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
RX packets:7885 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:398943 (398.9 KB)  TX bytes:488 (488.0 B)

  brDataLink encap:Ethernet  HWaddr 00:02:c9:ce:7c:16
  

[Yahoo-eng-team] [Bug 1750884] Re: [2.4, bionic] /etc/resolv.conf not configured correctly in Bionic, leads to no DNS resolution

2018-03-08 Thread Mike Pontillo
** Changed in: maas
   Status: Triaged => Won't Fix

** Changed in: maas
 Assignee: Mike Pontillo (mpontillo) => (unassigned)

** Changed in: maas
Milestone: 2.4.0alpha2 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1750884

Title:
  [2.4, bionic] /etc/resolv.conf not configured correctly in Bionic,
  leads to no DNS resolution

Status in cloud-init:
  New
Status in MAAS:
  Triaged
Status in nplan package in Ubuntu:
  New
Status in systemd package in Ubuntu:
  New

Bug description:
  When deploying Bionic, /etc/resolv.conf is not configured correctly,
  which leads to no DNS resolution. In the output below, you will see
  that the netplan config correctly points to the 10.90.90.1 nameserver,
  but resolv.conf points to a local address instead.

  Resolv.conf should really be configured to use the provided DNS
  server(s). Regardless, DNS resolution does not work through the local
  address.

  Bionic
  --

  ubuntu@node01:~$ cat /etc/netplan/50-cloud-init.yaml
  # This file is generated from information provided by
  # the datasource.  Changes to it will not persist across an instance.
  # To disable cloud-init's network configuration capabilities, write a file
  # /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
  # network: {config: disabled}
  network:
  version: 2
  ethernets:
  enp0s25:
  match:
  macaddress: b8:ae:ed:7d:17:d2
  mtu: 1500
  nameservers:
  addresses:
  - 10.90.90.1
  search:
  - maaslab
  - maas
  set-name: enp0s25
  bridges:
  br0:
  addresses:
  - 10.90.90.3/24
  gateway4: 10.90.90.1
  interfaces:
  - enp0s25
  parameters:
  forward-delay: 15
  stp: false
  ubuntu@node01:~$ cat /etc/resolv.conf
  # This file is managed by man:systemd-resolved(8). Do not edit.
  #
  # 127.0.0.53 is the systemd-resolved stub resolver.
  # run "systemd-resolve --status" to see details about the actual nameservers.
  nameserver 127.0.0.53

  search maaslab maas
  ubuntu@node01:~$ ping google.com
  ping: google.com: Temporary failure in name resolution

  [...]

  ubuntu@node01:~$ sudo vim /etc/resolv.conf
  ubuntu@node01:~$ cat /etc/resolv.conf
  # This file is managed by man:systemd-resolved(8). Do not edit.
  #
  # 127.0.0.53 is the systemd-resolved stub resolver.
  # run "systemd-resolve --status" to see details about the actual nameservers.
  nameserver 10.90.90.1

  search maaslab maas
  ubuntu@node01:~$ ping google.com
  PING google.com (172.217.0.174) 56(84) bytes of data.
  64 bytes from mia09s16-in-f14.1e100.net (172.217.0.174): icmp_seq=1 ttl=52 
time=4.46 ms
  64 bytes from mia09s16-in-f14.1e100.net (172.217.0.174): icmp_seq=2 ttl=52 
time=4.38 ms

  =
  Xenial
  ==

  ubuntu@node05:~$ cat /etc/network/interfaces.d/50-cloud-init.cfg
  # This file is generated from information provided by
  # the datasource.  Changes to it will not persist across an instance.
  # To disable cloud-init's network configuration capabilities, write a file
  # /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
  # network: {config: disabled}
  auto lo
  iface lo inet loopback
  dns-nameservers 10.90.90.1
  dns-search maaslab maas

  auto enp0s25
  iface enp0s25 inet static
  address 10.90.90.162/24
  gateway 10.90.90.1
  mtu 1500
  ubuntu@node05:~$ cat /etc/resolv.conf
  # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
  # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
  nameserver 10.90.90.1
  search maaslab maas

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1750884/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1749323] [NEW] Deleting the first of multiple HA routers munges HA network

2018-02-13 Thread Mike Lowe
Public bug reported:

Pike release, linuxbridge:

Deleting one HA router breaks all subsequent routers by munging the HA
network that keepalived uses.

Neutron server has these errors:
DEBUG neutron.plugins.ml2.managers [req-224baca3-954d-4daa-8ae8-3dac3aa66931 - 
- - - -] Network 82c3bca0-d04e-460a-993f-d95d3665c9d6 has no segments 
_extend_network_dict_provider 
/usr/lib/python2.7/site-packages/neutron/plugins/ml2/managers.py:165

Example of how to reproduce:

:~# openstack router create --project 370ed91835cb4a90aa0830060ccf0a88 router
+-+--+
| Field   | Value|
+-+--+
| admin_state_up  | UP   |
| availability_zone_hints |  |
| availability_zones  |  |
| created_at  | 2018-02-13T22:18:31Z |
| description |  |
| distributed | False|
| external_gateway_info   | None |
| flavor_id   | None |
| ha  | True |
| id  | 1ded1e23-fcad-40f4-9369-422cbe5fa7ed |
| name| router   |
| project_id  | 370ed91835cb4a90aa0830060ccf0a88 |
| revision_number | None |
| routes  |  |
| status  | ACTIVE   |
| tags|  |
| updated_at  | 2018-02-13T22:18:31Z |
+-+--+
:~# openstack router create --project 370ed91835cb4a90aa0830060ccf0a88 
router-deleteme
+-+--+
| Field   | Value|
+-+--+
| admin_state_up  | UP   |
| availability_zone_hints |  |
| availability_zones  |  |
| created_at  | 2018-02-13T22:19:08Z |
| description |  |
| distributed | False|
| external_gateway_info   | None |
| flavor_id   | None |
| ha  | True |
| id  | fc9d0b95-081a-4fce-9acb-0a9f8040e444 |
| name| router-deleteme  |
| project_id  | 370ed91835cb4a90aa0830060ccf0a88 |
| revision_number | None |
| routes  |  |
| status  | ACTIVE   |
| tags|  |
| updated_at  | 2018-02-13T22:19:08Z |
+-+--+
:~# openstack network show 22a84484-7b34-4a16-bba7-f5e6861198bd
+---++
| Field | Value 
 |
+---++
| admin_state_up| UP
 |
| availability_zone_hints   |   
 |
| availability_zones| nova  
 |
| created_at| 2018-02-13T22:18:30Z  
 |
| description   |   
 |
| dns_domain|   
 |
| id| 22a84484-7b34-4a16-bba7-f5e6861198bd  
 |
| ipv4_address_scope| None  
 |
| ipv6_address_scope| None  
 |
| is_default| None  
 |
| is_vlan_transparent   | None  
 |
| mtu   | 9000  
 |
| name  | HA network tenant 
370ed91835cb4a90aa0830060ccf0a88 |
| port_security_enabled | False 
 |
| project_id| 

[Yahoo-eng-team] [Bug 1746605] [NEW] stack trace when sdc:* not defined

2018-01-31 Thread Mike Gerdts
Public bug reported:

I'm seeing the following while trying to read meta-data from SmartOS.

2018-01-31 21:36:03,554 - DataSourceSmartOS.py[DEBUG]: Writing "V2 29 459961e2 
d133c055 GET c2RjOnJvdXRlcw==
" to metadata transport.
2018-01-31 21:36:03,995 - DataSourceSmartOS.py[DEBUG]: Read "aV2 21 0e6e7ec8 
d133c055 SUCCESS W10=" from metadata transport.
2018-01-31 21:36:03,996 - handlers.py[DEBUG]: finish: 
init-local/search-SmartOS: FAIL: no local data found from DataSourceSmartOS
2018-01-31 21:36:03,996 - util.py[WARNING]: Getting data from  failed
2018-01-31 21:36:03,996 - util.py[DEBUG]: Getting data from  failed
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/cloudinit/sources/__init__.py", line 
434, in find_source
if s.get_data():
  File "/usr/lib/python2.7/site-packages/cloudinit/sources/__init__.py", line 
121, in get_data
return_value = self._get_data()
  File 
"/usr/lib/python2.7/site-packages/cloudinit/sources/DataSourceSmartOS.py", line 
237, in _get_data
md[ci_noun] = self.md_client.get_json(smartos_noun)
  File 
"/usr/lib/python2.7/site-packages/cloudinit/sources/DataSourceSmartOS.py", line 
406, in get_json
result = self.get(key, default=default)
  File 
"/usr/lib/python2.7/site-packages/cloudinit/sources/DataSourceSmartOS.py", line 
559, in get
val = self._get(key, strip=False, default=mdefault)
  File 
"/usr/lib/python2.7/site-packages/cloudinit/sources/DataSourceSmartOS.py", line 
544, in _get
get(key, default=default, strip=strip))
  File 
"/usr/lib/python2.7/site-packages/cloudinit/sources/DataSourceSmartOS.py", line 
398, in get
result = self.request(rtype='GET', param=key)
  File 
"/usr/lib/python2.7/site-packages/cloudinit/sources/DataSourceSmartOS.py", line 
394, in request
value = self._get_value_from_frame(request_id, response)
  File 
"/usr/lib/python2.7/site-packages/cloudinit/sources/DataSourceSmartOS.py", line 
342, in _get_value_from_frame
frame_data = self.line_regex.match(frame).groupdict()
AttributeError: 'NoneType' object has no attribute 'groupdict'
2018-01-31 21:36:04,004 - main.py[DEBUG]: No local datasource found

[root@7180e700-3cba-cb89-eb82-ff14a51a62b2 ~]# echo c2RjOnJvdXRlcw== | base64 
-d; echo 
sdc:routes
[root@7180e700-3cba-cb89-eb82-ff14a51a62b2 ~]# mdata-get sdc:routes
[]

This seems to cause DataSourceSmartOS to fail completely, then it goes
on to time out on EC2 and CloudStack.

This is using my own build with this changeset at HEAD.

$ git log -n 1
commit f7deaf15acf382d62554e2b1d70daa9a9109d542
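
For illustration only, here is a minimal sketch of the kind of defensive
check that would turn the bare AttributeError into something actionable.
The regex shape and the helper name are assumptions made for this sketch,
not the actual cloud-init code:

  import re

  # Assumed pattern shape; the real one lives in DataSourceSmartOS.py.
  line_regex = re.compile(
      r'V2 (?P<length>\d+) (?P<crc>\S+) (?P<request_id>\S+)'
      r' (?P<status>SUCCESS|NOTFOUND|FAILURE)( (?P<payload>.+))?')

  def value_from_frame(expected_request_id, frame):
      match = line_regex.match(frame)
      if match is None:
          # A descriptive error (or a None return) is easier to debug than
          # "'NoneType' object has no attribute 'groupdict'".
          raise ValueError('unparseable metadata frame: %r' % frame)
      frame_data = match.groupdict()
      if frame_data['request_id'] != expected_request_id:
          raise ValueError('request id mismatch in frame: %r' % frame)
      return frame_data.get('payload')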

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1746605

Title:
  stack trace when sdc:* not defined

Status in cloud-init:
  New

Bug description:
  I'm seeing the following while trying to read meta-data from SmartOS.

  2018-01-31 21:36:03,554 - DataSourceSmartOS.py[DEBUG]: Writing "V2 29 
459961e2 d133c055 GET c2RjOnJvdXRlcw==
  " to metadata transport.
  2018-01-31 21:36:03,995 - DataSourceSmartOS.py[DEBUG]: Read "aV2 21 0e6e7ec8 
d133c055 SUCCESS W10=" from metadata transport.
  2018-01-31 21:36:03,996 - handlers.py[DEBUG]: finish: 
init-local/search-SmartOS: FAIL: no local data found from DataSourceSmartOS
  2018-01-31 21:36:03,996 - util.py[WARNING]: Getting data from  failed
  2018-01-31 21:36:03,996 - util.py[DEBUG]: Getting data from  failed
  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/cloudinit/sources/__init__.py", line 
434, in find_source
  if s.get_data():
File "/usr/lib/python2.7/site-packages/cloudinit/sources/__init__.py", line 
121, in get_data
  return_value = self._get_data()
File 
"/usr/lib/python2.7/site-packages/cloudinit/sources/DataSourceSmartOS.py", line 
237, in _get_data
  md[ci_noun] = self.md_client.get_json(smartos_noun)
File 
"/usr/lib/python2.7/site-packages/cloudinit/sources/DataSourceSmartOS.py", line 
406, in get_json
  result = self.get(key, default=default)
File 
"/usr/lib/python2.7/site-packages/cloudinit/sources/DataSourceSmartOS.py", line 
559, in get
  val = self._get(key, strip=False, default=mdefault)
File 
"/usr/lib/python2.7/site-packages/cloudinit/sources/DataSourceSmartOS.py", line 
544, in _get
  get(key, default=default, strip=strip))
File 
"/usr/lib/python2.7/site-packages/cloudinit/sources/DataSourceSmartOS.py", line 
398, in get
  result = self.request(rtype='GET', param=key)
File 
"/usr/lib/python2.7/site-packages/cloudinit/sources/DataSourceSmartOS.py", line 
394, in request
  value = self._get_value_from_frame(request_id, response)
File 
"/usr/lib/python2.7/site-packages/cloudinit/sources/DataSourceSmartOS.py", line 
342, in _get_value_from_frame
  frame_data = self.line_regex.match(frame).groupdict()
  AttributeError: 'NoneType' object has no attribute 

[Yahoo-eng-team] [Bug 1735950] Re: ValueError: Old and New apt format defined with unequal values True vs False @ apt_preserve_sources_list

2017-12-05 Thread Mike Pontillo
Thanks for the clarification.

** Changed in: maas
   Status: Incomplete => Triaged

** Changed in: maas
   Importance: Undecided => Critical

** Also affects: maas/2.3
   Importance: Undecided
   Status: New

** Changed in: maas/2.3
   Status: New => Triaged

** Changed in: maas/2.3
   Importance: Undecided => Critical

** Changed in: maas/2.3
Milestone: None => 2.3.x

** Changed in: maas
Milestone: None => 2.4.0alpha1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1735950

Title:
  ValueError: Old and New apt format defined with unequal values True vs
  False @ apt_preserve_sources_list

Status in cloud-init:
  Incomplete
Status in MAAS:
  Triaged
Status in MAAS 2.3 series:
  Triaged

Bug description:
  All nodes have these same failed events:

  Node post-installation failure - 'cloudinit' running modules for
  config

  Node post-installation failure - 'cloudinit' running config-apt-
  configure with frequency once-per-instance

  
  Experiencing odd issues with the squid proxy not being reachable.

  From a deployed node that had the event errors.

  $ sudo cat /var/log/cloud-init.log | http://paste.ubuntu.com/26098787/
  $ sudo cat /var/log/cloud-init-output.log | http://paste.ubuntu.com/26098802/

  ubuntu@os-util-00:~$ sudo apt install htop
  sudo: unable to resolve host os-util-00
  Reading package lists... Done
  Building dependency tree   
  Reading state information... Done
  The following NEW packages will be installed:
htop
  0 upgraded, 1 newly installed, 0 to remove and 14 not upgraded.
  Need to get 76.4 kB of archives.
  After this operation, 215 kB of additional disk space will be used.
  Err:1 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 htop 
amd64 2.0.1-1ubuntu1
Could not connect to 10.10.0.110:8000 (10.10.0.110). - connect (113: No 
route to host)
  E: Failed to fetch 
http://archive.ubuntu.com/ubuntu/pool/universe/h/htop/htop_2.0.1-1ubuntu1_amd64.deb
  Could not connect to 10.10.0.110:8000 (10.10.0.110). - connect (113: No route 
to host)

  E: Unable to fetch some archives, maybe run apt-get update or try with
  --fix-missing?


  
  Not sure if these things are related (the proxy not being reachable, and the 
node event errors), but something is not right.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1735950/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1735950] Re: ValueError: Old and New apt format defined with unequal values True vs False @ apt_preserve_sources_list

2017-12-05 Thread Mike Pontillo
Setting this to Incomplete for MAAS, since it's not clear if bad input
data from MAAS is causing this traceback in cloud-init, or if the bug is
only in cloud-init.

@jamesbeedy, can you tell us about how you have configured/customized
your apt sources in MAAS?

** Also affects: cloud-init
   Importance: Undecided
   Status: New

** Changed in: maas
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1735950

Title:
  ValueError: Old and New apt format defined with unequal values True vs
  False @ apt_preserve_sources_list

Status in cloud-init:
  New
Status in MAAS:
  Incomplete

Bug description:
  All nodes have these same failed events:

  Node post-installation failure - 'cloudinit' running modules for
  config

  Node post-installation failure - 'cloudinit' running config-apt-
  configure with frequency once-per-instance

  
  Experiencing odd issues with the squid proxy not being reachable.

  From a deployed node that had the event errors.

  $ sudo cat /var/log/cloud-init.log | http://paste.ubuntu.com/26098787/
  $ sudo cat /var/log/cloud-init-output.log | http://paste.ubuntu.com/26098802/

  ubuntu@os-util-00:~$ sudo apt install htop
  sudo: unable to resolve host os-util-00
  Reading package lists... Done
  Building dependency tree   
  Reading state information... Done
  The following NEW packages will be installed:
htop
  0 upgraded, 1 newly installed, 0 to remove and 14 not upgraded.
  Need to get 76.4 kB of archives.
  After this operation, 215 kB of additional disk space will be used.
  Err:1 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 htop 
amd64 2.0.1-1ubuntu1
Could not connect to 10.10.0.110:8000 (10.10.0.110). - connect (113: No 
route to host)
  E: Failed to fetch 
http://archive.ubuntu.com/ubuntu/pool/universe/h/htop/htop_2.0.1-1ubuntu1_amd64.deb
  Could not connect to 10.10.0.110:8000 (10.10.0.110). - connect (113: No route 
to host)

  E: Unable to fetch some archives, maybe run apt-get update or try with
  --fix-missing?


  
  Not sure if these things are related (the proxy not being reachable, and the 
node event errors), but something is not right.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1735950/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1732522] Re: [2.3, UI] IP address not listed while hardware testing

2017-11-17 Thread Mike Pontillo
** Also affects: cloud-init
   Importance: Undecided
   Status: New

** Changed in: maas
   Status: Confirmed => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1732522

Title:
  [2.3, UI] IP address not listed while hardware testing

Status in cloud-init:
  New
Status in MAAS:
  Triaged

Bug description:
  I started commissioning+hardware testing on a machine, and while the
  machine was testing (for 2hrs+) i noticed that the IP address had
  disappeared. The machine has the MAC of  00:25:90:4c:e7:9e and IP of
  192.168.0.211 from the dynamic range.

  Checking the MAAS server, I noticed that the IP/MAC was in the ARP
  table:

  andreserl@maas:/var/lib/maas/dhcp$ arp -a | grep 211
  192-168-9-211.maas (192.168.9.211) at 00:25:90:4c:e7:9e [ether] on bond-lan

  Checking the leases file has the following:
  http://pastebin.ubuntu.com/25969442/

  Then I checked a couple areas of MAAS:
   - Device discovery, the machine wasn't there.
   - Subnet details page, the machine wasn't there (e.g. as observed)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1732522/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1729636] [NEW] Inconsistent xml for libvirt disks

2017-11-02 Thread Mike Lowe
Public bug reported:

With both of these settings the xml is not consistent between root disks
and volumes when using RBD.

hw_disk_discard = unmap
disk_cachemodes = network=writeback

Root disks get this:
  
Volumes get this:
  

Both have the same type () but the
settings are not applied consistently between volumes and root disks.

nova: 15.0.7
CentOS: 7.4
libvirt: 3.2.0
Ceph: 12.2.1

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1729636

Title:
  Inconsistent xml for libvirt disks

Status in OpenStack Compute (nova):
  New

Bug description:
  With both of these settings the xml is not consistent between root
  disks and volumes when using RBD.

  hw_disk_discard = unmap
  disk_cachemodes = network=writeback

  Root disks get this:

  Volumes get this:


  Both have the same type () but the
  settings are not applied consistently between volumes and root disks.

  nova: 15.0.7
  CentOS: 7.4
  libvirt: 3.2.0
  Ceph: 12.2.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1729636/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1726434] [NEW] delete router from horizon causes critical error in neutron logs

2017-10-23 Thread Mike Manuthu
Public bug reported:

While deleting a router from the horizon interface, the following error
is observed in the neutron-agent-container.

root@infra1-neutron-agents-container-efc7805b:~# tail -45 
/var/log/neutron/neutron.log
2017-10-23 17:12:15.472 20348 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', '/openstack/venvs/neutron-16.0.1/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'kill', '-9', '20349'] create_process 
/openstack/venvs/neutron-16.0.1/lib/python2.7/site-packages/neutron/agent/linux/utils.py:92
2017-10-23 17:12:15.488 20348 CRITICAL neutron [-] Unhandled error: 
AssertionError: do not call blocking functions from the mainloop
2017-10-23 17:12:15.488 20348 ERROR neutron Traceback (most recent call last):
2017-10-23 17:12:15.488 20348 ERROR neutron   File 
"/openstack/venvs/neutron-16.0.1/bin/neutron-keepalived-state-change", line 11, 
in 
2017-10-23 17:12:15.488 20348 ERROR neutron sys.exit(main())
2017-10-23 17:12:15.488 20348 ERROR neutron   File 
"/openstack/venvs/neutron-16.0.1/lib/python2.7/site-packages/neutron/cmd/keepalived_state_change.py",
 line 19, in main
2017-10-23 17:12:15.488 20348 ERROR neutron keepalived_state_change.main()
2017-10-23 17:12:15.488 20348 ERROR neutron   File 
"/openstack/venvs/neutron-16.0.1/lib/python2.7/site-packages/neutron/agent/l3/keepalived_state_change.py",
 line 156, in main
2017-10-23 17:12:15.488 20348 ERROR neutron cfg.CONF.monitor_cidr).start()
2017-10-23 17:12:15.488 20348 ERROR neutron   File 
"/openstack/venvs/neutron-16.0.1/lib/python2.7/site-packages/neutron/agent/linux/daemon.py",
 line 253, in start
2017-10-23 17:12:15.488 20348 ERROR neutron self.run()
2017-10-23 17:12:15.488 20348 ERROR neutron   File 
"/openstack/venvs/neutron-16.0.1/lib/python2.7/site-packages/neutron/agent/l3/keepalived_state_change.py",
 line 69, in run
2017-10-23 17:12:15.488 20348 ERROR neutron for iterable in self.monitor:
2017-10-23 17:12:15.488 20348 ERROR neutron   File 
"/openstack/venvs/neutron-16.0.1/lib/python2.7/site-packages/neutron/agent/linux/async_process.py",
 line 261, in _iter_queue
2017-10-23 17:12:15.488 20348 ERROR neutron yield queue.get(block=block)
2017-10-23 17:12:15.488 20348 ERROR neutron   File 
"/openstack/venvs/neutron-16.0.1/lib/python2.7/site-packages/eventlet/queue.py",
 line 313, in get
2017-10-23 17:12:15.488 20348 ERROR neutron return waiter.wait()
2017-10-23 17:12:15.488 20348 ERROR neutron   File 
"/openstack/venvs/neutron-16.0.1/lib/python2.7/site-packages/eventlet/queue.py",
 line 141, in wait
2017-10-23 17:12:15.488 20348 ERROR neutron return get_hub().switch()
2017-10-23 17:12:15.488 20348 ERROR neutron   File 
"/openstack/venvs/neutron-16.0.1/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 line 294, in switch
2017-10-23 17:12:15.488 20348 ERROR neutron return self.greenlet.switch()
2017-10-23 17:12:15.488 20348 ERROR neutron   File 
"/openstack/venvs/neutron-16.0.1/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 line 346, in run
2017-10-23 17:12:15.488 20348 ERROR neutron self.wait(sleep_time)
2017-10-23 17:12:15.488 20348 ERROR neutron   File 
"/openstack/venvs/neutron-16.0.1/lib/python2.7/site-packages/eventlet/hubs/poll.py",
 line 85, in wait
2017-10-23 17:12:15.488 20348 ERROR neutron presult = self.do_poll(seconds)
2017-10-23 17:12:15.488 20348 ERROR neutron   File 
"/openstack/venvs/neutron-16.0.1/lib/python2.7/site-packages/eventlet/hubs/epolls.py",
 line 62, in do_poll
2017-10-23 17:12:15.488 20348 ERROR neutron return self.poll.poll(seconds)
2017-10-23 17:12:15.488 20348 ERROR neutron   File 
"/openstack/venvs/neutron-16.0.1/lib/python2.7/site-packages/neutron/agent/l3/keepalived_state_change.py",
 line 133, in handle_sigterm
2017-10-23 17:12:15.488 20348 ERROR neutron self._kill_monitor()
2017-10-23 17:12:15.488 20348 ERROR neutron   File 
"/openstack/venvs/neutron-16.0.1/lib/python2.7/site-packages/neutron/agent/l3/keepalived_state_change.py",
 line 130, in _kill_monitor
2017-10-23 17:12:15.488 20348 ERROR neutron run_as_root=True)
2017-10-23 17:12:15.488 20348 ERROR neutron   File 
"/openstack/venvs/neutron-16.0.1/lib/python2.7/site-packages/neutron/agent/linux/utils.py",
 line 223, in kill_process
2017-10-23 17:12:15.488 20348 ERROR neutron execute(['kill', '-%d' % 
signal, pid], run_as_root=run_as_root)
2017-10-23 17:12:15.488 20348 ERROR neutron   File 
"/openstack/venvs/neutron-16.0.1/lib/python2.7/site-packages/neutron/agent/linux/utils.py",
 line 131, in execute
2017-10-23 17:12:15.488 20348 ERROR neutron _stdout, _stderr = 
obj.communicate(_process_input)
2017-10-23 17:12:15.488 20348 ERROR neutron   File 
"/usr/lib/python2.7/subprocess.py", line 800, in communicate
2017-10-23 17:12:15.488 20348 ERROR neutron return self._communicate(input)
2017-10-23 17:12:15.488 20348 ERROR neutron   File 
"/usr/lib/python2.7/subprocess.py", line 1419, in _communicate
2017-10-23 17:12:15.488 20348 ERROR neutron stdout, 

[Yahoo-eng-team] [Bug 1716448] [NEW] Enable GVRP for vlan interfaces with linuxbridge agent option

2017-09-11 Thread Mike Lowe
Public bug reported:

GARP VLAN registration protocol (GVRP) exchanges network VLAN
information to allow switches to dynamically forward frames for one or
more VLANs. By enabling gvrp on vlan interfaces created by the linuxbridge
agent, operators will be able to dynamically create and destroy vlan-based
tenant networks.  No additional switch configuration or software-defined
networking is required, and this brings the features of linuxbridge more in
line with openvswitch-based clouds.  This should be enabled via an option in
the linuxbridge agent config; however, there are no serious consequences for
having it wrongly enabled.  The change required in the agent is to check the
option and, if true, append 'gvrp on' to the 'ip link add' command that
creates the vlan interface. For example, 'ip link add link bond0 name
bond0.365 type vlan id 365 gvrp on' creates a sub-interface for vlan 365 on
interface bond0 with gvrp enabled.  Adding this capability greatly simplifies
switch configuration and deployment of linuxbridge-based clouds with minimal
impact.
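
To make the scope concrete, here is a rough sketch of what the agent-side
change could look like. The option name, group, and helper function are
assumptions for illustration, not the proposed patch:

  from oslo_config import cfg

  # Hypothetical option; the real name and group would be settled in review.
  gvrp_opts = [
      cfg.BoolOpt('enable_gvrp', default=False,
                  help='Create vlan sub-interfaces with GVRP enabled.'),
  ]
  cfg.CONF.register_opts(gvrp_opts, 'LINUX_BRIDGE')

  def vlan_add_command(physical_interface, vlan_id):
      # Builds e.g. ['ip', 'link', 'add', 'link', 'bond0', 'name', 'bond0.365',
      #              'type', 'vlan', 'id', '365', 'gvrp', 'on']
      cmd = ['ip', 'link', 'add', 'link', physical_interface,
             'name', '%s.%s' % (physical_interface, vlan_id),
             'type', 'vlan', 'id', str(vlan_id)]
      if cfg.CONF.LINUX_BRIDGE.enable_gvrp:
          cmd += ['gvrp', 'on']
      return cmd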

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1716448

Title:
  Enable GVRP for vlan interfaces with linuxbridge agent option

Status in neutron:
  New

Bug description:
  GARP VLAN registration protocol (GVRP) exchanges network VLAN
  information to allow switches to dynamically forward frames for one or
  more VLANs. By enabling gvrp on vlan interfaces created by the linuxbridge
  agent, operators will be able to dynamically create and destroy vlan-based
  tenant networks.  No additional switch configuration or software-defined
  networking is required, and this brings the features of linuxbridge more
  in line with openvswitch-based clouds.  This should be enabled via an
  option in the linuxbridge agent config; however, there are no serious
  consequences for having it wrongly enabled.  The change required in the
  agent is to check the option and, if true, append 'gvrp on' to the 'ip
  link add' command that creates the vlan interface. For example, 'ip link
  add link bond0 name bond0.365 type vlan id 365 gvrp on' creates a
  sub-interface for vlan 365 on interface bond0 with gvrp enabled.  Adding
  this capability greatly simplifies switch configuration and deployment of
  linuxbridge-based clouds with minimal impact.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1716448/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1701346] [NEW] Trust mechanism is broken

2017-06-29 Thread Mike Fedosin
Public bug reported:

Because of various changes in keystoneauth1 module current trust
mechanism glance.common.trust_auth cannot create a trust and fails with
a error:

[None req-b7ac5edd-2104-4cab-b85e-ddae7c205261 admin admin] Unable to
create trust: 'NoneType' object has no attribute 'endswith' Use the
existing user token.

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1701346

Title:
  Trust mechanism is broken

Status in Glance:
  New

Bug description:
  Because of various changes in keystoneauth1 module current trust
  mechanism glance.common.trust_auth cannot create a trust and fails
  with a error:

  [None req-b7ac5edd-2104-4cab-b85e-ddae7c205261 admin admin] Unable to
  create trust: 'NoneType' object has no attribute 'endswith' Use the
  existing user token.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1701346/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1700117] [NEW] Linuxbridge agent in ocata no longer sets mtu for tap interfaces

2017-06-23 Thread Mike Lowe
Public bug reported:

After the upgrade from Newton to Ocata, tap interfaces that used to come up
with mtu 9000 now come up with mtu 1500.  The settings for global_physnet_mtu and
path_mtu have remained unchanged at 9050.  I no longer have lines like
this in the logs: Running command (rootwrap daemon): ['ip', 'link',
'set', 'tap028da37a-63', 'mtu', '9000'] execute_rootwrap_daemon

The networks in question are reported as mtu 9000 from `openstack
network show`.
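
As a temporary workaround until the regression is tracked down, the MTU can
be forced from outside the agent; a small sketch follows (the tap name is
just the one quoted from the old log line above):

  import subprocess

  def force_tap_mtu(device, mtu=9000):
      # Same effect as the 'ip link set <tap> mtu 9000' the Newton agent issued.
      subprocess.check_call(['ip', 'link', 'set', device, 'mtu', str(mtu)])

  force_tap_mtu('tap028da37a-63')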

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1700117

Title:
  Linuxbridge agent in ocata no longer sets mtu for tap interfaces

Status in neutron:
  New

Bug description:
  After the upgrade from Newton to Ocata, tap interfaces that used to come
  up with mtu 9000 now come up with mtu 1500.  The settings for global_physnet_mtu
  and path_mtu have remained unchanged at 9050.  I no longer have lines
  like this in the logs: Running command (rootwrap daemon): ['ip',
  'link', 'set', 'tap028da37a-63', 'mtu', '9000']
  execute_rootwrap_daemon

  The networks in question are reported as mtu 9000 from `openstack
  network show`.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1700117/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1612433] Re: neutron-db-manage autogenerate is generating empty upgrades

2017-03-30 Thread Mike Kolesnik
This does not seem to work properly in subprojects.

$ neutron-db-manage --subproject networking-odl revision -m "Add journal 
dependency table" --autogenerate 
  Running revision for networking-odl ...
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.autogenerate.compare] Detected removed table 
u'sfc_service_function_params'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'cisco_firewall_associations'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'sfc_port_pair_groups'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'bgpvpn_router_associations'
INFO  [alembic.autogenerate.compare] Detected removed table u'lbaas_l7rules'
INFO  [alembic.autogenerate.compare] Detected removed table u'sfc_path_nodes'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'sfc_path_port_associations'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'sfc_chain_group_associations'
INFO  [alembic.autogenerate.compare] Detected removed table u'physical_locators'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'l2gw_alembic_version'
INFO  [alembic.autogenerate.compare] Detected removed table u'lbaas_l7policies'
INFO  [alembic.autogenerate.compare] Detected removed table u'firewall_rules_v2'
INFO  [alembic.autogenerate.compare] Detected removed table u'sfc_port_pairs'
INFO  [alembic.autogenerate.compare] Detected removed table u'alembic_version'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'firewall_groups_v2'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'sfc_uuid_intid_associations'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'l2gatewayinterfaces'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'alembic_version_lbaas'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'sfc_flow_classifiers'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'alembic_version_fwaas'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'firewall_group_port_associations_v2'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'opendaylightjournal'
INFO  [alembic.autogenerate.compare] Detected removed table u'l2gatewaydevices'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'lbaas_sessionpersistences'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'lbaas_loadbalanceragentbindings'
INFO  [alembic.autogenerate.compare] Detected removed table u'lbaas_members'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'opendaylight_maintenance'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'lbaas_loadbalancers'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'firewall_policy_rule_associations_v2'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'sfc_port_chain_parameters'
INFO  [alembic.autogenerate.compare] Detected removed table u'physical_ports'
INFO  [alembic.autogenerate.compare] Detected removed table u'lbaas_listeners'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'pending_ucast_macs_remotes'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'alembic_version_bgpvpn'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'bgpvpn_network_associations'
INFO  [alembic.autogenerate.compare] Detected removed table u'bgpvpns'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'sfc_port_pair_group_params'
INFO  [alembic.autogenerate.compare] Detected removed table u'lbaas_pools'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'sfc_flow_classifier_l7_parameters'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'lbaas_healthmonitors'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'sfc_chain_classifier_associations'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'firewall_router_associations'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'sfc_portpair_details'
INFO  [alembic.autogenerate.compare] Detected removed table u'sfc_port_chains'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'lbaas_loadbalancer_statistics'
INFO  [alembic.autogenerate.compare] Detected removed table u'l2gateways'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'firewall_policies_v2'
INFO  [alembic.autogenerate.compare] Detected removed table u'ucast_macs_locals'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'l2gatewayconnections'
INFO  [alembic.autogenerate.compare] Detected removed table u'lbaas_sni'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'alembic_version_sfc'
INFO  [alembic.autogenerate.compare] Detected removed table 
u'ucast_macs_remotes'
INFO  [alembic.autogenerate.compare] Detected removed table u'logical_switches'
INFO  [alembic.autogenerate.compare] Detected removed table u'physical_switches'
INFO  [alembic.autogenerate.compare] Detected removed 

[Yahoo-eng-team] [Bug 1546910] Re: args pass to securitygroup precommit event should include the complete info

2017-01-08 Thread Mike Kolesnik
** Also affects: networking-odl/3.0-newton
   Importance: Undecided
   Status: New

** Changed in: networking-odl/3.0-newton
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1546910

Title:
  args pass to securitygroup precommit event should include the complete
  info

Status in networking-odl:
  In Progress
Status in networking-odl 3.0-newton series:
  New
Status in neutron:
  In Progress

Bug description:
  We introduced the PRECOMMIT_XXX events, but in securitygroups_db.py the
  kwargs passed to them do not include the complete DB info the way the
  AFTER_XXX events do. For example, the id of the newly created sg/rule is
  missing.
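
  As an illustration of what "complete info" could mean here, a sketch of a
  notification that carries the freshly created dict, including its id (the
  trigger string and kwargs are assumptions, not the agreed interface):

    from neutron.callbacks import events
    from neutron.callbacks import registry
    from neutron.callbacks import resources

    def notify_sg_precommit_create(context, sg_dict):
        # Passing the fully populated dict (with the generated id) would give
        # PRECOMMIT_CREATE subscribers the same view AFTER_CREATE already gets.
        registry.notify(resources.SECURITY_GROUP, events.PRECOMMIT_CREATE,
                        'security_group.create', context=context,
                        security_group=sg_dict,
                        security_group_id=sg_dict['id'])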

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-odl/+bug/1546910/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1653830] [NEW] Security group filters for all ports are refreshed on any DHCP port change

2017-01-03 Thread Mike Dorman
Public bug reported:

Whenever any change is made to a DHCP agent port, a refresh of all
security group filters for all ports on that network is triggered.  This
is unnecessary as all instance ports automatically get a blanket allow
rule for DHCP port numbers.  So changes to DHCP ports in no way require
updates to any filters.

For networks with a large number of ports, this also generates
significant load against neutron-server and the backend database.

Steps to reproduce:

- Network with some number of instance ports
- Add or remove a DHCP agent from that network (constitutes a change of DHCP 
ports)
- A refresh for all ports on that network is triggered

See:
https://github.com/openstack/neutron/blob/master/neutron/db/securitygroups_rpc_base.py#L138-L140

We experience this issue in Liberty, and it's still present in master.
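
A rough sketch of the kind of guard being proposed (the helper name is
invented; the real change would be against the notifier code linked above):

  from neutron_lib import constants

  def ports_to_refresh(ports):
      # DHCP ports are already covered by the blanket DHCP allow rule, so a
      # change to them should not fan out into a security group filter
      # refresh for every port on the network.
      return [port for port in ports
              if port.get('device_owner') != constants.DEVICE_OWNER_DHCP]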

** Affects: neutron
 Importance: Undecided
 Assignee: Mike Dorman (mdorman-m)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1653830

Title:
  Security group filters for all ports are refreshed on any DHCP port
  change

Status in neutron:
  In Progress

Bug description:
  Whenever any change is made to a DHCP agent port, a refresh of all
  security group filters for all ports on that network is triggered.
  This is unnecessary as all instance ports automatically get a blanket
  allow rule for DHCP port numbers.  So changes to DHCP ports in no way
  require updates to any filters.

  For networks with a large number of ports, this also generates
  significant load against neutron-server and the backend database.

  Steps to reproduce:

  - Network with some number of instance ports
  - Add or remove a DHCP agent from that network (constitutes a change of DHCP 
ports)
  - A refresh for all ports on that network is triggered

  See:
  
https://github.com/openstack/neutron/blob/master/neutron/db/securitygroups_rpc_base.py#L138-L140

  We experience this issue in Liberty, and it's still present in master.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1653830/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1644263] [NEW] passlib 1.7.0 deprecates sha512_crypt.encrypt()

2016-11-23 Thread Mike Bayer
Public bug reported:

tests are failing due to a new deprecation warning:

Captured traceback:
~~~
Traceback (most recent call last):
  File "keystone/tests/unit/test_backend_sql.py", line 59, in setUp
self.load_fixtures(default_fixtures)
  File "keystone/tests/unit/core.py", line 754, in load_fixtures
user_copy = self.identity_api.create_user(user_copy)
  File "keystone/common/manager.py", line 123, in wrapped
__ret_val = __f(*args, **kwargs)
  File "keystone/identity/core.py", line 410, in wrapper
return f(self, *args, **kwargs)
  File "keystone/identity/core.py", line 420, in wrapper
return f(self, *args, **kwargs)
  File "keystone/identity/core.py", line 925, in create_user
ref = driver.create_user(user['id'], user)
  File "keystone/common/sql/core.py", line 429, in wrapper
return method(*args, **kwargs)
  File "keystone/identity/backends/sql.py", line 121, in create_user
user = utils.hash_user_password(user)
  File "keystone/common/utils.py", line 129, in hash_user_password
return dict(user, password=hash_password(password))
  File "keystone/common/utils.py", line 136, in hash_password
password_utf8, rounds=CONF.crypt_strength)
  File 
"/var/lib/jenkins/workspace/openstack_gerrit/keystone/.tox/sqla_py27/lib/python2.7/site-packages/passlib/utils/decor.py",
 line 190, in wrapper
warn(msg % tmp, DeprecationWarning, stacklevel=2)
DeprecationWarning: the method 
passlib.handlers.sha2_crypt.sha512_crypt.encrypt() is deprecated as of Passlib 
1.7, and will be removed in Passlib 2.0, use .hash() instead.
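
For reference, a quick sketch of the non-deprecated spelling (the rounds
value is illustrative; keystone takes it from CONF.crypt_strength):

  from passlib.hash import sha512_crypt

  password_utf8 = u'super secret'.encode('utf-8')

  # Deprecated since passlib 1.7:
  #   hashed = sha512_crypt.encrypt(password_utf8, rounds=10000)
  # Preferred spelling; .using() binds the rounds, .hash() replaces .encrypt():
  hashed = sha512_crypt.using(rounds=10000).hash(password_utf8)
  assert sha512_crypt.verify(password_utf8, hashed)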

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1644263

Title:
  passlib 1.7.0 deprecates sha512_crypt.encrypt()

Status in OpenStack Identity (keystone):
  New

Bug description:
  tests are failing due to a new deprecation warning:

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "keystone/tests/unit/test_backend_sql.py", line 59, in setUp
  self.load_fixtures(default_fixtures)
File "keystone/tests/unit/core.py", line 754, in load_fixtures
  user_copy = self.identity_api.create_user(user_copy)
File "keystone/common/manager.py", line 123, in wrapped
  __ret_val = __f(*args, **kwargs)
File "keystone/identity/core.py", line 410, in wrapper
  return f(self, *args, **kwargs)
File "keystone/identity/core.py", line 420, in wrapper
  return f(self, *args, **kwargs)
File "keystone/identity/core.py", line 925, in create_user
  ref = driver.create_user(user['id'], user)
File "keystone/common/sql/core.py", line 429, in wrapper
  return method(*args, **kwargs)
File "keystone/identity/backends/sql.py", line 121, in create_user
  user = utils.hash_user_password(user)
File "keystone/common/utils.py", line 129, in hash_user_password
  return dict(user, password=hash_password(password))
File "keystone/common/utils.py", line 136, in hash_password
  password_utf8, rounds=CONF.crypt_strength)
File 
"/var/lib/jenkins/workspace/openstack_gerrit/keystone/.tox/sqla_py27/lib/python2.7/site-packages/passlib/utils/decor.py",
 line 190, in wrapper
  warn(msg % tmp, DeprecationWarning, stacklevel=2)
  DeprecationWarning: the method 
passlib.handlers.sha2_crypt.sha512_crypt.encrypt() is deprecated as of Passlib 
1.7, and will be removed in Passlib 2.0, use .hash() instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1644263/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1493448] Re: All operations are perfomed with admin priveleges when 'use_user_token' is False

2016-09-21 Thread Mike Fedosin
"use_user_token" and related glance config options were deprecated in
Mitaka: https://review.openstack.org/#/c/237742/

Bug may be closed.

** Changed in: glance
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1493448

Title:
  All operations are perfomed with admin priveleges when
  'use_user_token' is False

Status in Glance:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix
Status in OpenStack Security Notes:
  Fix Released

Bug description:
  In glance-api.conf we have a param called 'use_user_token' which is
  enabled by default. It was introduced to allow for reauthentication
  when tokens expire and prevents requests from silently failing.
  https://review.openstack.org/#/c/29967/

  Unfortunately disabling this parameter leads to security issues and
  allows a regular user to perform any operation with admin rights.

  Steps to reproduce on devstack:
  1. Change /etc/glance/glance-api.conf parameters and restart glance-api:
  # Pass the user's token through for API requests to the registry.
  # Default: True
  use_user_token = False

  # If 'use_user_token' is not in effect then admin credentials
  # can be specified. Requests to the registry on behalf of
  # the API will use these credentials.
  # Admin user name
  admin_user = glance
  # Admin password
  admin_password = nova
  # Admin tenant name
  admin_tenant_name = service
  # Keystone endpoint
  auth_url = http://127.0.0.1:5000/v2.0

  (for v2 api it's required to enable registry service, too: data_api =
  glance.db.registry.api)

  2. Create a private image with admin user:
  source openrc admin admin
  glance --os-image-api-version 1 image-create --name private --is-public False 
--disk-format qcow2 --container-format bare --file /etc/fstab
  +--+--+
  | Property | Value|
  +--+--+
  | checksum | e533283e6aac072533d1d091a7d2e413 |
  | container_format | bare |
  | created_at   | 2015-09-01T22:17:25.00   |
  | deleted  | False|
  | deleted_at   | None |
  | disk_format  | qcow2|
  | id   | e0d0bf2f-9f81-4500-ae50-7a1a0994e2f0 |
  | is_public| False|
  | min_disk | 0|
  | min_ram  | 0|
  | name | private  |
  | owner| e1cec705e33b4dfaaece11b623f3c680 |
  | protected| False|
  | size | 616  |
  | status   | active   |
  | updated_at   | 2015-09-01T22:17:27.00   |
  | virtual_size | None |
  +--+--+

  3. Check the image list with admin user:
  glance --os-image-api-version 1 image-list
  
+--+-+-+--+--++
  | ID   | Name| 
Disk Format | Container Format | Size | Status |
  
+--+-+-+--+--++
  | 4a1703e7-72d1-4fce-8b5c-5bb1ef2a5047 | cirros-0.3.4-x86_64-uec | 
ami | ami  | 25165824 | active |
  | c513f951-e1b0-4acd-8980-ae932f073039 | cirros-0.3.4-x86_64-uec-kernel  | 
aki | aki  | 4979632  | active |
  | de99e4b9-0491-4990-8b93-299377bf2c95 | cirros-0.3.4-x86_64-uec-ramdisk | 
ari | ari  | 3740163  | active |
  | e0d0bf2f-9f81-4500-ae50-7a1a0994e2f0 | private | 
qcow2   | bare | 616  | active |
  
+--+-+-+--+--++

  4. Enable demo user and get the image list:
  source openrc demo demo
  glance --os-image-api-version 1 image-list
  
+--+-+-+--+--++
  | ID   | Name| 
Disk Format | Container Format | Size | Status |
  
+--+-+-+--+--++
  | 4a1703e7-72d1-4fce-8b5c-5bb1ef2a5047 | cirros-0.3.4-x86_64-uec | 
ami | ami  | 25165824 | active |
  | 

[Yahoo-eng-team] [Bug 1623567] Re: It is possible to import package twice via plugin with enabled glance artifact repository

2016-09-14 Thread Mike Fedosin
** Changed in: fuel-plugin-murano
 Assignee: Kirill Zaitsev (kzaitsev) => Mike Fedosin (mfedosin)

** Project changed: fuel-plugin-murano => glance

** Changed in: glance
Milestone: 1.0.0 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1623567

Title:
  It is possible to import package twice via plugin with enabled glance
  artifact repository

Status in Glance:
  Confirmed

Bug description:
  Bug description:
  Currently it is possible to import any app several times via the murano CLI
  if you are using the fuel murano plugin with the glance artifact repository
  enabled.

  Steps to reproduce:
  1) deploy fuel 9.0
  2) install fuel murano plugin
  3) add 1 controller and 1 compute
  4) enable fuel murano plugin and enable glance artifact repository
  5) deploy environment
  6) ssh to the controller
  7) use "murano --murano-repo-url=http://storage.apps.openstack.org 
package-import com.example.databases.MySql" to import MySql. Use this command 
second time to import in again.

  Expected results:
  the second command should report that MySql already exists, so there will 
be only one MySql package.

  Actual results:
  MySql will be imported twice (see screenshot).

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1623567/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1581553] Re: MAAS fails w/ bad hwclock & no NTP access

2016-06-22 Thread Mike Pontillo
*** This bug is a duplicate of bug 1511589 ***
https://bugs.launchpad.net/bugs/1511589

** This bug has been marked a duplicate of bug 1511589
   maas provider, hwclock out of sync means juju will not work

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1581553

Title:
  MAAS fails w/ bad hwclock & no NTP access

Status in cloud-init:
  New
Status in curtin:
  New
Status in MAAS:
  Invalid

Bug description:
  mmm, let me try this again :)

  We have been testing maas 2.0 Beta 4 on some enablement hardware and
  have been seeing a problem in which during enlistment, a clock skew is
  reported on the host console:

  [ 288.753993] cloud-init[1464]: Success
  [ 290.910846] cloud-init[1464]: updated clock skew to 7853431
  [ 290.911437] cloud-init[1464]: request to 
http://10.246.48.112/MAAS/metadata//2012-03-01/ failed. sleeping 1.: HTTP Error 
401: OK
  [ 290.911929] cloud-init[1464]: Success
  [ 292.752177] cloud-init[1464]: updated clock skew to 7853431
  [ 292.752746] cloud-init[1464]: request to 
http://10.246.48.112/MAAS/metadata//2012-03-01/ failed. sleeping 1.: HTTP Error 
401: OK
  [ 292.753234] cloud-init[1464]: Success
  [ 337.916546] cloud-init[1464]: updated clock skew to 7853431
  [ 337.917122] cloud-init[1464]: request to http://10.246.48.112/

  This happens a number of times. And as you mentioned, this is cloud
  init fixing the clock skew.

  The enlistment will complete and we are able to successfully finish
  commissioning the Host.   The host will appear in a ready state via
  the MAAS UI.

  Now, before I go further: as mentioned earlier, I always ensure that
  both HOST AND CLIENT dates & times match in UEFI prior to starting
  enlistment. If anything, the times on all of the hosts are offset by
  +/- 2 minutes.

  Moving onto deployment:

  Deploying Xenial on these hosts is where I get stuck: because of the
  clock skew, which cloud-init corrects, tar reports timestamps approximately
  one month in the future while copying the disk image, and this eventually
  causes the deployment to fail due to a timeout (tar is still extracting
  the root image).

  Does setting the NTP host in the MAAS settings have any effect on this
  (for example, if ntp.ubuntu.com were unavailable)?

  We have been triaging this on our end, but would like some insight from
  the MAAS team.

  
  [   19.353487] cloud-init[1207]: Cloud-init v. 0.7.7 running 'init' at Thu, 
11 Feb 2016 16:28:06 +. Up 19.03 seconds.
  [   19.368566] cloud-init[1207]: ci-info: 
Net device 
info
  [   19.388533] cloud-init[1207]: ci-info: 
++---+--+-+---+---+
  [   19.408484] cloud-init[1207]: ci-info: |   Device   |   Up  |   
Address| Mask| Scope | Hw-Address|
  [   19.428484] cloud-init[1207]: ci-info: 
++---+--+-+---+---+
  [   19.80] cloud-init[1207]: ci-info: | enP2p1s0f1 | False |  
.   |  .  |   .   | 40:8d:5c:ba:b9:10 |
  [   19.464490] cloud-init[1207]: ci-info: | lo |  True |  
127.0.0.1   |  255.0.0.0  |   .   | . |
  [   19.484480] cloud-init[1207]: ci-info: | lo |  True |   
::1/128|  .  |  host | . |
  [   19.500475] cloud-init[1207]: ci-info: | enP2p1s0f3 | False |  
.   |  .  |   .   | 40:8d:5c:ba:b9:12 |
  [   19.520503] cloud-init[1207]: ci-info: | enP2p1s0f2 |  True | 
10.246.48.3  | 255.255.0.0 |   .   | 40:8d:5c:ba:b9:11 |
  [   19.536497] cloud-init[1207]: ci-info: | enP2p1s0f2 |  True | 
fe80::428d:5cff:feba:b911/64 |  .  |  link | 40:8d:5c:ba:b9:11 |
  [   19.556478] cloud-init[1207]: ci-info: 
++---+--+-+---+---+
  [   19.576514] cloud-init[1207]: ci-info: Route 
IPv4 info+
  [   19.592494] cloud-init[1207]: ci-info: 
+---+-+-+-++---+
  [   19.608504] cloud-init[1207]: ci-info: | Route | Destination |   Gateway   
|   Genmask   | Interface  | Flags |
  [   19.624535] cloud-init[1207]: ci-info: 
+---+-+-+-++---+
  [   19.640528] cloud-init[1207]: ci-info: |   0   |   0.0.0.0   | 10.246.48.1 
|   0.0.0.0   | enP2p1s0f2 |   UG  |
  [   19.656495] cloud-init[1207]: ci-info: |   1   |  10.246.0.0 |   0.0.0.0   
| 255.255.0.0 | enP2p1s0f2 |   U   |
  [   19.672513] cloud-init[1207]: ci-info: 
+---+-+-+-++---+
  [   19.688941] cloud-init[1207]: 

[Yahoo-eng-team] [Bug 1594898] [NEW] functional DB tests based on SqlFixture don't actually use non-sqlite DB

2016-06-21 Thread Mike Bayer
ionalError) no such 
function: CURDATE [SQL: u'SELECT CURDATE()']


At the end there, that's a SQLite error.  You're not supposed to get
those in the MySQL test suite :).

The problem is that the SqlFixture is calling upon
neutron.db.api.get_engine() but this is in no way associated with the
context that oslo.db creates within the MySQLOpportunisticFixture
approach.   As neutron is using enginefacade now we need to swap in the
facade that's specific to oslo_db.sqlalchemy.test_base.DbFixture and
make sure everything is linked up.

Note that this problem does not impact the alembic migration tests, as
that test suite does its own set up of alembic fixtures.

I'm working on a reorg of the test fixtures here so this can work, as we
will need these fixtures to be effective for the upcoming CIDR stored
functions / triggers to be tested.

** Affects: neutron
 Importance: Undecided
 Assignee: Mike Bayer (zzzeek)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Mike Bayer (zzzeek)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1594898

Title:
  functional DB tests based on SqlFixture don't actually use non-sqlite
  DB

Status in neutron:
  New

Bug description:
  Currently only neutron/tests/functional/db/test_ipam.py seems to use
  this fixture; however, it is not interacting correctly with oslo.db,
  so it never actually uses the engine set up by oslo.

  Add a test like this:

  diff --git a/neutron/tests/functional/db/test_ipam.py 
b/neutron/tests/functional/db/test_ipam.py
  index 0f28f74..d14bf6e 100644
  --- a/neutron/tests/functional/db/test_ipam.py
  +++ b/neutron/tests/functional/db/test_ipam.py
  @@ -156,8 +156,8 @@ class IpamTestCase(base.BaseTestCase):
   
   
   class TestIpamMySql(common_base.MySQLTestCase, IpamTestCase):
  -pass
  -
  +def test_we_are_on_mysql(self):
  +self.cxt.session.execute("SELECT CURDATE()")
   
   class TestIpamPsql(common_base.PostgreSQLTestCase, IpamTestCase):
   pass

  
  then run:

  [classic@photon2 neutron]$  tox -e functional 
neutron.tests.functional.db.test_ipam
  functional develop-inst-nodeps: /home/classic/dev/redhat/openstack/neutron
  functional installed:  ( ... output skipped ... )
  functional runtests: PYTHONHASHSEED='545881821'
  functional runtests: commands[0] | 
/home/classic/dev/redhat/openstack/neutron/tools/ostestr_compat_shim.sh 
neutron.tests.functional.db.test_ipam

  ( ... output skipped ... )

  {3} neutron.tests.functional.db.test_ipam.IpamTestCase.test_allocate_fixed_ip 
[1.510751s] ... ok
  {1} 
neutron.tests.functional.db.test_ipam.TestIpamMySql.test_allocate_fixed_ip 
[1.822431s] ... ok
  {2} 
neutron.tests.functional.db.test_ipam.IpamTestCase.test_allocate_ip_exausted_pool
 [2.468420s] ... ok
  {1} 
neutron.tests.functional.db.test_ipam.TestIpamPsql.test_allocate_ip_exausted_pool
 ... SKIPPED: backend 'postgresql' unavailable
  {0} 
neutron.tests.functional.db.test_ipam.TestIpamMySql.test_allocate_ip_exausted_pool
 [2.873318s] ... ok
  {2} neutron.tests.functional.db.test_ipam.TestIpamMySql.test_we_are_on_mysql 
[0.993651s] ... FAILED
  {0} neutron.tests.functional.db.test_ipam.TestIpamPsql.test_allocate_fixed_ip 
... SKIPPED: backend 'postgresql' unavailable
  {1} 
neutron.tests.functional.db.test_ipam.TestPluggableIpamMySql.test_allocate_fixed_ip
 [1.133034s] ... ok
  {0} 
neutron.tests.functional.db.test_ipam.TestPluggableIpamPsql.test_allocate_ip_exausted_pool
 ... SKIPPED: backend 'postgresql' unavailable
  {2} 
neutron.tests.functional.db.test_ipam.TestPluggableIpamPsql.test_allocate_fixed_ip
 ... SKIPPED: backend 'postgresql' unavailable
  {3} 
neutron.tests.functional.db.test_ipam.TestPluggableIpamMySql.test_allocate_ip_exausted_pool
 [2.740086s] ... ok

  ==
  Failed 1 tests - output below:
  ==

  neutron.tests.functional.db.test_ipam.TestIpamMySql.test_we_are_on_mysql
  

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "neutron/tests/functional/db/test_ipam.py", line 160, in 
test_we_are_on_mysql
  self.cxt.session.execute("SELECT CURDATE()")
File 
"/home/classic/dev/redhat/openstack/neutron/.tox/functional/lib/python2.7/site-packages/sqlalchemy/orm/session.py",
 line 1034, in execute
  bind, close_with_result=True).execute(clause, params or {})
File 
"/home/classic/dev/redhat/openstack/neutron/.tox/functional/lib/python2.7/site-packages/sqlalchemy/engine/base.py",
 line 914, in execute
  return meth(self, multiparams, params)
File 
"/home/classic/dev/redhat/openstack/neutron/.tox/functional/lib/python2.7/site-packages/sqlalchemy/sql/elements.py",
 line 323, in _execute_on_connection
   

[Yahoo-eng-team] [Bug 1592808] [NEW] Snapshot failed during inconsistencies in glance v2 image schema

2016-06-15 Thread Mike Fedosin
Public bug reported:

When trying to create a snapshot with Glance v2 under nodepool, the bug
appears: http://paste.openstack.org/show/516238/

It happens because in glance v1 it was possible to set an empty string
for kernel_id or ramdisk_id. In v2 this is forbidden.
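
One way to work around this on the Nova side would be to normalize the
legacy empty-string values before sending snapshot metadata to the v2 API.
The helper below is only an illustrative sketch under that assumption, not
the actual Nova patch:

    def sanitize_image_properties(properties):
        # glance v2 requires kernel_id/ramdisk_id to be a UUID or null,
        # so '' coming from old v1-created images must be translated
        # before the snapshot metadata is sent to the v2 API.
        for key in ('kernel_id', 'ramdisk_id'):
            if properties.get(key) == '':
                properties[key] = None
        return properties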

** Affects: nova
 Importance: Undecided
 Assignee: Mike Fedosin (mfedosin)
 Status: Confirmed

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
 Assignee: (unassigned) => Mike Fedosin (mfedosin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1592808

Title:
  Snapshot failed during inconsistencies in glance v2 image schema

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  When trying to create a snapshot with Glance v2 under nodepool, the bug
  appears: http://paste.openstack.org/show/516238/

  It happens because in glance v1 it was possible to set an empty string
  for kernel_id or ramdisk_id. In v2 this is forbidden.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1592808/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1587985] [NEW] Glance v2 allows to set locations if image has saving status

2016-06-01 Thread Mike Fedosin
Public bug reported:

Currently, if 'show_multiple_locations' is activated, a user can set a
custom location on an image, even if it has 'saving' or 'deactivated'
status.

Example: http://paste.openstack.org/show/506998/

In v1 this request returns 400, but imho 409 is a more appropriate
response code.

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1587985

Title:
  Glance v2 allows to set locations if image has saving status

Status in Glance:
  New

Bug description:
  Currently, if 'show_multiple_locations' is activated, a user can set a
  custom location on an image, even if it has 'saving' or 'deactivated'
  status.

  Example: http://paste.openstack.org/show/506998/

  In v1 this request returns 400, but imho 409 is a more appropriate
  response code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1587985/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1582911] [NEW] Relaxed validation for v2 doesn't accept null for user_data like legacy v2 does

2016-05-17 Thread Mike Dorman
Public bug reported:

Description
===
When moving to the relaxed validation [1] implementation of the v2 API under 
the v2.1 code base, a 'nova boot' request with "user_data": null fails with the 
error:

  Returning 400 to user: Invalid input for field/attribute user_data.
Value: None. None is not of type 'string'

Under the legacy v2 code base, such a request is allowed.


Steps to reproduce
==
Using the legacy v2 code base under Liberty, make a nova boot call using the 
following json payload:

{
  "server": {
"name": "mgdlibertyBBC",
"flavorRef": "1",
"imageRef": "626ce751-744f-4830-9d38-5e9e4f70fe3f",
"user_data": null,
"metadata": {
  "created_by": "mdorman"
},
"security_groups": [
  {
"name": "default"
  }
],
"availability_zone": "glbt1-dev-lab-zone-1,glbt1-dev-lab-zone-2,",
"key_name": "lm126135-mdorm"
  }
}

The request succeeds and the instance is created.

However, using the v2 implementation from the v2.1 code base with the
same json payload fails:

2016-05-17 12:47:02.336 18296 DEBUG nova.api.openstack.wsgi [req-
6d5d4100-7c0c-4ffa-a40c-4a086a473293 mdorman
40e94f951b704545885bdaa987a25154 - - -] Returning 400 to user: Invalid
input for field/attribute user_data. Value: None. None is not of type
'string' __call__ /usr/lib/python2.7/site-
packages/nova/api/openstack/wsgi.py:1175


Expected result
===
The behavior of the v2 API in the v2.1 code base should be exactly the same as 
the legacy v2 code base.


Actual result
=
Request fails under the v2.1 code base, but succeeds under the legacy v2 code base.


Environment
===
Liberty, 12.0.3 tag (stable/liberty branch on 4/13/2016.  Latest commit 
6fdf1c87b1149e8b395eaa9f4cbf27263cf96ac6)


Logs & Configs
==
Paste config used for legacy v2 code base (request succeeds):

[composite:osapi_compute]
use = call:nova.api.openstack.urlmap:urlmap_factory
/v1.1: openstack_compute_api_legacy_v2
/v2: openstack_compute_api_legacy_v2
/v2.1: openstack_compute_api_v21

Paste config used for v2.1 code base (request fails):

[composite:osapi_compute]
use = call:nova.api.openstack.urlmap:urlmap_factory
/: oscomputeversions
/v1.1: openstack_compute_api_v21_legacy_v2_compatible
/v2: openstack_compute_api_v21_legacy_v2_compatible
/v2.1: openstack_compute_api_v21


[1]  
http://specs.openstack.org/openstack/nova-specs/specs/liberty/implemented/api-relax-validation.html
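
For reference, a hedged sketch of what the v2.1-side JSON schema for
user_data would have to allow in order to accept null the way legacy v2
did (the exact schema location and the other constraints are assumptions
here, not the current nova definition):

    # Sketch: accept null in addition to a base64 string, mirroring
    # the behaviour of the legacy v2 code base.
    user_data_schema = {
        'type': ['string', 'null'],
        'format': 'base64',
        'maxLength': 65535,
    }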

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

  Description
  ===
  When moving to the relaxed validation [1] implementation of the v2 API under 
the v2.1 code base, a 'nova boot' request with "user_data": null fails with the 
error:
  
Returning 400 to user: Invalid input for field/attribute user_data.
  Value: None. None is not of type 'string'
  
  Under the legacy v2 code base, such a request is allowed.
  
  
  Steps to reproduce
  ==
  Using the legacy v2 code base under Liberty, make a nova boot call using the 
following json payload:
  
  {
"server": {
  "name": "mgdlibertyBBC",
  "flavorRef": "1",
  "imageRef": "626ce751-744f-4830-9d38-5e9e4f70fe3f",
  "user_data": null,
  "metadata": {
"created_by": "mdorman"
  },
  "security_groups": [
{
  "name": "default"
}
  ],
  "availability_zone": "glbt1-dev-lab-zone-1,glbt1-dev-lab-zone-2,",
  "key_name": "lm126135-mdorm"
}
  }
  
  The request succeeds and the instance is created.
  
  However, using the v2 implementation from the v2.1 code base with the
  same json payload fails:
  
  2016-05-17 12:47:02.336 18296 DEBUG nova.api.openstack.wsgi [req-
  6d5d4100-7c0c-4ffa-a40c-4a086a473293 mdorman
  40e94f951b704545885bdaa987a25154 - - -] Returning 400 to user: Invalid
  input for field/attribute user_data. Value: None. None is not of type
  'string' __call__ /usr/lib/python2.7/site-
  packages/nova/api/openstack/wsgi.py:1175
  
  
  Expected result
  ===
  The behavior of the v2 API in the v2.1 code base should be exactly the same 
as the legacy v2 code base.
  
  
  Actual result
  =
  Request fails under v2.1 code base, but succeeds under legacy v2 code base.
  
  
  Environment
  ===
- Liberty, from stable/liberty branch on 4/13/2016.  Latest commit 
6fdf1c87b1149e8b395eaa9f4cbf27263cf96ac6
+ Liberty, 12.0.3 tag (stable/liberty branch on 4/13/2016.  Latest commit 
6fdf1c87b1149e8b395eaa9f4cbf27263cf96ac6)
  
  
  Logs & Configs
  ==
  Paste config used for legacy v2 code base (request succeeds):
  
  [composite:osapi_compute]
  use = call:nova.api.openstack.urlmap:urlmap_factory
  /v1.1: openstack_compute_api_legacy_v2
  /v2: openstack_compute_api_legacy_v2
  /v2.1: openstack_compute_api_v21
  
  Paste config used for v2.1 code base (request fails):
  
  [composite:osapi_compute]
  use = call:nova.api.openstack.urlmap:urlmap_factory
  /: 

[Yahoo-eng-team] [Bug 1577960] Re: [2.0b4] After commissioning, subnet lists 'observed' IP address for machines

2016-05-07 Thread Mike Pontillo
The MAAS team discussed this today; we feel like since cloud-init is
responsible for configuring the interfaces to DHCP, and is responsible
for sending the last status message to the network before powering off
the system, cloud-init should be responsible for running "dhclient -r
<interface>" for each interface it configured for DHCP.

We would most likely want a boolean we could set in the cloud-init
configuration which would cause this behavior to occur; we would only
want to release the IP address when we're booting the system
ephemerally. It seems possible that we would inadvertently cause
regressions with some integrations if cloud-init unconditionally
released its IP addresses. What do you think?

** Also affects: cloud-init
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1577960

Title:
  [2.0b4] After commissioning, subnet lists 'observed' IP address for
  machines

Status in cloud-init:
  New
Status in MAAS:
  Incomplete

Bug description:
  I commissioned a whole bunch of machines, and after it completed, MAAS
  showed 'Machines' with 'Observed' IP addresses but machines are now
  off.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1577960/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1557495] [NEW] Possible race conditions when changing image status in v2

2016-03-15 Thread Mike Fedosin
Public bug reported:

Currently the Glance architecture (domain model) is affected by possible
race conditions during image status transitions. To eliminate this, a
parameter called 'from_state' was introduced in the 'save' method of
ImageRepo. Unfortunately it only checks the transition from
'saving' to 'active':
https://github.com/openstack/glance/blob/master/glance/api/v2/image_data.py#L117

Other cases are still not fixed, which means that an admin can
reactivate a deleted image and it will end up with status 'active'. Also,
Glance rewrites the status even if it didn't change. To fix this, it is
suggested to use the 'from_state' parameter in the other places where
race conditions may happen.
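
For illustration, a minimal sketch of the compare-and-swap pattern
referred to above, assuming the caller loaded the image while it was
still 'queued' (the function name is illustrative):

    def begin_upload(image_repo, image):
        # save() with from_state acts as a compare-and-swap: the update
        # is rejected if another request already moved the image out of
        # the expected state (e.g. deleted or deactivated it concurrently).
        image.status = 'saving'
        image_repo.save(image, from_state='queued')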

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1557495

Title:
  Possible race conditions when changing image status in v2

Status in Glance:
  New

Bug description:
  Currently the Glance architecture (domain model) is affected by possible
  race conditions during image status transitions. To eliminate this, a
  parameter called 'from_state' was introduced in the 'save' method of
  ImageRepo. Unfortunately it only checks the transition from
  'saving' to 'active':
  https://github.com/openstack/glance/blob/master/glance/api/v2/image_data.py#L117

  Other cases are still not fixed, which means that an admin can
  reactivate a deleted image and it will end up with status 'active'. Also,
  Glance rewrites the status even if it didn't change. To fix this, it is
  suggested to use the 'from_state' parameter in the other places where
  race conditions may happen.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1557495/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1549869] [NEW] Glance should return 204 when user downloads queued image file

2016-02-25 Thread Mike Fedosin
Public bug reported:

Previously (in Liberty), when a user tried to download the file while the
image was in 'queued' status, Glance returned 204. In Mitaka this behavior
was changed and now Glance returns 403. This is contrary to the Glance
image api v2 http://developer.openstack.org/api-ref-image-v2.html We
have to bring the old behavior back.

Previously: http://paste.openstack.org/show/487782/

Now:  http://paste.openstack.org/show/488210/

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1549869

Title:
  Glance should return 204 when user downloads queued image file

Status in Glance:
  New

Bug description:
  Previously (in Liberty), when a user tried to download the file while
  the image was in 'queued' status, Glance returned 204. In Mitaka this
  behavior was changed and now Glance returns 403. This is contrary to the
  Glance image api v2 http://developer.openstack.org/api-ref-image-v2.html
  We have to bring the old behavior back.

  Previously: http://paste.openstack.org/show/487782/

  Now:  http://paste.openstack.org/show/488210/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1549869/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1504725] Re: rabbitmq-server restart twice, log is crazy increasing until service restart

2016-02-01 Thread Mike Merinov
** Also affects: nova
   Importance: Undecided
   Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1504725

Title:
  rabbitmq-server restart twice, log is crazy increasing until service
  restart

Status in neutron:
  New
Status in OpenStack Compute (nova):
  New
Status in oslo.messaging:
  Confirmed

Bug description:
  After I restart the rabbitmq-server for the second time, the service logs
(such as nova, neutron and so on) grow rapidly with errors such as "TypeError:
'NoneType' object has no attribute '__getitem__'".
  It seems that the channel is set to None.

  trace log:

  2015-10-10 15:20:59.413 29515 TRACE root Traceback (most recent call last):
  2015-10-10 15:20:59.413 29515 TRACE root   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 95, in 
inner_func
  2015-10-10 15:20:59.413 29515 TRACE root return infunc(*args, **kwargs)
  2015-10-10 15:20:59.413 29515 TRACE root   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_executors/impl_eventlet.py", 
line 96, in _executor_thread
  2015-10-10 15:20:59.413 29515 TRACE root incoming = self.listener.poll()
  2015-10-10 15:20:59.413 29515 TRACE root   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 
122, in poll
  2015-10-10 15:20:59.413 29515 TRACE root self.conn.consume(limit=1, 
timeout=timeout)
  2015-10-10 15:20:59.413 29515 TRACE root   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 
1202, in consume
  2015-10-10 15:20:59.413 29515 TRACE root six.next(it)
  2015-10-10 15:20:59.413 29515 TRACE root   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 
1100, in iterconsume
  2015-10-10 15:20:59.413 29515 TRACE root error_callback=_error_callback)
  2015-10-10 15:20:59.413 29515 TRACE root   File 
"/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 
868, in ensure
  2015-10-10 15:20:59.413 29515 TRACE root ret, channel = autoretry_method()
  2015-10-10 15:20:59.413 29515 TRACE root   File 
"/usr/lib/python2.7/site-packages/kombu/connection.py", line 458, in _ensured
  2015-10-10 15:20:59.413 29515 TRACE root return fun(*args, **kwargs)
  2015-10-10 15:20:59.413 29515 TRACE root   File 
"/usr/lib/python2.7/site-packages/kombu/connection.py", line 545, in __call__
  2015-10-10 15:20:59.413 29515 TRACE root self.revive(create_channel())
  2015-10-10 15:20:59.413 29515 TRACE root   File 
"/usr/lib/python2.7/site-packages/kombu/connection.py", line 251, in channel
  2015-10-10 15:20:59.413 29515 TRACE root chan = 
self.transport.create_channel(self.connection)
  2015-10-10 15:20:59.413 29515 TRACE root   File 
"/usr/lib/python2.7/site-packages/kombu/transport/pyamqp.py", line 91, in 
create_channel
  2015-10-10 15:20:59.413 29515 TRACE root return connection.channel()
  2015-10-10 15:20:59.413 29515 TRACE root   File 
"/usr/lib/python2.7/site-packages/amqp/connection.py", line 289, in channel
  2015-10-10 15:20:59.413 29515 TRACE root return self.channels[channel_id]
  2015-10-10 15:20:59.413 29515 TRACE root TypeError: 'NoneType' object has no 
attribute '__getitem__'
  2015-10-10 15:20:59.413 29515 TRACE root

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1504725/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1539698] [NEW] Kernel and ramdisk ids cannot have 'None' value in Glance

2016-01-29 Thread Mike Fedosin
Public bug reported:

Currently if user wants to create an instance using a Glance snapshot that has 
no value for ramdisk_id or kernel_id, then Nova copies the image metadata into 
instance system metadata and prefixes the keys with 'image_'. 
Due to [1] the None value of ramdisk_id and kernel_id get written as the string 
'None' in system metadata.

Unfortunately these values are not accepted by the glance image schema in v2
api [2].  They can be None, but not the string 'None'.

This issue doesn't allow us to fully adopt glance v2 api in Nova.

Paste from  ~smatzek http://paste.openstack.org/show/485397/

[1] https://github.com/openstack/nova/blob/master/nova/utils.py#L1245
[2] https://github.com/openstack/glance/blob/master/etc/schema-image.json
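
A hedged sketch of the fix idea behind [1]: keep None as a real null when
flattening image properties into instance system metadata, instead of
stringifying everything (the function and variable names below are
illustrative, not the actual Nova code):

    def image_meta_to_system_meta(image_properties):
        # str(None) == 'None' is what later violates the glance v2
        # schema, so preserve missing values as real nulls.
        system_meta = {}
        for key, value in image_properties.items():
            system_meta['image_%s' % key] = (
                None if value is None else str(value))
        return system_meta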

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1539698

Title:
  Kernel and ramdisk ids cannot have 'None' value in Glance

Status in OpenStack Compute (nova):
  New

Bug description:
  Currently if user wants to create an instance using a Glance snapshot that 
has no value for ramdisk_id or kernel_id, then Nova copies the image metadata 
into instance system metadata and prefixes the keys with 'image_'. 
  Due to [1] the None value of ramdisk_id and kernel_id get written as the 
string 'None' in system metadata.

  Unfortunately these values are not accepted by the glance image schema in
  v2 api [2].  They can be None, but not the string 'None'.

  This issue doesn't allow us to fully adopt glance v2 api in Nova.

  Paste from  ~smatzek http://paste.openstack.org/show/485397/

  [1] https://github.com/openstack/nova/blob/master/nova/utils.py#L1245
  [2] https://github.com/openstack/glance/blob/master/etc/schema-image.json

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1539698/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1533150] [NEW] Downloading empty file with enabled cache management leads to 500 error

2016-01-12 Thread Mike Fedosin
Public bug reported:

When I tried to download an empty image file from glance with cache
management enabled, I got a 500 error:

mfedosin@wdev:~$ glance --debug image-download 
0af7b2e8-8e31-427b-a99f-9117f45418ef --file empty_file
curl -g -i -X GET -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' -H 
'User-Agent: python-glanceclient' -H 'Connection: keep-alive' -H 'X-Auth-Token: 
{SHA1}c91066a8c438769ed454eebd759b4f8b1e488cb6' -H 'Content-Type: 
application/octet-stream' 
http://10.0.2.15:9292/v2/images/0af7b2e8-8e31-427b-a99f-9117f45418ef/file
Request returned failure status 500.
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/glanceclient/shell.py", line 
605, in main
args.func(client, args)
  File "/usr/local/lib/python2.7/dist-packages/glanceclient/v2/shell.py", line 
277, in do_image_download
body = gc.images.data(args.id)
  File "/usr/local/lib/python2.7/dist-packages/glanceclient/v2/images.py", line 
194, in data
resp, body = self.http_client.get(url)
  File "/usr/local/lib/python2.7/dist-packages/glanceclient/common/http.py", 
line 284, in get
return self._request('GET', url, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/glanceclient/common/http.py", 
line 276, in _request
resp, body_iter = self._handle_response(resp)
  File "/usr/local/lib/python2.7/dist-packages/glanceclient/common/http.py", 
line 93, in _handle_response
raise exc.from_response(resp, resp.content)
HTTPInternalServerError: HTTPInternalServerError (HTTP 500)
HTTPInternalServerError (HTTP 500)

Without cache management everything works fine.

Steps to reproduce on devstack:

1. Set flavor to 'keystone+cachemanagement' in glance-api.conf (flavor = 
keystone+cachemanagement)
2. Restart glance-api server
3. Create an image with empty file (file size is 0)
4. Try to download the image file from glance.

Expected result: new empty file will be created in local folder.

Actual result: HTTPInternalServerError (HTTP 500)

Logs from glance-api: http://paste.openstack.org/show/483545/

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1533150

Title:
  Downloading empty file with enabled cache management leads to 500
  error

Status in Glance:
  New

Bug description:
  When I tried to download an empty image file from glance with cache
  management enabled, I got a 500 error:

  mfedosin@wdev:~$ glance --debug image-download 
0af7b2e8-8e31-427b-a99f-9117f45418ef --file empty_file
  curl -g -i -X GET -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' -H 
'User-Agent: python-glanceclient' -H 'Connection: keep-alive' -H 'X-Auth-Token: 
{SHA1}c91066a8c438769ed454eebd759b4f8b1e488cb6' -H 'Content-Type: 
application/octet-stream' 
http://10.0.2.15:9292/v2/images/0af7b2e8-8e31-427b-a99f-9117f45418ef/file
  Request returned failure status 500.
  Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/glanceclient/shell.py", line 
605, in main
  args.func(client, args)
File "/usr/local/lib/python2.7/dist-packages/glanceclient/v2/shell.py", 
line 277, in do_image_download
  body = gc.images.data(args.id)
File "/usr/local/lib/python2.7/dist-packages/glanceclient/v2/images.py", 
line 194, in data
  resp, body = self.http_client.get(url)
File "/usr/local/lib/python2.7/dist-packages/glanceclient/common/http.py", 
line 284, in get
  return self._request('GET', url, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/glanceclient/common/http.py", 
line 276, in _request
  resp, body_iter = self._handle_response(resp)
File "/usr/local/lib/python2.7/dist-packages/glanceclient/common/http.py", 
line 93, in _handle_response
  raise exc.from_response(resp, resp.content)
  HTTPInternalServerError: HTTPInternalServerError (HTTP 500)
  HTTPInternalServerError (HTTP 500)

  Without cache management everything works fine.

  Steps to reproduce on devstack:

  1. Set flavor to 'keystone+cachemanagement' in glance-api.conf (flavor = 
keystone+cachemanagement)
  2. Restart glance-api server
  3. Create an image with empty file (file size is 0)
  4. Try to download the image file from glance.

  Expected result: new empty file will be created in local folder.

  Actual result: HTTPInternalServerError (HTTP 500)

  Logs from glance-api: http://paste.openstack.org/show/483545/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1533150/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1533270] [NEW] Adding remote image in v2 when cache is enabled results 500 error

2016-01-12 Thread Mike Fedosin
Public bug reported:

To reproduce the issue:

1) Add an image without specifying the size
2) Enable caching
3) Get image data. This will succeed because the Content-Length is pulled from 
the remote store (i.e. swift). At this point, the image will be properly cached.
4) Get image data again with v2 api. This will fail with 500 error 
http://paste.openstack.org/show/483545/

It happens because the cache middleware couldn't assign a value to
image_meta['size']: it expects a dictionary (as it was in the v1
api), but in the v2 api it's an ImageTarget object.

** Affects: glance
 Importance: Undecided
 Assignee: Darja Shakhray (dshakhray)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1533270

Title:
  Adding remote image in v2 when cache is enabled results 500 error

Status in Glance:
  New

Bug description:
  To reproduce the issue:

  1) Add an image without specifying the size
  2) Enable caching
  3) Get image data. This will succeed because the Content-Length is pulled 
from the remote store (i.e. swift). At this point, the image will be properly 
cached.
  4) Get image data again with v2 api. This will fail with 500 error 
http://paste.openstack.org/show/483545/

  It happens because the cache middleware couldn't assign a value to
  image_meta['size']: it expects a dictionary (as it was in the v1
  api), but in the v2 api it's an ImageTarget object.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1533270/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1529441] [NEW] networking guide source is undocumented

2015-12-26 Thread Mike Spreitzer
Public bug reported:

http://docs.openstack.org/contributor-guide/docs-builds.html does not
document the source of the networking guide (http://docs.openstack.org
/networking-guide/).

In fact, the contributor guide does not even document its own source ---
which is why I am opening this bug in Neutron.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1529441

Title:
  networking guide source is undocumented

Status in neutron:
  New

Bug description:
  http://docs.openstack.org/contributor-guide/docs-builds.html does not
  document the source of the networking guide (http://docs.openstack.org
  /networking-guide/).

  In fact, the contributor guide does not even document its own source
  --- which is why I am opening this bug in Neutron.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1529441/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1529444] [NEW] networking guide does not explain what OVS version 2.1 is needed for

2015-12-26 Thread Mike Spreitzer
Public bug reported:

http://docs.openstack.org/networking-guide/scenario_legacy_ovs.html
includes a warning that

"Proper operation of this scenario requires Open vSwitch 2.1 or newer"

--- but does not explain what part of the scenario requires that.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1529444

Title:
  networking guide does not explain what OVS version 2.1 is needed for

Status in neutron:
  New

Bug description:
  http://docs.openstack.org/networking-guide/scenario_legacy_ovs.html
  includes a warning that

  "Proper operation of this scenario requires Open vSwitch 2.1 or newer"

  --- but does not explain what part of the scenario requires that.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1529444/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1526804] [NEW] Model sync is broken for SQLite because of BigInteger type mismatch

2015-12-16 Thread Mike Fedosin
Public bug reported:

Here is the output for
glance.tests.unit.test_migrations.ModelsMigrationsSyncSQLite.test_models_sync:

AssertionError: Models and migration scripts aren't in sync:
[ [ ( 'modify_type',
  None,
  'artifact_blobs',
  'size',
  { 'existing_nullable': False,
'existing_server_default': False},
  INTEGER(),
  BigInteger())],
  [ ( 'modify_type',
  None,
  'artifacts',
  'type_version_prefix',
  { 'existing_nullable': False,
'existing_server_default': False},
  INTEGER(),
  BigInteger())],
  [ ( 'modify_type',
  None,
  'artifacts',
  'version_prefix',
  { 'existing_nullable': False,
'existing_server_default': False},
  INTEGER(),
  BigInteger())],
  [ ( 'modify_type',
  None,
  'images',
  'size',
  { 'existing_nullable': True,
'existing_server_default': False},
  INTEGER(),
  BigInteger())],
  [ ( 'modify_type',
  None,
  'images',
  'virtual_size',
  { 'existing_nullable': True,
'existing_server_default': False},
  INTEGER(),
  BigInteger())]]
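
One common way to reconcile this kind of mismatch (a sketch of the
approach, not necessarily the exact patch) is to give the BigInteger
columns an SQLite variant in the models, so the model matches what the
migrations actually create on SQLite:

    # Sketch: BigInteger on real databases, plain Integer only on SQLite.
    from sqlalchemy import BigInteger, Column, Integer

    size = Column(BigInteger().with_variant(Integer, 'sqlite'),
                  nullable=True)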

** Affects: glance
 Importance: Critical
 Assignee: Mike Fedosin (mfedosin)
 Status: In Progress

** Changed in: glance
 Assignee: (unassigned) => Mike Fedosin (mfedosin)

** Changed in: glance
   Importance: Undecided => Critical

** Changed in: glance
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1526804

Title:
  Model sync is broken for SQLite because of BigInteger type mismatch

Status in Glance:
  In Progress

Bug description:
  Here is the output for
  glance.tests.unit.test_migrations.ModelsMigrationsSyncSQLite.test_models_sync:

  AssertionError: Models and migration scripts aren't in sync:
  [ [ ( 'modify_type',
None,
'artifact_blobs',
'size',
{ 'existing_nullable': False,
  'existing_server_default': False},
INTEGER(),
BigInteger())],
[ ( 'modify_type',
None,
'artifacts',
'type_version_prefix',
{ 'existing_nullable': False,
  'existing_server_default': False},
INTEGER(),
BigInteger())],
[ ( 'modify_type',
None,
'artifacts',
'version_prefix',
{ 'existing_nullable': False,
  'existing_server_default': False},
INTEGER(),
BigInteger())],
[ ( 'modify_type',
None,
'images',
'size',
{ 'existing_nullable': True,
  'existing_server_default': False},
INTEGER(),
BigInteger())],
[ ( 'modify_type',
None,
'images',
'virtual_size',
{ 'existing_nullable': True,
  'existing_server_default': False},
INTEGER(),
BigInteger())]]

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1526804/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489126] Re: Filtering by tags is broken in v3

2015-11-05 Thread Mike Fedosin
** Changed in: glance
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1489126

Title:
  Filtering by tags is broken in v3

Status in Glance:
  Opinion

Bug description:
  When I want to filter the list of artifacts by tag I get a 500 error:

  http://localhost:9292/v3/artifacts/myartifact/v2.0/drafts?tag=hyhyhy

  <html>
   <head>
    <title>500 Internal Server Error</title>
   </head>
   <body>
    <h1>500 Internal Server Error</h1>
    The server has either erred or is incapable of performing the requested
  operation.<br /><br />


   </body>
  </html>

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1489126/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513230] [NEW] Users have cross-tenant visibility on images

2015-11-04 Thread Mike
Public bug reported:

Using Kilo 2015.1.2 and Glance Client 0.17.0:

Using two users (demo in the demo tenant, alt_demo in the alt_demo
tenant, neither has the admin role), I am able to create an image with
is_public set to False as the demo user/tenant, and then show data/use
that image to create an instance as the alt_demo:

> env | grep OS_
OS_PASSWORD=secret
OS_AUTH_URL=http://localhost:5000/v2.0
OS_USERNAME=demo
OS_TENANT_NAME=demo

> glance image-create --container-format bare --disk-format raw --is-public 
> false --name demo_image
+--+--+
| Property | Value|
+--+--+
| checksum | None |
| container_format | bare |
| created_at   | 2015-11-04T21:33:14.00   |
| deleted  | False|
| deleted_at   | None |
| disk_format  | raw  |
| id   | 51215efe-3533-4128-a36f-a44e507df5d7 |
| is_public| False|
| min_disk | 0|
| min_ram  | 0|
| name | demo_image   |
| owner| None |
| protected| False|
| size | 0|
| status   | queued   |
| updated_at   | 2015-11-04T21:33:14.00   |
| virtual_size | None |
+--+--+

The image then does not appear in image-list:
> glance image-list
+--++-+--+---++
| ID   | Name   | Disk Format | 
Container Format | Size  | Status |
+--++-+--+---++
| 7eb66946-70c1-4d35-93d8-93a315710be9 | tempest_alt_image  | raw | 
bare | 947466240 | active |
| 50eccbfd-baf3-4f0e-a10d-c20292b01d9d | tempest_main_image | raw | 
bare | 947466240 | active |
+--++-+--+---++

With --all-tenants, it appears
> glance image-list --all-tenants
+--++-+--+---++
| ID   | Name   | Disk Format | 
Container Format | Size  | Status |
+--++-+--+---++
| 51215efe-3533-4128-a36f-a44e507df5d7 | demo_image | raw | 
bare |   | queued |
| 7eb66946-70c1-4d35-93d8-93a315710be9 | tempest_alt_image  | raw | 
bare | 947466240 | active |
| 50eccbfd-baf3-4f0e-a10d-c20292b01d9d | tempest_main_image | raw | 
bare | 947466240 | active |
| 8f1430dc-8fc0-467b-b006-acf6b481714e | test_snapshot  | raw | 
bare |   | active |
+--++-+--+---++

With image-show and the name, error message:
> glance image-show demo_image
No image with a name or ID of 'demo_image' exists.

With  image-show and the uuid, data:
> glance image-show 51215efe-3533-4128-a36f-a44e507df5d7
+--+--+
| Property | Value|
+--+--+
| container_format | bare |
| created_at   | 2015-11-04T21:33:14.00   |
| deleted  | False|
| disk_format  | raw  |
| id   | 51215efe-3533-4128-a36f-a44e507df5d7 |
| is_public| False|
| min_disk | 0|
| min_ram  | 0|
| name | demo_image   |
| protected| False|
| size | 0|
| status   | queued   |
| updated_at   | 2015-11-04T21:33:14.00   |
+--+--+

Now swap to alt_demo:
env | grep OS_
OS_PASSWORD=secret
OS_AUTH_URL=http://localhost:5000/v2.0
OS_USERNAME=alt_demo
OS_TENANT_NAME=alt_demo

Image list with --all-tenants shows the 

[Yahoo-eng-team] [Bug 1424549] Re: enlisting of nodes: seed_random fails due to self signed certificate

2015-10-19 Thread Mike Pontillo
Actually, I'll go ahead and mark this "Triaged"; it *is* a real bug, it
just isn't as critical as we assumed.

To fix this bug, we should configure cloud-init to NOT call pollinate
during enlistment (to avoid this spurious error).

As a follow-on fix, it might be a good idea for cloud-init to fall back
to 'insecure' mode (or simply use the public CA roots in /etc/ssl/certs
rather than a pinned chain) and log this as a warning, if the pinned
certificate could not be validated.

** Also affects: cloud-init
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1424549

Title:
  enlisting of nodes: seed_random fails due to self signed certificate

Status in cloud-init:
  New
Status in MAAS:
  Triaged

Bug description:
  Using MAAS 1.7.1 on trusty, the following error message appears in the
  MAAS-provided ephemeral image when the pollinate step is executed:

  curl: SSL certificate problem: self signed certificate in certificate
  chain.

  As a result, the random number generator is not initialized correctly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1424549/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505710] [NEW] Wrong logging setup in replicator

2015-10-13 Thread Mike Fedosin
Public bug reported:

The logging.setup call accepts two parameters: the first one is the current
CONF, and the second is the product name.
Currently the replicator does not call it that way.
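
For reference, a minimal sketch of the expected call, matching the
two-parameter signature described above (the oslo_log import path and
'glance' as the product name are assumptions here):

    from oslo_config import cfg
    from oslo_log import log as logging

    CONF = cfg.CONF

    # First argument: the current CONF; second argument: the product name.
    logging.register_options(CONF)
    logging.setup(CONF, 'glance')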

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1505710

Title:
  Wrong logging setup in replicator

Status in Glance:
  New

Bug description:
  The logging.setup call accepts two parameters: the first one is the
  current CONF, and the second is the product name.
  Currently the replicator does not call it that way.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1505710/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505218] [NEW] Image schema doesn't contain 'deactivated' status

2015-10-12 Thread Mike Fedosin
Public bug reported:

Currently the glance image schema doesn't contain 'deactivated' in the list
of statuses, which means that the client cannot validate it.

1. mfedosin@wdev:~$ glance image-list
+--+-+
| ID   | Name|
+--+-+
| 5cd380ce-a570-4270-b4d1-e328e6f49cb6 | cirros-0.3.4-x86_64-uec |
| 9f430c9d-9649-4bc3-9ec9-1013e9c9da13 | cirros-0.3.4-x86_64-uec-kernel  |
| e36a70f7-db13-4c3a-91a6-1308b74eebde | cirros-0.3.4-x86_64-uec-ramdisk |
+--+-+

2. mfedosin@wdev:~$ curl -H "X-Auth-Token:
2c2e3bc5f0d541418a98deeabb27ac5e" -X POST
http://127.0.0.1:9292/v2/images/5cd380ce-a570-4270-b4d1-e328e6f49cb6/actions/deactivate

3. mfedosin@wdev:~$ glance image-show
5cd380ce-a570-4270-b4d1-e328e6f49cb6

Expected result:
There will be output with the image info

Actual result:
u'deactivated' is not one of [u'queued', u'saving', u'active', u'killed', 
u'deleted', u'pending_delete']

Failed validating u'enum' in schema[u'properties'][u'status']:
{u'description': u'Status of the image (READ-ONLY)',
 u'enum': [u'queued',
   u'saving',
   u'active',
   u'killed',
   u'deleted',
   u'pending_delete'],
 u'type': u'string'}

On instance[u'status']:
u'deactivated'

Related to bug: https://bugs.launchpad.net/glance/+bug/1505134
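
A hedged sketch of the schema change this implies; the exact spot where
glance defines the enum is an assumption, the point is simply that
'deactivated' has to become a valid value:

    # Sketch: the v2 image schema's status enum with the missing state.
    image_status_schema = {
        'description': 'Status of the image (READ-ONLY)',
        'type': 'string',
        'enum': ['queued', 'saving', 'active', 'killed',
                 'deleted', 'pending_delete', 'deactivated'],
    }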

** Affects: glance
 Importance: Undecided
 Assignee: Mike Fedosin (mfedosin)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1505218

Title:
  Image schema doesn't contain 'deactivated' status

Status in Glance:
  New

Bug description:
  Currently the glance image schema doesn't contain 'deactivated' in the
  list of statuses, which means that the client cannot validate it.

  1. mfedosin@wdev:~$ glance image-list
  +--+-+
  | ID   | Name|
  +--+-+
  | 5cd380ce-a570-4270-b4d1-e328e6f49cb6 | cirros-0.3.4-x86_64-uec |
  | 9f430c9d-9649-4bc3-9ec9-1013e9c9da13 | cirros-0.3.4-x86_64-uec-kernel  |
  | e36a70f7-db13-4c3a-91a6-1308b74eebde | cirros-0.3.4-x86_64-uec-ramdisk |
  +--+-+

  2. mfedosin@wdev:~$ curl -H "X-Auth-Token:
  2c2e3bc5f0d541418a98deeabb27ac5e" -X POST
  
http://127.0.0.1:9292/v2/images/5cd380ce-a570-4270-b4d1-e328e6f49cb6/actions/deactivate

  3. mfedosin@wdev:~$ glance image-show
  5cd380ce-a570-4270-b4d1-e328e6f49cb6

  Expected result:
  There will be output with the image info

  Actual result:
  u'deactivated' is not one of [u'queued', u'saving', u'active', u'killed', 
u'deleted', u'pending_delete']

  Failed validating u'enum' in schema[u'properties'][u'status']:
  {u'description': u'Status of the image (READ-ONLY)',
   u'enum': [u'queued',
 u'saving',
 u'active',
 u'killed',
 u'deleted',
 u'pending_delete'],
   u'type': u'string'}

  On instance[u'status']:
  u'deactivated'

  Related to bug: https://bugs.launchpad.net/glance/+bug/1505134

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1505218/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1503501] Re: oslo.db no longer requires testresources and testscenarios packages

2015-10-12 Thread Mike Fedosin
** Also affects: glance
   Importance: Undecided
   Status: New

** Changed in: glance
 Assignee: (unassigned) => Mike Fedosin (mfedosin)

** Changed in: glance
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1503501

Title:
  oslo.db no longer requires testresources and testscenarios packages

Status in Cinder:
  Fix Committed
Status in Glance:
  In Progress
Status in heat:
  Fix Committed
Status in Ironic:
  In Progress
Status in neutron:
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Committed

Bug description:
  As of https://review.openstack.org/#/c/217347/ oslo.db no longer has
  testresources or testscenarios in its requirements, So next release of
  oslo.db will break several projects. These project that use fixtures
  from oslo.db should add these to their requirements if they need it.

  Example from Nova:
  ${PYTHON:-python} -m subunit.run discover -t ./ ${OS_TEST_PATH:-./nova/tests} 
--list 
  ---Non-zero exit code (2) from test listing.
  error: testr failed (3) 
  import errors ---
  Failed to import test module: nova.tests.unit.db.test_db_api
  Traceback (most recent call last):
File 
"/home/travis/build/dims/nova/.tox/py27/lib/python2.7/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
  module = self._get_module_from_name(name)
File 
"/home/travis/build/dims/nova/.tox/py27/lib/python2.7/site-packages/unittest2/loader.py",
 line 395, in _get_module_from_name
  __import__(name)
File "nova/tests/unit/db/test_db_api.py", line 31, in 
  from oslo_db.sqlalchemy import test_base
File 
"/home/travis/build/dims/nova/.tox/py27/src/oslo.db/oslo_db/sqlalchemy/test_base.py",
 line 17, in 
  import testresources
  ImportError: No module named testresources

  https://travis-ci.org/dims/nova/jobs/83992423

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1503501/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1491049] [NEW] Filtering by invalid version string causes 500 error

2015-09-01 Thread Mike Fedosin
Public bug reported:

When I want to filter a list of artifacts by version and I provide an
invalid semver string (for example, 'version='), the server
returns a 500 error.

request:
GET /v3/artifacts/some_type/2.0?version=

Stacktrace: http://paste.openstack.org/show/438140/
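
A sketch of the kind of guard that would turn this into a 400 instead of
a 500, assuming the artifacts code parses versions with the
semantic_version library (the helper name is illustrative):

    import semantic_version
    from webob import exc


    def validate_version_filter(value):
        # Reject malformed version filters early with a 400, not a 500.
        try:
            return semantic_version.Version.coerce(value)
        except ValueError:
            raise exc.HTTPBadRequest(
                explanation="Invalid version filter value: %r" % value)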

** Affects: glance
 Importance: Undecided
 Assignee: Mike Fedosin (mfedosin)
 Status: New


** Tags: artifacts

** Changed in: glance
 Assignee: (unassigned) => Mike Fedosin (mfedosin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1491049

Title:
  Filtering by invalid version string causes 500 error

Status in Glance:
  New

Bug description:
  When I want to filter a list of artifacts by version and I provide an
  invalid semver string (for example, 'version='), the server
  returns a 500 error.

  request:
  GET /v3/artifacts/some_type/2.0?version=

  Stacktrace: http://paste.openstack.org/show/438140/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1491049/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1489126] [NEW] Filtering by tags is broken in v3

2015-08-26 Thread Mike Fedosin
Public bug reported:

When I want to filter the list of artifacts by tag I get a 500 error:

http://localhost:9292/v3/artifacts/myartifact/v2.0/drafts?tag=hyhyhy

<html>
 <head>
  <title>500 Internal Server Error</title>
 </head>
 <body>
  <h1>500 Internal Server Error</h1>
  The server has either erred or is incapable of performing the requested
operation.<br /><br />


 </body>
</html>

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: artifacts

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1489126

Title:
  Filtering by tags is broken in v3

Status in Glance:
  New

Bug description:
  When I want to filter the list of artifacts by tag I get a 500 error:

  http://localhost:9292/v3/artifacts/myartifact/v2.0/drafts?tag=hyhyhy

  <html>
   <head>
    <title>500 Internal Server Error</title>
   </head>
   <body>
    <h1>500 Internal Server Error</h1>
    The server has either erred or is incapable of performing the requested
  operation.<br /><br />


   </body>
  </html>

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1489126/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487742] [NEW] Nova passing bad 'size' property value 'None' to Glance for image metadata

2015-08-22 Thread Mike Dorman
Public bug reported:

Glance does not accept 'None' as a valid value for the 'size' property
[1].  However, in certain situations Nova is sending a 'size' property
with a 'None' value.  This results in a 400 response from Glance to
Nova, and the following backtrace in Glance:

2015-08-21 14:54:17.916 10446 TRACE glance.api.v1.images Traceback (most recent 
call last):
2015-08-21 14:54:17.916 10446 TRACE glance.api.v1.images   File 
/usr/lib/python2.7/site-packages/glance/api/v1/images.py, line 1144, in 
_deserialize
2015-08-21 14:54:17.916 10446 TRACE glance.api.v1.images 
result['image_meta'] = utils.get_image_meta_from_headers(request)
2015-08-21 14:54:17.916 10446 TRACE glance.api.v1.images   File 
/usr/lib/python2.7/site-packages/glance/common/utils.py, line 322, in 
get_image_meta_from_headers
2015-08-21 14:54:17.916 10446 TRACE glance.api.v1.images extra_msg=extra)
2015-08-21 14:54:17.916 10446 TRACE glance.api.v1.images InvalidParameterValue: 
Invalid value 'None' for parameter 'size': Cannot convert image size 'None' to 
an integer.

I believe what's happening is Nova tries to enforce certain required
properties when creating or updating an image, and in the process
reconciling those with the properties that Glance already has (through
the _translate_from_glance() [2] and _extract_attributes() [3] methods
in nova/image/glance.py)

Nova is enforcing the 'size' property being in place [4], but if Glance
does not already have a 'size' property on the image (like if the image
has been queued but not uploaded yet), the value gets set to 'None' on
the Nova side [5].  This gets sent to Glance in subsequent calls, and it
fails because 'None' cannot be converted to an integer (see backtrace
above.)


Steps to Reproduce:

Nova and Glance 2015.1.1

1.  Queue a new image in Glance
2.  Attempt to set a metadata attribute on that image (this will fail with 400 
error from Glance)
3.  Actually upload the image data sometime later


Potential Solution:

I've patched this locally to simply check that the 'size' property gets
set to 0 instead of 'None' on the Nova side.  I am not familiar enough
with all the internals here to understand if that's the right
solution, but I can confirm it's working for us and this bug is no
longer triggered.


[1] 
https://github.com/openstack/glance/blob/2015.1.1/glance/common/utils.py#L305-L319
[2] https://github.com/openstack/nova/blob/2015.1.1/nova/image/glance.py#L482
[3] https://github.com/openstack/nova/blob/2015.1.1/nova/image/glance.py#L533
[4] https://github.com/openstack/nova/blob/2015.1.1/nova/image/glance.py#L539
[5] https://github.com/openstack/nova/blob/2015.1.1/nova/image/glance.py#L571
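
A hedged sketch of the local workaround described under "Potential
Solution" above (the helper is illustrative; the real change would live
around _extract_attributes() in nova/image/glance.py):

    def normalize_size(value):
        # Never let a missing size become the None that glance rejects;
        # fall back to 0 until the image data is actually uploaded.
        return 0 if value is None else int(value)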

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1487742

Title:
  Nova passing bad 'size' property value 'None' to Glance for image
  metadata

Status in OpenStack Compute (nova):
  New

Bug description:
  Glance does not accept 'None' as a valid value for the 'size' property
  [1].  However, in certain situations Nova is sending a 'size' property
  with a 'None' value.  This results in a 400 response from Glance to
  Nova, and the following backtrace in Glance:

  2015-08-21 14:54:17.916 10446 TRACE glance.api.v1.images Traceback (most 
recent call last):
  2015-08-21 14:54:17.916 10446 TRACE glance.api.v1.images   File 
/usr/lib/python2.7/site-packages/glance/api/v1/images.py, line 1144, in 
_deserialize
  2015-08-21 14:54:17.916 10446 TRACE glance.api.v1.images 
result['image_meta'] = utils.get_image_meta_from_headers(request)
  2015-08-21 14:54:17.916 10446 TRACE glance.api.v1.images   File 
/usr/lib/python2.7/site-packages/glance/common/utils.py, line 322, in 
get_image_meta_from_headers
  2015-08-21 14:54:17.916 10446 TRACE glance.api.v1.images extra_msg=extra)
  2015-08-21 14:54:17.916 10446 TRACE glance.api.v1.images 
InvalidParameterValue: Invalid value 'None' for parameter 'size': Cannot 
convert image size 'None' to an integer.

  I believe what's happening is Nova tries to enforce certain required
  properties when creating or updating an image, and in the process
  reconciling those with the properties that Glance already has (through
  the _translate_from_glance() [2] and _extract_attributes() [3] methods
  in nova/image/glance.py)

  Nova is enforcing the 'size' property being in place [4], but if
  Glance does not already have a 'size' property on the image (like if
  the image has been queued but not uploaded yet), the value gets set to
  'None' on the Nova side [5].  This gets sent to Glance in subsequent
  calls, and it fails because 'None' cannot be converted to an integer
  (see backtrace above.)

  
  Steps to Reproduce:

  Nova and Glance 2015.1.1

  1.  Queue a new image in Glance
  2.  Attempt to set a metadata attribute on that image (this will fail with 

[Yahoo-eng-team] [Bug 1487425] [NEW] Ranged filtering by version is not supported

2015-08-21 Thread Mike Fedosin
Public bug reported:

Currently filtering version by range is not supported, so requests like
?version=gt:5.0&version=lt:8.0&version=ne:6.0 don't work as expected -
only the last parameter is used in that case.
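
A sketch of one possible direction: read every repeated version query
parameter instead of only the last one, and combine them into a range.
webob request objects expose repeated parameters via params.getall(); the
parsing below is illustrative, not the actual glance code:

    def extract_version_filters(req):
        # Collect every ?version= filter as (op, value) pairs,
        # e.g. 'gt:5.0' -> ('gt', '5.0'); a bare value means equality.
        filters = []
        for raw in req.params.getall('version'):
            op, _, value = raw.partition(':')
            if not value:
                op, value = 'eq', raw
            filters.append((op, value))
        return filters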

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: artifacts

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1487425

Title:
  Ranged filtering by version is not supported

Status in Glance:
  New

Bug description:
  Currently filtering version by range is not supported, so requests
  like ?version=gt:5.0&version=lt:8.0&version=ne:6.0 don't work as
  expected - only the last parameter is used in that case.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1487425/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1432963] Re: Volume of 'in-use' remain by a timeout during the attach

2015-08-16 Thread Mike Perez
** Changed in: cinder
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1432963

Title:
  Volume of 'in-use' remain by a timeout during the attach

Status in Cinder:
  Invalid
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  The volume remains 'in-use' due to a timeout during the 'attach'.
  If the problem occurs, the instance can't detach/attach the volume.
detach - volume_id not found
attach - libvirtError: Requested operation is not valid: target vdb 
already exists 
(other volume)

  These problems are caused by a DB mismatch, because c-vol does not 
roll it back when the RPC (CALL) times out.
  At first, c-vol takes time in the attaching process, and the attaching 
process fails in the c-api due to the timeout.
  But at this point, the attaching process has not yet failed in the c-vol.

  By this, the BDM is deleted and the volume is updated to in-use
  (if the attaching process succeeds in the c-vol).

  Repro 
  used master
cinder: commit d4b77484c5d41f207d54f40dcdd530fb8a1b1ea6
nova  : commit eaeecdaf4743463888c3ee24fb08128eac15dee7

  1. attach volume
 (in cinder/volume/manage.py def attach_volume)
  note: I reproduced this problem by inserting sleep in this method.
  2. RPC(CALL) timeout in the c-api(attach_volume process takes time beyond the 
rpc_response_timeout)

  About the volume and BDM 
  block_device_mapping
  
+-++-+---+--+-+--+-+
  | deleted_at  | id | device_name | delete_on_termination | volume_id  
  | connection_info | instance_uuid 
   | deleted |
  
+-++-+---+--+-+--+-+
  | NULL|  1 | /dev/vda| 1 | NULL   
  | NULL| 
4683d4fb-758c-459e-9def-b8d247a56954 |   0 |
  | 2015-03-17 06:12:36 |  2 | /dev/vdb| 0 | 
46d1bfbb-bdf2-472f-8bf6-2d2367b1edb1 | NULL| 
4683d4fb-758c-459e-9def-b8d247a56954 |   2 |
  
+-++-+---+--+-+--+-+
  volumes
  
++-+--++---+---+---++
  | deleted_at | deleted | id   | status | 
attach_status | terminated_at | provider_location   
  | provider_auth   
   |
  
++-+--++---+---+---++
  | NULL   |   0 | 46d1bfbb-bdf2-472f-8bf6-2d2367b1edb1 | in-use | 
attached  | NULL  | 192.168.58.172:3260,4 
iqn.2010-10.org.openstack:volume-46d1bfbb-bdf2-472f-8bf6-2d2367b1edb1 1 | CHAP 
2W7r5XQcZJ5BHVctM8YY NogCxmq4VswXWHWE |
  
++-+--++---+---+---++

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1432963/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474079] [NEW] Cross-site web socket connections fail on Origin and Host header mismatch

2015-07-13 Thread Mike Dorman
Public bug reported:

The Kilo web socket proxy implementation for Nova consoles added an
Origin header validation to ensure the Origin hostname matches the
hostname from the Host header.  This was a result of the following XSS
security bug:  https://bugs.launchpad.net/nova/+bug/1409142
(CVE-2015-0259)

In other words, this requires that the web UI being used (Horizon, or
whatever) having a URL hostname which is the same as the hostname by
which the console proxy is accessed.  This is a safe assumption for
Horizon.  However, we have a use case where our (custom) UI runs at a
different URL than does the console proxies, and thus we need to allow
cross-site web socket connections.  The patch for 1409142
(https://github.secureserver.net/cloudplatform/els-
nova/commit/fdb73a2d445971c6158a80692c6f74094fd4193a) breaks this
functionality for us.

Would like to have some way to enable controlled XSS web socket
connections to the console proxy services, maybe via a nova config
parameter providing a list of allowed origin hosts?
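
As a sketch of what such a knob might look like (hypothetical: the option name
console_allowed_origins and the check below are invented here, not an existing
nova setting), using oslo.config:

    from urllib.parse import urlparse

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.ListOpt('console_allowed_origins', default=[],
                    help='Extra Origin hostnames allowed to reach the '
                         'console proxy besides the Host header value.'),
    ])

    def origin_allowed(origin_header, host_header):
        # Accept same-host connections (current behaviour) plus any
        # explicitly whitelisted origin hostname.
        origin_host = urlparse(origin_header).hostname
        return (origin_host == host_header.split(':')[0]
                or origin_host in CONF.console_allowed_origins)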

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1474079

Title:
  Cross-site web socket connections fail on Origin and Host header
  mismatch

Status in OpenStack Compute (nova):
  New

Bug description:
  The Kilo web socket proxy implementation for Nova consoles added an
  Origin header validation to ensure the Origin hostname matches the
  hostname from the Host header.  This was a result of the following XSS
  security bug:  https://bugs.launchpad.net/nova/+bug/1409142
  (CVE-2015-0259)

  In other words, this requires that the web UI being used (Horizon, or
  whatever) having a URL hostname which is the same as the hostname by
  which the console proxy is accessed.  This is a safe assumption for
  Horizon.  However, we have a use case where our (custom) UI runs at a
  different URL than does the console proxies, and thus we need to allow
  cross-site web socket connections.  The patch for 1409142
  (https://github.secureserver.net/cloudplatform/els-
  nova/commit/fdb73a2d445971c6158a80692c6f74094fd4193a) breaks this
  functionality for us.

  Would like to have some way to enable controlled XSS web socket
  connections to the console proxy services, maybe via a nova config
  parameter providing a list of allowed origin hosts?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1474079/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1474069] [NEW] DeprecatedDecorators test does not setup fixtures correctly

2015-07-13 Thread Mike Bayer
Public bug reported:

This test appears to rely upon test suite setup done in a different test,
outside of the test_backend_sql.py suite entirely. Below is a run of
this specific test, but you get the same error if you run all of
test_backend_sql at once as well.

[mbayer@thinkpad keystone]$ tox   -v  -e py27 
keystone.tests.unit.test_backend_sql.DeprecatedDecorators.test_assignment_to_resource_api
using tox.ini: /home/mbayer/dev/jenkins_scripts/tmp/keystone/tox.ini
using tox-1.8.1 from /usr/lib/python2.7/site-packages/tox/__init__.pyc
py27 create: /home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27
  /home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox$ /usr/bin/python 
-mvirtualenv --setuptools --python /usr/bin/python2.7 py27 
/home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27/log/py27-0.log
py27 installdeps: 
-r/home/mbayer/dev/jenkins_scripts/tmp/keystone/requirements.txt, 
-r/home/mbayer/dev/jenkins_scripts/tmp/keystone/test-requirements.txt
  /home/mbayer/dev/jenkins_scripts/tmp/keystone$ 
/home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27/bin/pip install -U 
-r/home/mbayer/dev/jenkins_scripts/tmp/keystone/requirements.txt 
-r/home/mbayer/dev/jenkins_scripts/tmp/keystone/test-requirements.txt 
/home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27/log/py27-1.log
py27 develop-inst: /home/mbayer/dev/jenkins_scripts/tmp/keystone
  /home/mbayer/dev/jenkins_scripts/tmp/keystone$ 
/home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27/bin/pip install -U -e 
/home/mbayer/dev/jenkins_scripts/tmp/keystone 
/home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27/log/py27-2.log
py27 runtests: PYTHONHASHSEED='3819984772'
py27 runtests: commands[0] | bash tools/pretty_tox.sh 
keystone.tests.unit.test_backend_sql.DeprecatedDecorators.test_assignment_to_resource_api
  /home/mbayer/dev/jenkins_scripts/tmp/keystone$ /usr/bin/bash 
tools/pretty_tox.sh 
keystone.tests.unit.test_backend_sql.DeprecatedDecorators.test_assignment_to_resource_api
 
running testr
running=
OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_LOG_CAPTURE=${OS_LOG_CAPTURE:-1} \
${PYTHON:-python} -m subunit.run discover -t ./ 
${OS_TEST_PATH:-./keystone/tests/unit} --list 
running=
OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_LOG_CAPTURE=${OS_LOG_CAPTURE:-1} \
${PYTHON:-python} -m subunit.run discover -t ./ 
${OS_TEST_PATH:-./keystone/tests/unit}  --load-list /tmp/tmpclgNWA
{0} 
keystone.tests.unit.test_backend_sql.DeprecatedDecorators.test_assignment_to_resource_api
 [0.245028s] ... FAILED

Captured traceback:
~~~
Traceback (most recent call last):
  File keystone/tests/unit/test_backend_sql.py, line 995, in 
test_assignment_to_resource_api
self.config_fixture.config(fatal_deprecations=True)
  File 
/home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27/lib/python2.7/site-packages/oslo_config/fixture.py,
 line 65, in config
self.conf.set_override(k, v, group)
  File 
/home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27/lib/python2.7/site-packages/oslo_config/cfg.py,
 line 1823, in __inner
result = f(self, *args, **kwargs)
  File 
/home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27/lib/python2.7/site-packages/oslo_config/cfg.py,
 line 2100, in set_override
opt_info = self._get_opt_info(name, group)
  File 
/home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27/lib/python2.7/site-packages/oslo_config/cfg.py,
 line 2418, in _get_opt_info
raise NoSuchOptError(opt_name, group)
oslo_config.cfg.NoSuchOptError: no such option: fatal_deprecations


Captured pythonlogging:
~~~
Adding cache-proxy 'keystone.tests.unit.test_cache.CacheIsolatingProxy' to 
backend.
registered 'sha512_crypt' handler: class 
'passlib.handlers.sha2_crypt.sha512_crypt'


==
Failed 1 tests - output below:
==

keystone.tests.unit.test_backend_sql.DeprecatedDecorators.test_assignment_to_resource_api
-

Captured traceback:
~~~
Traceback (most recent call last):
  File keystone/tests/unit/test_backend_sql.py, line 995, in 
test_assignment_to_resource_api
self.config_fixture.config(fatal_deprecations=True)
  File 
/home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27/lib/python2.7/site-packages/oslo_config/fixture.py,
 line 65, in config
self.conf.set_override(k, v, group)
  File 
/home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27/lib/python2.7/site-packages/oslo_config/cfg.py,
 line 1823, in __inner
result = f(self, *args, **kwargs)
  File 
/home/mbayer/dev/jenkins_scripts/tmp/keystone/.tox/py27/lib/python2.7/site-packages/oslo_config/cfg.py,
 line 2100, in set_override
opt_info = self._get_opt_info(name, group)
  File 

[Yahoo-eng-team] [Bug 1473369] Re: new mock release broke a bunch of unit tests

2015-07-10 Thread Mike Fedosin
** Also affects: python-glance-store (Ubuntu)
   Importance: Undecided
   Status: New

** No longer affects: python-glance-store (Ubuntu)

** Also affects: glance-store
   Importance: Undecided
   Status: New

** Changed in: glance-store
 Assignee: (unassigned) => Mike Fedosin (mfedosin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1473369

Title:
  new mock release broke a bunch of unit tests

Status in Glance:
  In Progress
Status in glance_store:
  New
Status in murano:
  Fix Committed
Status in murano kilo series:
  Fix Committed
Status in neutron:
  Fix Committed
Status in python-muranoclient:
  In Progress
Status in python-muranoclient kilo series:
  New
Status in OpenStack Object Storage (swift):
  In Progress

Bug description:
  http://lists.openstack.org/pipermail/openstack-
  dev/2015-July/069156.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1473369/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1468698] Re: Image-update api returns 500 while passing --min-ram and --min-disk greater than 2^(31) max value

2015-06-29 Thread Mike Fedosin
** This bug is no longer a duplicate of bug 1460060
   Glance v1 and v2 api returns 500 while passing --min-ram and --min-disk 
greater than 2^(31) max value

** Changed in: glance
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1468698

Title:
  Image-update api returns 500 while passing --min-ram and --min-disk
  greater than 2^(31) max value

Status in OpenStack Image Registry and Delivery Service (Glance):
  Confirmed

Bug description:
  $ glance image-update b3886698-04c3-4621-9a04-4a587d3288d1 --min-ram 
234578
  HTTPInternalServerError (HTTP 500)

  $ glance image-update b3886698-04c3-4621-9a04-4a587d3288d1 --min-disk 
234578
  HTTPInternalServerError (HTTP 500)
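
  For illustration, the kind of bounds check that would turn the 500 into a
  clean 400 (a sketch only; the helper name is invented, not Glance's actual
  validation code): min_ram and min_disk land in 32-bit integer columns, so
  anything outside 0..2**31-1 should be rejected up front.

    MAX_INT32 = 2**31 - 1

    def validate_min_value(name, value):
        # Reject values the database column cannot hold.
        if not (0 <= int(value) <= MAX_INT32):
            raise ValueError("%s must be between 0 and %d" % (name, MAX_INT32))

    validate_min_value('min_disk', 100)      # fine
    validate_min_value('min_disk', 2**31)    # raises ValueError -> HTTP 400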

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1468698/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1469817] [NEW] Glance doesn't handle exceptions from glance_store

2015-06-29 Thread Mike Fedosin
Public bug reported:

The server API expects to catch exceptions declared in
glance/common/exception.py, but the exceptions actually raised have the
same names while being declared in a different module,
glance_store/exceptions.py, and thus are never caught.

For example, if an exception is raised here:
https://github.com/openstack/glance_store/blob/stable/kilo/glance_store/_drivers/rbd.py#L316
it will never be caught here:
https://github.com/openstack/glance/blob/stable/kilo/glance/api/v1/images.py#L1107
because the former is an instance of
https://github.com/openstack/glance_store/blob/stable/kilo/glance_store/exceptions.py#L198
while Glance expects
https://github.com/openstack/glance/blob/stable/kilo/glance/common/exception.py#L293

There are many cases of this issue. The investigation continues.
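
A minimal illustration of the miss (using StorageFull as an example name
assumed to exist in both trees; running it requires glance and glance_store
installed):

    import glance_store.exceptions as store_exc
    import glance.common.exception as glance_exc

    def save_image():
        # what the store driver actually raises
        raise store_exc.StorageFull()

    try:
        save_image()
    except glance_exc.StorageFull:
        print("glance handler")   # never reached: unrelated class
    except store_exc.StorageFull:
        print("store handler")    # this is the clause that matches

One possible direction is to catch the glance_store exceptions at the store
boundary and re-raise the glance-side equivalents, so the existing handlers
keep working.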

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1469817

Title:
  Glance doesn't handle exceptions from glance_store

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  The server API expects to catch exceptions declared in
  glance/common/exception.py, but the exceptions actually raised have the
  same names while being declared in a different module,
  glance_store/exceptions.py, and thus are never caught.

  For example, if an exception is raised here:
  https://github.com/openstack/glance_store/blob/stable/kilo/glance_store/_drivers/rbd.py#L316
  it will never be caught here:
  https://github.com/openstack/glance/blob/stable/kilo/glance/api/v1/images.py#L1107
  because the former is an instance of
  https://github.com/openstack/glance_store/blob/stable/kilo/glance_store/exceptions.py#L198
  while Glance expects
  https://github.com/openstack/glance/blob/stable/kilo/glance/common/exception.py#L293

  There are many cases of this issue. The investigation continues.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1469817/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467589] Re: Remove Cinder V1 support

2015-06-23 Thread Mike Perez
** Also affects: tempest
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1467589

Title:
  Remove Cinder V1 support

Status in Cinder:
  In Progress
Status in OpenStack Compute (Nova):
  In Progress
Status in Rally:
  In Progress
Status in Tempest:
  New

Bug description:
  Cinder created v2 support in the Grizzly release. This is to track
  progress in removing v1 support in other projects.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1467589/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467589] [NEW] Remove Cinder V1 support

2015-06-22 Thread Mike Perez
Public bug reported:

Cinder created v2 support in the Grizzly release. This is to track
progress in removing v1 support in other projects.

** Affects: cinder
 Importance: Undecided
 Assignee: Mike Perez (thingee)
 Status: In Progress

** Affects: nova
 Importance: Undecided
 Assignee: Mike Perez (thingee)
 Status: In Progress

** Affects: rally
 Importance: Undecided
 Assignee: Ivan Kolodyazhny (e0ne)
 Status: In Progress

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => In Progress

** Changed in: nova
 Assignee: (unassigned) => Mike Perez (thingee)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1467589

Title:
  Remove Cinder V1 support

Status in Cinder:
  In Progress
Status in OpenStack Compute (Nova):
  In Progress
Status in Rally:
  In Progress

Bug description:
  Cinder created v2 support in the Grizzly release. This is to track
  progress in removing v1 support in other projects.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1467589/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1415087] Re: [OSSA 2015-011] Format-guessing and file disclosure in image convert (CVE-2015-1850, CVE-2015-1851)

2015-06-17 Thread Mike Perez
** Also affects: cinder/icehouse
   Importance: Undecided
   Status: New

** Also affects: cinder/juno
   Importance: Undecided
   Status: New

** Also affects: cinder/kilo
   Importance: Undecided
   Status: New

** Changed in: cinder
 Milestone: None => liberty-1

** Changed in: cinder/icehouse
 Assignee: (unassigned) => Eric Harney (eharney)

** Changed in: cinder/juno
 Assignee: (unassigned) => Eric Harney (eharney)

** Changed in: cinder/kilo
 Assignee: (unassigned) => Eric Harney (eharney)

** Changed in: cinder/icehouse
   Importance: Undecided => High

** Changed in: cinder/juno
   Importance: Undecided => High

** Changed in: cinder/kilo
   Status: New => Fix Committed

** Changed in: cinder/kilo
   Importance: Undecided => High

** Changed in: cinder/icehouse
   Status: New => Fix Committed

** Changed in: cinder/juno
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1415087

Title:
  [OSSA 2015-011] Format-guessing and file disclosure in image convert
  (CVE-2015-1850, CVE-2015-1851)

Status in Cinder:
  Fix Committed
Status in Cinder icehouse series:
  Fix Committed
Status in Cinder juno series:
  Fix Committed
Status in Cinder kilo series:
  Fix Committed
Status in OpenStack Compute (Nova):
  Triaged
Status in OpenStack Security Advisories:
  Fix Committed

Bug description:
  Cinder does not provide input format to several calls of qemu-img
  convert. This allows the attacker to play the format guessing by
  providing a volume with a qcow2 signature. If this signature contains
  a base file, this file will be read by a process running as root and
  embedded in the output. This bug is similar to CVE-2013-1922.

  Tested with: lvm backed volume storage, it may apply to others as well
  Steps to reproduce:
  - create volume and attach to vm,
  - create a qcow2 signature with base-file[1] from within the vm and
  - trigger upload to glance with cinder upload-to-image --disk-type qcow2[2].
  The image uploaded to glance will have /etc/passwd from the cinder-volume 
host embedded.
  Affected versions: tested on 2014.1.3, found while reading 2014.2.1

  Fix: Always specify both the input format (-f) and the output format
  (-O) to qemu-img convert. The code is in module cinder.image.image_utils.

  Bastian Blank

  [1]: qemu-img create -f qcow2 -b /etc/passwd /dev/vdb
  [2]: The disk-type != raw triggers the use of qemu-img convert
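
  For illustration, the shape of the fix (a sketch, not the actual
  cinder.image.image_utils code; the wrapper name is invented):

    import subprocess

    def convert_image(source, dest, src_format, dest_format):
        # Pinning -f prevents qemu-img from guessing the input format, so a
        # crafted qcow2 header with a backing file is treated as plain data.
        cmd = ['qemu-img', 'convert',
               '-f', src_format,      # explicit input format (the fix)
               '-O', dest_format,     # explicit output format
               source, dest]
        subprocess.check_call(cmd)

    # e.g. convert_image('/dev/mapper/vol-x', '/tmp/img', 'raw', 'qcow2')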

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1415087/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463466] [NEW] Option use_user_token is created twice

2015-06-09 Thread Mike Fedosin
Public bug reported:

In glance we have two places where we register use_user_token option:
https://github.com/openstack/glance/blob/stable/kilo/glance/common/store_utils.py#L33
https://github.com/openstack/glance/blob/stable/kilo/glance/registry/client/__init__.py#L55

oslo.config considers them as one, because they have the same name and
help string, but changing help string in one of them leads to an
exception DuplicateOptError: duplicate option: use_user_token

It seems that we should remove the option creation in store_utils and
leave only one declaration in the registry client.
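
A self-contained illustration of the behaviour described above (a sketch using
a throwaway ConfigOpts instance, not glance code):

    from oslo_config import cfg

    conf = cfg.ConfigOpts()
    opt_a = cfg.BoolOpt('use_user_token', default=True, help='same help')
    opt_b = cfg.BoolOpt('use_user_token', default=True, help='same help')
    opt_c = cfg.BoolOpt('use_user_token', default=True, help='other help')

    conf.register_opt(opt_a)
    conf.register_opt(opt_b)       # no-op: the definitions are identical
    try:
        conf.register_opt(opt_c)   # differing help string
    except cfg.DuplicateOptError as e:
        print(e)                   # duplicate option: use_user_token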

** Affects: glance
 Importance: Undecided
 Assignee: Mike Fedosin (mfedosin)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Mike Fedosin (mfedosin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1463466

Title:
  Option use_user_token is created twice

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  In glance we have two places where we register use_user_token option:
  
https://github.com/openstack/glance/blob/stable/kilo/glance/common/store_utils.py#L33
  
https://github.com/openstack/glance/blob/stable/kilo/glance/registry/client/__init__.py#L55

  oslo.config considers them as one, because they have the same name and
  help string, but changing help string in one of them leads to an
  exception DuplicateOptError: duplicate option: use_user_token

  It seems that we should remove the option creation in store_utils and
  leave only one declaration in the registry client.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1463466/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1462871] [NEW] L2Population on OVS broken due to ofctl refactoring

2015-06-07 Thread Mike Kolesnik
Public bug reported:

The refactor [1] to separate the ofctl logic into a driver broke L2pop on OVS.

The L2 agent shows this error when receiving a call to add_tunnel_port:

2015-06-08 04:33:50.287 DEBUG neutron.agent.l2population_rpc 
[req-a3dcc834-e97d-471b-8cae-02b6b0c58325 None None] 
neutron.plugins.openvswitch.
agent.ovs_neutron_agent.OVSNeutronAgent method fdb_add_tun called with 
arguments (neutron.context.Context object at 0x4421510, neutron.plug
ins.openvswitch.agent.openflow.ovs_ofctl.br_tun.DeferredOVSTunnelBridge object 
at 0x44213d0, neutron.plugins.openvswitch.agent.ovs_neutron_a
gent.LocalVLANMapping object at 0x3c43510, {u'10.35.6.102': 
[PortInfo(mac_address=u'00:00:00:00:00:00', ip_address=u'0.0.0.0'), 
PortInfo(mac_
address=u'fa:16:3e:c6:17:9f', ip_address=u'10.0.0.2'), 
PortInfo(mac_address=u'fa:16:3e:c6:17:9f', 
ip_address=u'fd59:ade1:1482:0:f816:3eff:fec6
:179f')]}, bound method OVSNeutronAgent._tunnel_port_lookup of 
neutron.plugins.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent object at
 0x3c43310) {} from (pid=14807) wrapper 
/usr/lib/python2.7/site-packages/oslo_log/helpers.py:45
2015-06-08 04:33:50.287 ERROR neutron.agent.common.ovs_lib 
[req-a3dcc834-e97d-471b-8cae-02b6b0c58325 None None] OVS flows could not be 
applied
 on bridge br-tun
2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib Traceback (most 
recent call last):
2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib   File 
/opt/openstack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.
py, line 448, in fdb_add
2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib agent_ports, 
self._tunnel_port_lookup)
2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib   File 
/usr/lib/python2.7/site-packages/oslo_log/helpers.py, line 46, in wrapper
2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib return 
method(*args, **kwargs)
2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib   File 
/opt/openstack/neutron/neutron/agent/l2population_rpc.py, line 234, in fdb
_add_tun
2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib lvm.network_type)
2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib   File 
/opt/openstack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.
py, line 1169, in setup_tunnel_port
2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib network_type)
2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib   File 
/opt/openstack/neutron/neutron/plugins/openvswitch/agent/ovs_neutron_agent.
py, line 1135, in _setup_tunnel_port
2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib ofport = 
br.add_tunnel_port(port_name,
2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib   File 
/opt/openstack/neutron/neutron/plugins/openvswitch/agent/openflow/ovs_ofctl/br_tun.py,
 line 246, in __getattr__
2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib raise 
AttributeError(name)
2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib AttributeError: 
add_tunnel_port
2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib 

[1] https://review.openstack.org/#/c/160245/
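
For context, a stripped-down model of the delegation pattern behind the
traceback (the class and whitelist method names below are only illustrative,
not copied from the neutron code): the deferred bridge forwards an explicit
list of methods, so anything else, such as add_tunnel_port, falls through to
__getattr__ and raises AttributeError.

    class DeferredBridge(object):
        _ALLOWED = ('install_flood_to_tun', 'delete_flood_to_tun')

        def __init__(self, bridge):
            self._bridge = bridge

        def __getattr__(self, name):
            # Only whitelisted methods are proxied to the real bridge.
            if name in self._ALLOWED:
                return getattr(self._bridge, name)
            raise AttributeError(name)   # what the L2 agent hits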

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l2-pop ovs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1462871

Title:
  L2Population on OVS broken due to ofctl refactoring

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The refactor [1] to separate the ofctl logic into a driver broke L2pop
  on OVS.

  The L2 agent shows this error when receiving a call to
  add_tunnel_port:

  2015-06-08 04:33:50.287 DEBUG neutron.agent.l2population_rpc 
[req-a3dcc834-e97d-471b-8cae-02b6b0c58325 None None] 
neutron.plugins.openvswitch.
  agent.ovs_neutron_agent.OVSNeutronAgent method fdb_add_tun called with 
arguments (neutron.context.Context object at 0x4421510, neutron.plug
  ins.openvswitch.agent.openflow.ovs_ofctl.br_tun.DeferredOVSTunnelBridge 
object at 0x44213d0, neutron.plugins.openvswitch.agent.ovs_neutron_a
  gent.LocalVLANMapping object at 0x3c43510, {u'10.35.6.102': 
[PortInfo(mac_address=u'00:00:00:00:00:00', ip_address=u'0.0.0.0'), 
PortInfo(mac_
  address=u'fa:16:3e:c6:17:9f', ip_address=u'10.0.0.2'), 
PortInfo(mac_address=u'fa:16:3e:c6:17:9f', 
ip_address=u'fd59:ade1:1482:0:f816:3eff:fec6
  :179f')]}, bound method OVSNeutronAgent._tunnel_port_lookup of 
neutron.plugins.openvswitch.agent.ovs_neutron_agent.OVSNeutronAgent object at
   0x3c43310) {} from (pid=14807) wrapper 
/usr/lib/python2.7/site-packages/oslo_log/helpers.py:45
  2015-06-08 04:33:50.287 ERROR neutron.agent.common.ovs_lib 
[req-a3dcc834-e97d-471b-8cae-02b6b0c58325 None None] OVS flows could not be 
applied
   on bridge br-tun
  2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib Traceback (most 
recent call last):
  2015-06-08 04:33:50.287 TRACE neutron.agent.common.ovs_lib   File 

[Yahoo-eng-team] [Bug 1461572] [NEW] Minimum qemu version for discard support is 1.5

2015-06-03 Thread Mike Lowe
Public bug reported:

While the minimum version of qemu that supports discard on qcow2 is 1.6,
it is incorrect to limit discard support to this version, as qemu 1.5
supports discard for iscsi and rbd and possibly others.  The release
notes are clear: with qemu 1.5 and later, discard support depends on the
qemu driver used.

file:
nova/virt/libvirt/driver.py
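
A sketch of the version gate being argued for (illustrative constants and
function only, not the nova code): require 1.6 only where it is actually
needed and accept 1.5 otherwise.

    MIN_QEMU_DISCARD = (1, 5)          # iscsi, rbd, ... per the release notes
    MIN_QEMU_DISCARD_QCOW2 = (1, 6)    # qcow2 needs the newer qemu

    def discard_supported(qemu_version, image_format):
        minimum = (MIN_QEMU_DISCARD_QCOW2 if image_format == 'qcow2'
                   else MIN_QEMU_DISCARD)
        return qemu_version >= minimum

    print(discard_supported((1, 5, 0), 'rbd'))     # True
    print(discard_supported((1, 5, 0), 'qcow2'))   # False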

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1461572

Title:
  Minimum qemu version for discard support is 1.5

Status in OpenStack Compute (Nova):
  New

Bug description:
  While the minimum version of qemu that supports discard on qcow2 is
  1.6, it is incorrect to limit discard support to this version, as qemu
  1.5 supports discard for iscsi and rbd and possibly others.  The
  release notes are clear: with qemu 1.5 and later, discard support
  depends on the qemu driver used.

  file:
  nova/virt/libvirt/driver.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1461572/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460741] [NEW] security groups iptables can block legitimate traffic as INVALID

2015-06-01 Thread Mike Dorman
Public bug reported:

The iptables implementation of security groups includes a default rule
to drop any INVALID packets (according to the Linux connection state
tracking system.)  It looks like this:

-A neutron-openvswi-od0518220-e -m state --state INVALID -j DROP

This is placed near the top of the rule stack, before any security group
rules added by the user.  See:

https://github.com/openstack/neutron/blob/stable/kilo/neutron/agent/linux/iptables_firewall.py#L495
https://github.com/openstack/neutron/blob/stable/kilo/neutron/agent/linux/iptables_firewall.py#L506-L510

However, there are some cases where you would not want traffic marked as
INVALID to be dropped here.  Specifically, our use case:

We have a load balancing scheme where requests from the LB are tunneled
as IP-in-IP encapsulation between the LB and the VM.  Response traffic
is configured for DSR, so the responses go directly out the default
gateway of the VM.

The results of this are iptables on the hypervisor does not see the
initial SYN from the LB to VM (because it is encapsulated in IP-in-IP),
and thus it does not make it into the connection table.  The response
that comes out of the VM (not encapsulated) hits iptables on the
hypervisor and is dropped as invalid.

I'd like to see a Neutron option to enable/disable the population of
this INVALID state rule, so that operators (such as us) can disable it
if desired.  Obviously it's better in general to keep it in there to
drop invalid packets, but there are cases where you would like to not do
this.
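
A sketch of the opt-out being requested (hypothetical: the option name
drop_invalid_packets is invented here, not an existing neutron setting):

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.BoolOpt('drop_invalid_packets', default=True,
                    help='Add the conntrack INVALID drop rule to each port '
                         'chain. Operators using asymmetric schemes such as '
                         'DSR load balancing may want to disable this.'),
    ])

    def base_port_rules():
        rules = []
        if CONF.drop_invalid_packets:
            # Only emit the INVALID drop rule when the operator wants it.
            rules.append('-m state --state INVALID -j DROP')
        return rules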

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460741

Title:
  security groups iptables can block legitimate traffic as INVALID

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The iptables implementation of security groups includes a default rule
  to drop any INVALID packets (according to the Linux connection state
  tracking system.)  It looks like this:

  -A neutron-openvswi-od0518220-e -m state --state INVALID -j DROP

  This is placed near the top of the rule stack, before any security
  group rules added by the user.  See:

  
https://github.com/openstack/neutron/blob/stable/kilo/neutron/agent/linux/iptables_firewall.py#L495
  
https://github.com/openstack/neutron/blob/stable/kilo/neutron/agent/linux/iptables_firewall.py#L506-L510

  However, there are some cases where you would not want traffic marked
  as INVALID to be dropped here.  Specifically, our use case:

  We have a load balancing scheme where requests from the LB are
  tunneled as IP-in-IP encapsulation between the LB and the VM.
  Response traffic is configured for DSR, so the responses go directly
  out the default gateway of the VM.

  The results of this are iptables on the hypervisor does not see the
  initial SYN from the LB to VM (because it is encapsulated in IP-in-
  IP), and thus it does not make it into the connection table.  The
  response that comes out of the VM (not encapsulated) hits iptables on
  the hypervisor and is dropped as invalid.

  I'd like to see a Neutron option to enable/disable the population of
  this INVALID state rule, so that operators (such as us) can disable it
  if desired.  Obviously it's better in general to keep it in there to
  drop invalid packets, but there are cases where you would like to not
  do this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1460741/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1268680] Re: Creating an image without container format queues image and fails with 400

2015-05-07 Thread Mike Fedosin
** Changed in: glance
 Assignee: (unassigned) => Mike Fedosin (mfedosin)

** Changed in: glance
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1268680

Title:
  Creating an image without container format queues image and fails with
  400

Status in OpenStack Image Registry and Delivery Service (Glance):
  Confirmed

Bug description:
  Description of problem:

  Creating an image from the CLI without --container-format queues the
  image and then fails with 400.

  Request returned failure status.
  400 Bad Request
  Invalid container format 'None' for image.
  (HTTP 400)

  
  How reproducible:
  # glance --debug image-create --name cirros --disk-format qcow2 --file 
/tmp/cirros-image.qcow2 --progress
  snip
  [=] 100%

  HTTP/1.1 400 Bad Request
  date: Tue, 07 Jan 2014 14:13:54 GMT
  content-length: 64
  content-type: text/plain; charset=UTF-8
  x-openstack-request-id: req-11b4ecad-3a8d-4e44-9c37-a4d843805889

  400 Bad Request

  Invalid container format 'None' for image.


  # glance image-list
  
+---+---+-+++---+
  | ID  
  | Name| Disk Format | Container Format | Size  | Status|
  
+---+---+-+++---+
  | b2490dd2-b535-4b98-8647-cca428a63e01 | cirros | qcow2   |   
  | 307962880 | queued |
  
+---+---+-+++---+

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1268680/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1445675] [NEW] missing index on virtual_interfaces can cause long queries that can cause timeouts in launching instances

2015-04-17 Thread Mike Bayer
;
  id: 1, select_type: SIMPLE, table: virtual_interfaces, type: ref,
  possible_keys: vuidx, key: vuidx, key_len: 111, ref: const, rows: 1,
  Extra: Using index condition; Using where
1 row in set (0.00 sec)


and we get 0.00 response time for both queries:

MariaDB [nova]> SELECT virtual_interfaces.created_at , 
virtual_interfaces.updated_at , virtual_interfaces.deleted_at , 
virtual_interfaces.deleted , virtual_interfaces.id , virtual_interfaces.address 
, virtual_interfaces.network_id , virtual_interfaces.instance_uuid , 
virtual_interfaces.uuid FROM virtual_interfaces WHERE 
virtual_interfaces.deleted = 0 AND virtual_interfaces.uuid = 
'0a269012-cbc7-4093-9602-35f003a766c5'  LIMIT 1;
Empty set (0.00 sec)

MariaDB [nova]> SELECT virtual_interfaces.created_at , 
virtual_interfaces.updated_at , virtual_interfaces.deleted_at , 
virtual_interfaces.deleted , virtual_interfaces.id , virtual_interfaces.address 
, virtual_interfaces.network_id , virtual_interfaces.instance_uuid , 
virtual_interfaces.uuid FROM virtual_interfaces WHERE 
virtual_interfaces.deleted = 0 AND virtual_interfaces.uuid = 
'0a269012-cbc7-4093-9602-35f003a766c4'  LIMIT 1;
  created_at: 2014-08-12 22:22:14, updated_at: NULL, deleted_at: NULL,
  deleted: 0, id: 58393, address: address_58393, network_id: 22,
  instance_uuid: 41f1b859-8c5d-4c27-a52e-3e97652dfe7a,
  uuid: 0a269012-cbc7-4093-9602-35f003a766c4
1 row in set (0.00 sec)


Whether or not the index includes 'deleted' doesn't really matter: if we're
searching by UUID, we get that UUID row first and the deleted=0 check is
applied afterwards, which is not a big deal.

For an immediate fix,  I propose to add the aforementioned index to the
virtual_interfaces.uuid column.
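
A sketch of what that could look like as a migration (illustrative; the index
and file naming would follow nova's existing migration conventions):

    from sqlalchemy import Index, MetaData, Table

    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        vifs = Table('virtual_interfaces', meta, autoload=True)
        # Index the column used by the lookup-by-uuid query above.
        Index('virtual_interfaces_uuid_idx', vifs.c.uuid).create(migrate_engine)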

** Affects: nova
 Importance: Undecided
 Assignee: Mike Bayer (zzzeek)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1445675

Title:
  missing index on virtual_interfaces can cause long queries that can
  cause timeouts in launching instances

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  In a load test where a nova environment w/ networking enabled was set
  up to have ~250K instances,  attempting to launch 50 instances would
  cause many to time out, with the error "Timeout while waiting on RPC
  response - topic: network, RPC method: allocate_for_instance".
  The tester isolated the latency to queries against the
  virtual_interfaces table, which in this test are executed some 500
  times, spending ~0.5 seconds per query for a total of ~200 seconds.  An
  example query looks like:

  SELECT virtual_interfaces.created_at , virtual_interfaces.updated_at , 
virtual_interfaces.deleted_at , virtual_interfaces.deleted , 
virtual_interfaces.id , virtual_interfaces.address , 
virtual_interfaces.network_id , virtual_interfaces.instance_uuid , 
virtual_interfaces.uuid FROM virtual_interfaces WHERE 
virtual_interfaces.deleted = 0 AND virtual_interfaces.uuid = 
'9774e729-7695-4e2b-a9b2-a104a4b020d0'
  LIMIT 1;

  Query profiling against this table /query directly proceeded as
  follows:

  I scripted up direct DB access to get 250K rows in a blank database:

  MariaDB [nova]> select count(*) from virtual_interfaces;
  +--+
  | count(*) |
  +--+
  |   25 |
  +--+
  1 row in set (0.09 sec)

  emitting the query when the row is found, on this particular system is
  returning in .03 sec:

  MariaDB [nova]> SELECT virtual_interfaces.created_at , 
virtual_interfaces.updated_at , virtual_interfaces.deleted_at , 
virtual_interfaces.deleted , virtual_interfaces.id , virtual_interfaces.address 
, virtual_interfaces.network_id , virtual_interfaces.instance_uuid , 
virtual_interfaces.uuid FROM virtual_interfaces WHERE 
virtual_interfaces.deleted = 0 AND virtual_interfaces.uuid

[Yahoo-eng-team] [Bug 1432490] Re: TestEncryptedCinderVolumes cryptsetup name is too long

2015-03-16 Thread Mike Perez
Going to take John's suggestion of just passing a uuid instead of the
volume name in the iqn.

** Changed in: cinder
   Status: New => Incomplete

** Changed in: nova
   Status: New => Invalid

** Changed in: cinder
   Status: Incomplete => Invalid

** Changed in: tempest
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1432490

Title:
  TestEncryptedCinderVolumes cryptsetup name is too long

Status in Cinder:
  Invalid
Status in OpenStack Compute (Nova):
  Invalid
Status in Tempest:
  Invalid

Bug description:
  First off, while I understand this is not reproducible with the
  reference implementation LVM, this seems like an unknown limitation
  today since we're not enforcing any length on the IQN or recommending
  anything.

  When running Datera storage with Cinder and the following
  TestEncryptedCinderVolumes tests:

  {0} 
tempest.scenario.test_encrypted_cinder_volumes.TestEncryptedCinderVolumes.test_encrypted_cinder_volumes_cryptsetup
  {0} 
tempest.scenario.test_encrypted_cinder_volumes.TestEncryptedCinderVolumes.test_encrypted_cinder_volumes_luks

  cryptsetup complains about the name being too long:

  http://paste.openstack.org/show/192537

  Nova uses the device name that's in /dev/disk-by-path, which in this
  case is the returned iqn from the backend:

  ip-172.30.128.2:3260-iscsi-iqn.2013-05.com.daterainc:OpenStack-
  TestEncryptedCinderVolumes-676292884:01:sn:aef6a6f1cd84768f-lun-0

  Already started talking to Matt Treinish about this on IRC last week.
  Unsure where the fix should actually go.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1432490/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1432490] [NEW] TestEncryptedCinderVolumes cryptsetup name is too long

2015-03-15 Thread Mike Perez
Public bug reported:

When running Datera storage with Cinder and the following
TestEncryptedCinderVolumes tests:

{0} 
tempest.scenario.test_encrypted_cinder_volumes.TestEncryptedCinderVolumes.test_encrypted_cinder_volumes_cryptsetup
 
{0} 
tempest.scenario.test_encrypted_cinder_volumes.TestEncryptedCinderVolumes.test_encrypted_cinder_volumes_luks

cryptsetup complains about the name being too long:

http://paste.openstack.org/show/192537

Nova uses the device name that's in /dev/disk-by-path, which in this
case is the returned iqn from the backend:

ip-172.30.128.2:3260-iscsi-iqn.2013-05.com.daterainc:OpenStack-
TestEncryptedCinderVolumes-676292884:01:sn:aef6a6f1cd84768f-lun-0

Already started talking to Matt Treinish about this on IRC last week.
Unsure where the fix should actually go. While I understand this is
not reproducible with the reference implementation LVM, this seems like
an unknown limitation today since we're not enforcing any length on the
IQN or recommending anything.

** Affects: cinder
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: tempest
 Importance: Undecided
 Status: New

** Also affects: cinder
   Importance: Undecided
   Status: New

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1432490

Title:
  TestEncryptedCinderVolumes cryptsetup name is too long

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  New
Status in Tempest:
  New

Bug description:
  When running Datera storage with Cinder and the following
  TestEncryptedCinderVolumes tests:

  {0} 
tempest.scenario.test_encrypted_cinder_volumes.TestEncryptedCinderVolumes.test_encrypted_cinder_volumes_cryptsetup
 
  {0} 
tempest.scenario.test_encrypted_cinder_volumes.TestEncryptedCinderVolumes.test_encrypted_cinder_volumes_luks

  cryptsetup complains about the name being too long:

  http://paste.openstack.org/show/192537

  Nova uses the device name that's in /dev/disk-by-path, which in this
  case is the returned iqn from the backend:

  ip-172.30.128.2:3260-iscsi-iqn.2013-05.com.daterainc:OpenStack-
  TestEncryptedCinderVolumes-676292884:01:sn:aef6a6f1cd84768f-lun-0

  Already started talking to Matt Treinish about this on IRC last week.
  Unsure where the fix should actually go. While I understand this is
  not reproducible with the reference implementation LVM, this seems
  like an unknown limitation today since we're not enforcing any length
  on the IQN or recommending anything.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1432490/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1431571] [NEW] ArchiveTestCase erroneously assumes the tables that are populated

2015-03-12 Thread Mike Bayer
Public bug reported:

Running subsets of Nova tests or individual tests within test_db_api
reveals a simple error in several of the tests within ArchiveTestCase.

A test such as test_archive_deleted_rows_2_tables attempts the
following:

1. places six rows into instance_id_mappings
2. places six rows into instances
3. runs the archive_deleted_rows_ routine with a max of 7 rows to archive
4. runs a SELECT of instances and instance_id_mappings, and confirms that only 
5 remain.

Running this test directly with PYTHONHASHSEED=random will very easily
encounter failures such as:

Traceback (most recent call last):
  File 
/Users/classic/dev/redhat/openstack/nova/nova/tests/unit/db/test_db_api.py, 
line 7869, in test_archive_deleted_rows_2_tables
self.assertEqual(len(iim_rows) + len(i_rows), 5)
  File 
/Users/classic/dev/redhat/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py,
 line 350, in assertEqual
self.assertThat(observed, matcher, message)
  File 
/Users/classic/dev/redhat/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py,
 line 435, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: 8 != 5


or 

Traceback (most recent call last):
  File 
/Users/classic/dev/redhat/openstack/nova/nova/tests/unit/db/test_db_api.py, 
line 7872, in test_archive_deleted_rows_2_tables
self.assertEqual(len(iim_rows) + len(i_rows), 5)
  File 
/Users/classic/dev/redhat/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py,
 line 350, in assertEqual
self.assertThat(observed, matcher, message)
  File 
/Users/classic/dev/redhat/openstack/nova/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py,
 line 435, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: 10 != 5


The reason is that the archive_deleted_rows() routine looks for rows in *all* 
tables, in *non-deterministic order*, e.g. by searching through 
models.__dict__.itervalues().   In the 8 != 5 case, there are rows present 
also in the instance_types table.  By PDBing into archive_deleted_rows during 
the test, we can see here:

ARCHIVED 4 ROWS FROM TABLE instances
ARCHIVED 3 ROWS FROM TABLE instance_types
Traceback (most recent call last):
...
testtools.matchers._impl.MismatchError: 8 != 5

that is, the archiver locates seven rows just between instances and
instance_types, then stops.  It never even gets to the
instance_id_mappings table.

The serious problem with the way this test is designed is that even if we
made it ignore certain tables, fixed the ordering, or anything else, nothing
would keep the test from breaking again any time a new table is added which
contains rows when the test fixtures start.

The only solution to making these tests runnable in their current form
is to limit the listing of tables that are searched in
archive_deleted_rows; that is, the test needs to inject a fixture into
it.  The most straightforward way to achieve this would look like this:

 @require_admin_context
-def archive_deleted_rows(context, max_rows=None):
+def archive_deleted_rows(context, max_rows=None,
+                         _limit_tablenames_fixture=None):
     """Move up to max_rows rows from production tables to the corresponding
     shadow tables.
     """
@@ -5870,6 +5870,9 @@ def archive_deleted_rows(context, max_rows=None):
         if hasattr(model_class, "__tablename__"):
             tablenames.append(model_class.__tablename__)
     rows_archived = 0
+    if _limit_tablenames_fixture:
+        tablenames = set(tablenames).intersection(_limit_tablenames_fixture)
+
     for tablename in tablenames:
         rows_archived += archive_deleted_rows_for_table(context, tablename,
                                          max_rows=max_rows - rows_archived)

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1431571

Title:
  ArchiveTestCase erroneously assumes the tables that are populated

Status in OpenStack Compute (Nova):
  New

Bug description:
  Running subsets of Nova tests or individual tests within test_db_api
  reveals a simple error in several of the tests within ArchiveTestCase.

  A test such as test_archive_deleted_rows_2_tables attempts the
  following:

  1. places six rows into instance_id_mappings
  2. places six rows into instances
  3. runs the archive_deleted_rows_ routine with a max of 7 rows to archive
  4. runs a SELECT of instances and instance_id_mappings, and confirms that 
only 5 remain.

  Running this test directly with PYTHONHASHSEED=random will very easily
  encounter failures such as:

  Traceback (most recent call last):
File 
/Users/classic/dev/redhat/openstack/nova/nova/tests/unit/db/test_db_api.py, 
line 7869, in test_archive_deleted_rows_2_tables
  self.assertEqual(len(iim_rows) + len(i_rows), 5)
File 

[Yahoo-eng-team] [Bug 1428072] [NEW] Don't allow to resize down the default ephemeral disk

2015-03-04 Thread Mike Durnosvistov
Public bug reported:

If we create an instance with the default ephemeral disk and then resize
it down, we get a wrong ephemeral size.

eph_size = (block_device.get_bdm_ephemeral_disk_size(ephemerals)

It will return 0 if you don't create an ephemeral disk explicitly by means
of --block-device: nova will create a 'default' ephemeral disk for you,
which won't be listed in the block device mapping.
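
A sketch of the guard this implies (illustrative helper and field names, not
the actual nova code): fall back to the flavor's ephemeral_gb when no
ephemeral BDM is listed, so the comparison is not made against 0.

    def current_ephemeral_gb(ephemeral_bdms, current_flavor):
        size = sum(bdm.get('volume_size') or 0 for bdm in ephemeral_bdms)
        if size == 0:
            # Default ephemeral disk created by nova, absent from the BDMs.
            size = current_flavor['ephemeral_gb']
        return size

    def check_resize_down(ephemeral_bdms, current_flavor, new_flavor):
        if new_flavor['ephemeral_gb'] < current_ephemeral_gb(ephemeral_bdms,
                                                             current_flavor):
            raise ValueError("Resizing the ephemeral disk down is not allowed")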

** Affects: nova
 Importance: Undecided
 Assignee: Mike Durnosvistov (mdurnosvistov)
 Status: New


** Tags: libvirt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1428072

Title:
  Don't allow to resize down the default ephemeral disk

Status in OpenStack Compute (Nova):
  New

Bug description:
  If we create an instance with the default ephemeral disk and then
  resize it down, we get a wrong ephemeral size.

  eph_size = (block_device.get_bdm_ephemeral_disk_size(ephemerals)

  It will return 0 if you don't create an ephemeral disk explicitly by
  means of --block-device: nova will create a 'default' ephemeral disk
  for you, which won't be listed in the block device mapping.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1428072/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1414232] [NEW] l3-agent restart fails to remove qrouter namespace

2015-01-23 Thread Mike Smith
Public bug reported:

When a router is removed while an l3-agent is stopped and then started
again, the qrouter namespace will fail to be destroyed because the driver
returns a 'Device or resource busy' error.  The reason for the error is
that the metadata proxy is still running in the namespace.

The metadata proxy code has recently been refactored and no longer is
called in the _destroy_router_namespace() method.  In the use case of
this bug, there is no ri/router object since it has been removed, only
the namespace remains.  The new before_router_removed() method requires
a router object.

Changes will be required in both the l3-agent code and metadata proxy
service code to resolve this bug.

** Affects: neutron
 Importance: Undecided
 Assignee: Mike Smith (michael-smith6)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Mike Smith (michael-smith6)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1414232

Title:
  l3-agent restart fails to remove qrouter namespace

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When a router is removed while an l3-agent is stopped and then started
  again, the qrouter namespace will fail to be destroyed because the
  driver returns a 'Device or resource busy' error.  The reason for the
  error is that the metadata proxy is still running in the namespace.

  The metadata proxy code has recently been refactored and no longer is
  called in the _destroy_router_namespace() method.  In the use case of
  this bug, there is no ri/router object since it has been removed, only
  the namespace remains.  The new before_router_removed() method
  requires a router object.

  Changes will be required in both the l3-agent code and metadata proxy
  service code to resolve this bug.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1414232/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1380806] Re: Shouldn't use unicode() when exception used in msgs

2015-01-19 Thread Mike Perez
** Also affects: cinder
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1380806

Title:
  Shouldn't use unicode() when exception used in msgs

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  There are cases (identified by mriedem) where an exception is used as
  replacement text and is coerced using unicode():

  
http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py?id=2014.2.rc2#n3264

 reason=_("Driver Error: %s") % unicode(e))

  
http://git.openstack.org/cgit/openstack/nova/tree/nova/api/ec2/__init__.py?id=2014.2.rc2#n89

 LOG.exception(_("FaultWrapper: %s"), unicode(ex))

  doing this can interfere with translation by causing things to be
  prematurely translated.

  Need to scan for and correct any occurrences.  Also need to look at
  adding/updating a hacking check.
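
  For illustration, the preferred shape of such calls (a sketch with plain
  logging rather than oslo i18n): pass the exception object itself and let
  the formatter coerce it lazily.

    import logging

    LOG = logging.getLogger(__name__)

    try:
        raise RuntimeError("disk full")
    except RuntimeError as ex:
        LOG.exception("FaultWrapper: %s", ex)    # not unicode(ex)
        reason = "Driver Error: %s" % ex         # coercion happens here, once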

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1380806/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398588] Re: volume_attach action registers volume attachment even on failure

2015-01-02 Thread Mike Perez
Reporter didn't provide a version. Using the latest from master, I was
not able to reproduce this issue (see below). Since the reporter
mentioned that Cinder does not know about this false attachment, but
Nova does, I would bet something is being set on the Nova side.

ubuntu@mount-issue:~/devstack$ nova list
+--+-+++-+--+
| ID   | Name| Status | Task State | Power 
State | Networks |
+--+-+++-+--+
| 57304c45-101d-4ce0-8f4b-6b7ad853d135 | server1 | ACTIVE | -  | 
Running | private=10.0.0.2 |
| 1cb270bd-2131-42fa-9f99-ed95cd077cde | server2 | ACTIVE | -  | 
Running | private=10.0.0.3 |
+--+-+++-+--+
ubuntu@mount-issue:~/devstack$ cinder list
+--+---+--+--+-+--+-+
|  ID  |   Status  | Name | Size | Volume Type 
| Bootable | Attached to |
+--+---+--+--+-+--+-+
| 2a57f161-0828-4b68-8f93-cd4493ff725b | available | None |  1   | lvmdriver-1 
|  false   | |
+--+---+--+--+-+--+-+
ubuntu@mount-issue:~/devstack$ nova volume-attach server1 
2a57f161-0828-4b68-8f93-cd4493ff725b
+--+--+
| Property | Value|
+--+--+
| device   | /dev/vdb |
| id   | 2a57f161-0828-4b68-8f93-cd4493ff725b |
| serverId | 57304c45-101d-4ce0-8f4b-6b7ad853d135 |
| volumeId | 2a57f161-0828-4b68-8f93-cd4493ff725b |
+--+--+
ubuntu@mount-issue:~/devstack$ nova volume-attach server2 
2a57f161-0828-4b68-8f93-cd4493ff725b
ERROR (BadRequest): Invalid volume: volume 
'2a57f161-0828-4b68-8f93-cd4493ff725b' status must be 'available'. Currently in 
'in-use' (HTTP 400) (Request-ID: req-bfa40f00-56f9-4535-a4d1-23a8cb69b908)
ubuntu@mount-issue:~/devstack$ nova volume-attach server2 
2a57f161-0828-4b68-8f93-cd4493ff725b
ERROR (BadRequest): Invalid volume: volume 
'2a57f161-0828-4b68-8f93-cd4493ff725b' status must be 'available'. Currently in 
'in-use' (HTTP 400) (Request-ID: req-83736950-ff20-4d93-b512-ab7ffc25c7bc)
ubuntu@mount-issue:~/devstack$ cinder list
+--++--+--+-+--+--+
|  ID  | Status | Name | Size | Volume Type | 
Bootable | Attached to  |
+--++--+--+-+--+--+
| 2a57f161-0828-4b68-8f93-cd4493ff725b | in-use | None |  1   | lvmdriver-1 |  
false   | 57304c45-101d-4ce0-8f4b-6b7ad853d135 |
+--++--+--+-+--+--+
ubuntu@mount-issue:~/devstack$ http 
http://localhost:8774/v2/b9903b32e6e94a5ab2ee40e217be0fab/servers/1cb270bd-2131-42fa-9f99-ed95cd077cde/os-volume_attachments
 X-Auth-Token:'e08bc40fd5f148cf817f99c1133da8aa'
HTTP/1.1 200 OK
Content-Length: 25
Content-Type: application/json
Date: Fri, 02 Jan 2015 18:37:27 GMT
X-Compute-Request-Id: req-5d213be2-60af-450a-b521-8f979d5a80e6

{
volumeAttachments: []
}

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: cinder
   Status: Confirmed => Invalid

** Changed in: cinder
   Status: Invalid => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1398588

Title:
  volume_attach action registers volume attachment even on failure

Status in Cinder:
  Incomplete
Status in OpenStack Compute (Nova):
  New

Bug description:
  When attaching volumes to instances, if the volume attachment fails, it is 
still noted as successful by the system in some cases.
  This is the information reflected when requesting the details of a servers 
volume attachments
  http://developer.openstack.org/api-ref-compute-v2-ext.html
  /v2/​{tenant_id}​/servers/​{server_id}​/os-volume_attachments
  Show volume attachment details

  In the example, I have 2 test servers and 1 test volume.
  I attach the volume to test_server1 and it is successful (though please see: 
https://bugs.launchpad.net/cinder/+bug/1398583)
  Next, I try to attach the same volume to test_server2.
  This call fails as expected, but the mountpoint / attachment is still 
registered.

  To demonstrate, I 

[Yahoo-eng-team] [Bug 1406598] [NEW] nova-cells doesn't url decode transport_url

2014-12-30 Thread Mike Dorman
Public bug reported:

When creating a cell using the nova-manage cell create command, the
transport_url generated in the database is url-encoded (i.e. '=' is
changed to '%3D', etc.)  That's probably the correct behavior, since the
connection string is stored as a URL.

However, nova-cells doesn't properly decode that string.  So for
transport_url credentials that contain url-encodable characters, nova-
cells uses the url encoded string, rather than the actual correct
credentials.

Steps to reproduce:

- Create a cell using nova-manage with credentials containing url-
encodable characters:

nova-manage cell create  --name=cell_02 --cell_type=child
--username='the=user' --password='the=password' --hostname='hostname'
--port=5672 --virtual_host=/ --woffset=1 --wscale=1

- nova.cells table now contains a url-encoded transport_url:

mysql> select * from cells \G
*** 1. row ***
   created_at: 2014-12-30 17:30:41
   updated_at: NULL
   deleted_at: NULL
   id: 3
  api_url: NULL
weight_offset: 1
 weight_scale: 1
 name: cell_02
is_parent: 0
  deleted: 0
transport_url: rabbit://the%3Duser:the%3Dpassword@hostname:5672//
1 row in set (0.00 sec)

- nova-cells uses the literal credentials 'the%3Duser' and
'the%3Dpassword' to connect to RMQ, rather than the correct 'the=user'
and 'the=password' credentials.
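
A minimal sketch of the missing decode step, run against the stored value
shown above; this is illustrative only and not the actual nova-cells code
(urllib.parse is the Python 3 spelling, six.moves.urllib works the same way
on Python 2):

    from urllib.parse import urlparse, unquote

    stored = "rabbit://the%3Duser:the%3Dpassword@hostname:5672//"
    parsed = urlparse(stored)

    # urlparse does not percent-decode the userinfo portion, so the
    # credentials must be unquoted before being handed to the RMQ driver.
    username = unquote(parsed.username)   # 'the=user'
    password = unquote(parsed.password)   # 'the=password'
    print(username, password)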

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

  When creating a cell using the nova-manage cell create command, the
  transport_url generated in the database is url-encoded (i.e. '=' is
  changed to '%3D', etc.)  That's probably the correct behavior, since the
  connection string is stored as a URL.
  
  However, nova-cells doesn't properly decode that string.  So for
  transport_url credentials that contain url-encodable characters, nova-
  cells uses the url encoded string, rather than the actual correct
  credentials.
  
  Steps to reproduce:
  
  - Create a cell using nova-manage with credentials containing url-
  encodable characters:
  
  nova-manage cell create  --name=cell_02 --cell_type=child
  --username='the=user' --password='the=password' --hostname='hostname'
  --port=5672 --virtual_host=/ --woffset=1 --wscale=1
  
  - nova.cells table now contains a url-encoded transport_url:
  
- mysql> select * from cells;
- 
+-++++-+---+--+-+---+-+---+
- | created_at  | updated_at | deleted_at | id | api_url | 
weight_offset | weight_scale | name| is_parent | deleted | transport_url
 |
- 
+-++++-+---+--+-+---+-+---+
- | 2014-12-30 17:18:53 | NULL   | NULL   |  2 | NULL| 
1 |1 | cell_02 | 0 |   0 | 
rabbit://the%3Duser:the%3Dpassword@hostname:5672//  |
- 
+-++++-+---+--+-+---+-+---+
+ mysql> select * from cells \G
+ *** 1. row ***
+created_at: 2014-12-30 17:30:41
+updated_at: NULL
+deleted_at: NULL
+id: 3
+   api_url: NULL
+ weight_offset: 1
+  weight_scale: 1
+  name: cell_02
+ is_parent: 0
+   deleted: 0
+ transport_url: rabbit://the%3Duser:the%3Dpassword@hostname:5672//
+ 1 row in set (0.00 sec)
  
  - nova-cells uses the literal credentials 'the%3Duser' and
  'the%3Dpassword' to connect to RMQ, rather than the correct 'the=user'
  and 'the=password' credentials.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1406598

Title:
  nova-cells doesn't url decode transport_url

Status in OpenStack Compute (Nova):
  New

Bug description:
  When creating a cell using the nova-manage cell create command, the
  transport_url generated in the database is url-encoded (i.e. '=' is
  changed to '%3D', etc.)  That's probably the correct behavior, since
  the connection string is stored as a URL.

  However, nova-cells doesn't properly decode that string.  So for
  transport_url credentials that contain url-encodable characters, nova-
  cells uses the url encoded string, rather than the actual correct
  credentials.

  Steps to reproduce:

  - Create a cell using nova-manage with credentials containing url-
  encodable characters:

  nova-manage cell create  --name=cell_02 --cell_type=child
  --username='the=user' --password='the=password' --hostname='hostname'
  

[Yahoo-eng-team] [Bug 1404341] [NEW] Spelling error in l3_rpc_agent_api.py

2014-12-19 Thread Mike King
Public bug reported:

Log.debug statement in _notification_host() mis-spells Notify

** Affects: neutron
 Importance: Undecided
 Assignee: Mike King (mike-king)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Mike King (mike-king)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1404341

Title:
  Spelling error in l3_rpc_agent_api.py

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Log.debug statement in _notification_host() mis-spells Notify

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1404341/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1401095] [NEW] HA router can't be manually scheduled on L3 agent

2014-12-10 Thread Mike Kolesnik
Public bug reported:

HA routers get scheduled automatically to L3 agents; you can view the
router using l3-agent-list-hosting-router

$ neutron l3-agent-list-hosting-router harouter2
+--+--++---+
| id   | host | admin_state_up | alive |
+--+--++---+
| 9c34ec17-9045-4744-ae82-1f65f72ce3bd | net1 | True   | :-)   |
| cf758b1b-423e-44d9-ab0f-cf0d524b3dac | net2 | True   | :-)   |
| f2aac1e3-7a00-47c3-b6c9-2543d4a2ba9a | net3 | True   | :-)   |
+--+--++---+

You can remove it from an agent using l3-agent-router-remove, but when using 
l3-agent-router-add you get a 409:
$ neutron l3-agent-router-add bff55e85-65f6-4299-a3bb-f0e1c1ee2a05 harouter2
Conflict (HTTP 409) (Request-ID: req-22c1bb67-f0f8-4194-b863-93b8bb561c83)

The log says:
2014-12-10 07:47:41.036 INFO neutron.api.v2.resource 
[req-22c1bb67-f0f8-4194-b863-93b8bb561c83 admin 
f1bb80396ef34197b30117dfef45bea8] create failed (client error): The router 
72b9f897-b84d-4270-a645-af38fe3bd838 has been already hosted by the L3 Agent 
9c34ec17-9045-4744-ae82-1f65f72ce3bd.

** Affects: neutron
 Importance: Undecided
 Assignee: Yoni (yshafrir)
 Status: New


** Tags: ha l3agent router scheduling

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1401095

Title:
  HA router can't be manually scheduled on L3 agent

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  HA routers get scheduled automatically to L3 agents, you can view the
  router using l3-agent-list-hosting-router

  $ neutron l3-agent-list-hosting-router harouter2
  +--+--++---+
  | id   | host | admin_state_up | alive |
  +--+--++---+
  | 9c34ec17-9045-4744-ae82-1f65f72ce3bd | net1 | True   | :-)   |
  | cf758b1b-423e-44d9-ab0f-cf0d524b3dac | net2 | True   | :-)   |
  | f2aac1e3-7a00-47c3-b6c9-2543d4a2ba9a | net3 | True   | :-)   |
  +--+--++---+

  You can remove it from an agent using l3-agent-router-remove, but when using 
l3-agent-router-add you get a 409:
  $ neutron l3-agent-router-add bff55e85-65f6-4299-a3bb-f0e1c1ee2a05 harouter2
  Conflict (HTTP 409) (Request-ID: req-22c1bb67-f0f8-4194-b863-93b8bb561c83)

  The log says:
  2014-12-10 07:47:41.036 INFO neutron.api.v2.resource 
[req-22c1bb67-f0f8-4194-b863-93b8bb561c83 admin 
f1bb80396ef34197b30117dfef45bea8] create failed (client error): The router 
72b9f897-b84d-4270-a645-af38fe3bd838 has been already hosted by the L3 Agent 
9c34ec17-9045-4744-ae82-1f65f72ce3bd.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1401095/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1397796] [NEW] alembic v. 0.7.1 will support remove_fk and others not expected by heal_script

2014-11-30 Thread Mike Bayer
Public bug reported:

neutron/db/migration/alembic_migrations/heal_script.py seems to have a
hardcoded notion of what commands Alembic is prepared to pass within the
execute_alembic_command() call.   When Alembic 0.7.1 is released, the
tests in neutron.tests.unit.db.test_migration will fail as follows:

Traceback (most recent call last):
  File neutron/tests/unit/db/test_migration.py, line 194, in 
test_models_sync
self.db_sync(self.get_engine())
  File neutron/tests/unit/db/test_migration.py, line 136, in db_sync
migration.do_alembic_command(self.alembic_config, 'upgrade', 'head')
  File neutron/db/migration/cli.py, line 61, in do_alembic_command
getattr(alembic_command, cmd)(config, *args, **kwargs)
  File 
/var/jenkins/workspace/openstack_sqla_master/neutron/.tox/sqla_py27/lib/python2.7/site-packages/alembic/command.py,
 line 165, in upgrade
script.run_env()
  File 
/var/jenkins/workspace/openstack_sqla_master/neutron/.tox/sqla_py27/lib/python2.7/site-packages/alembic/script.py,
 line 382, in run_env
util.load_python_file(self.dir, 'env.py')
  File 
/var/jenkins/workspace/openstack_sqla_master/neutron/.tox/sqla_py27/lib/python2.7/site-packages/alembic/util.py,
 line 241, in load_python_file
module = load_module_py(module_id, path)
  File 
/var/jenkins/workspace/openstack_sqla_master/neutron/.tox/sqla_py27/lib/python2.7/site-packages/alembic/compat.py,
 line 79, in load_module_py
mod = imp.load_source(module_id, path, fp)
  File neutron/db/migration/alembic_migrations/env.py, line 109, in 
module
run_migrations_online()
  File neutron/db/migration/alembic_migrations/env.py, line 100, in 
run_migrations_online
context.run_migrations()
  File string, line 7, in run_migrations
  File 
/var/jenkins/workspace/openstack_sqla_master/neutron/.tox/sqla_py27/lib/python2.7/site-packages/alembic/environment.py,
 line 742, in run_migrations
self.get_context().run_migrations(**kw)
  File 
/var/jenkins/workspace/openstack_sqla_master/neutron/.tox/sqla_py27/lib/python2.7/site-packages/alembic/migration.py,
 line 305, in run_migrations
step.migration_fn(**kw)
  File 
/var/jenkins/workspace/openstack_sqla_master/neutron/neutron/db/migration/alembic_migrations/versions/1d6ee1ae5da5_db_healing.py,
 line 32, in upgrade
heal_script.heal()
  File neutron/db/migration/alembic_migrations/heal_script.py, line 81, 
in heal
execute_alembic_command(el)
  File neutron/db/migration/alembic_migrations/heal_script.py, line 92, 
in execute_alembic_command
METHODS[command[0]](*command[1:])
KeyError: 'remove_fk'


I'll send a review for the obvious fix though I have a suspicion there's
something more deliberate going on here, so consider this just a heads
up!
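
For context, a rough sketch of the dispatch pattern involved (not the real
heal_script code; the handler names are made up); the fix is essentially to
teach the table about the new commands, or to skip unknown ones gracefully:

    # Hypothetical stand-in for heal_script's command table.
    METHODS = {
        'add_column': lambda *args: None,
        'drop_column': lambda *args: None,
        # 'remove_fk' (new in Alembic 0.7.1) is missing, hence the KeyError.
    }

    def execute_alembic_command(command):
        handler = METHODS.get(command[0])
        if handler is None:
            # Unknown command emitted by a newer Alembic: skip (or log)
            # instead of blowing up with KeyError.
            return
        handler(*command[1:])

    execute_alembic_command(('remove_fk', 'fk_constraint_name'))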

** Affects: neutron
 Importance: Undecided
 Assignee: Mike Bayer (zzzeek)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1397796

Title:
  alembic v. 0.7.1 will support remove_fk and others not expected by
  heal_script

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  neutron/db/migration/alembic_migrations/heal_script.py seems to have a
  hardcoded notion of what commands Alembic is prepared to pass within
  the execute_alembic_command() call.   When Alembic 0.7.1 is released,
  the tests in neutron.tests.unit.db.test_migration will fail as
  follows:

  Traceback (most recent call last):
File neutron/tests/unit/db/test_migration.py, line 194, in 
test_models_sync
  self.db_sync(self.get_engine())
File neutron/tests/unit/db/test_migration.py, line 136, in db_sync
  migration.do_alembic_command(self.alembic_config, 'upgrade', 'head')
File neutron/db/migration/cli.py, line 61, in do_alembic_command
  getattr(alembic_command, cmd)(config, *args, **kwargs)
File 
/var/jenkins/workspace/openstack_sqla_master/neutron/.tox/sqla_py27/lib/python2.7/site-packages/alembic/command.py,
 line 165, in upgrade
  script.run_env()
File 
/var/jenkins/workspace/openstack_sqla_master/neutron/.tox/sqla_py27/lib/python2.7/site-packages/alembic/script.py,
 line 382, in run_env
  util.load_python_file(self.dir, 'env.py')
File 
/var/jenkins/workspace/openstack_sqla_master/neutron/.tox/sqla_py27/lib/python2.7/site-packages/alembic/util.py,
 line 241, in load_python_file
  module = load_module_py(module_id, path)
File 
/var/jenkins/workspace/openstack_sqla_master/neutron/.tox/sqla_py27/lib/python2.7/site-packages/alembic/compat.py,
 line 79, in load_module_py
  mod = imp.load_source(module_id, path, fp)
File neutron/db/migration/alembic_migrations/env.py, line 109, in 
module
  run_migrations_online()
File neutron

[Yahoo-eng-team] [Bug 1394026] [NEW] floatingip_agent_gateway port is not deleted on fip disassociate

2014-11-18 Thread Mike Smith
Public bug reported:

When the last FIP is disassociated on a node, the floating IP agent
gateway port should be deleted from the db.  The same thing should
happen when a nova VM is deleted on a host which was the last FIP
associated VM.  The delete VM path is currently working, but the
disassociate path is not.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-dvr-backlog

** Tags added: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1394026

Title:
  floatingip_agent_gateway port is not deleted on fip disassociate

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When the last FIP is disassociated on a node, the floating IP agent
  gateway port should be deleted from the db.  The same thing should
  happen when a nova VM is deleted on a host which was the last FIP
  associated VM.  The delete VM path is currently working, but the
  disassociate path is not.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1394026/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1394043] [NEW] KeyError: 'gw_port_host' seen for DVR router removal

2014-11-18 Thread Mike Smith
Public bug reported:

In some multi-node setups, a qrouter namespace might be hosted on a node
where only a dhcp port is hosted (no VMs, no SNAT).

When the router is removed from the db, the host with only the qrouter
and dhcp namespace will have the qrouter namespace remain.  Other hosts
with the same qrouter will remove the namespace.  The following KeyError
is seen on the host with the remaining namespace -

2014-11-18 17:18:43.334 ERROR neutron.agent.l3_agent [-] 'gw_port_host'
2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent Traceback (most recent 
call last):
2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent   File 
/opt/stack/neutron/neutron/common/utils.py, line 341, in call
2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent return func(*args, 
**kwargs)
2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent   File 
/opt/stack/neutron/neutron/agent/l3_agent.py, line 958, in process_router
2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent 
self.external_gateway_removed(ri, ri.ex_gw_port, interface_name)
2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent   File 
/opt/stack/neutron/neutron/agent/l3_agent.py, line 1429, in 
external_gateway_removed
2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent 
ri.router['gw_port_host'] == self.host):
2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent KeyError: 'gw_port_host'
2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent 
Traceback (most recent call last):
  File /usr/local/lib/python2.7/dist-packages/eventlet/greenpool.py, line 82, 
in _spawn_n_impl
func(*args, **kwargs)
  File /opt/stack/neutron/neutron/agent/l3_agent.py, line 1842, in 
_process_router_update
self._process_router_if_compatible(router)
  File /opt/stack/neutron/neutron/agent/l3_agent.py, line 1817, in 
_process_router_if_compatible
self.process_router(ri)
  File /opt/stack/neutron/neutron/common/utils.py, line 344, in call
self.logger(e)
  File /opt/stack/neutron/neutron/openstack/common/excutils.py, line 82, in 
__exit__
six.reraise(self.type_, self.value, self.tb)
  File /opt/stack/neutron/neutron/common/utils.py, line 341, in call
return func(*args, **kwargs)
  File /opt/stack/neutron/neutron/agent/l3_agent.py, line 958, in 
process_router
self.external_gateway_removed(ri, ri.ex_gw_port, interface_name)
  File /opt/stack/neutron/neutron/agent/l3_agent.py, line 1429, in 
external_gateway_removed
ri.router['gw_port_host'] == self.host):
KeyError: 'gw_port_host'

For the issue to be seen, the router in question needs to have the
router-gateway-set previously.
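
A minimal illustration of the defensive lookup the traceback points at, with
made-up names; the real fix belongs in external_gateway_removed(), but the
idea is simply to treat a missing 'gw_port_host' key as "not bound to this
host":

    # 'router' stands in for ri.router, the payload the agent received.
    router = {'id': '72b9f897-b84d-4270-a645-af38fe3bd838'}   # no 'gw_port_host'
    this_host = 'compute-1'

    # router['gw_port_host'] would raise KeyError here; .get() does not.
    if router.get('gw_port_host') == this_host:
        print('this host owns the gateway port; tear down SNAT state as well')
    else:
        print('no gateway binding for this host; only remove local state')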

** Affects: neutron
 Importance: Undecided
 Assignee: Mike Smith (michael-smith6)
 Status: New


** Tags: l3-dvr-backlog

** Changed in: neutron
 Assignee: (unassigned) => Mike Smith (michael-smith6)

** Tags added: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1394043

Title:
  KeyError: 'gw_port_host' seen for DVR router removal

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In some multi-node setups, a qrouter namespace might be hosted on a
  node where only a dhcp port is hosted (no VMs, no SNAT).

  When the router is removed from the db, the host with only the qrouter
  and dhcp namespace will have the qrouter namespace remain.  Other
  hosts with the same qrouter will remove the namespace.  The following
  KeyError is seen on the host with the remaining namespace -

  2014-11-18 17:18:43.334 ERROR neutron.agent.l3_agent [-] 'gw_port_host'
  2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent Traceback (most recent 
call last):
  2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent   File 
/opt/stack/neutron/neutron/common/utils.py, line 341, in call
  2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent return func(*args, 
**kwargs)
  2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent   File 
/opt/stack/neutron/neutron/agent/l3_agent.py, line 958, in process_router
  2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent 
self.external_gateway_removed(ri, ri.ex_gw_port, interface_name)
  2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent   File 
/opt/stack/neutron/neutron/agent/l3_agent.py, line 1429, in 
external_gateway_removed
  2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent 
ri.router['gw_port_host'] == self.host):
  2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent KeyError: 'gw_port_host'
  2014-11-18 17:18:43.334 TRACE neutron.agent.l3_agent 
  Traceback (most recent call last):
File /usr/local/lib/python2.7/dist-packages/eventlet/greenpool.py, line 
82, in _spawn_n_impl
  func(*args, **kwargs)
File /opt/stack/neutron/neutron/agent/l3_agent.py, line 1842, in 
_process_router_update
  self._process_router_if_compatible(router)
File /opt/stack/neutron/neutron/agent/l3_agent.py, line 1817, in 
_process_router_if_compatible

[Yahoo-eng-team] [Bug 1387311] [NEW] Unprocessable Entity error for large images on Ceph Swift store

2014-10-29 Thread Mike Dorman
Public bug reported:

There is an implementation difference between Ceph Swift and OS Swift in
how the ETag/checksum of a dynamic large object (DLO) manifest object is
verified.

OS Swift verifies it just like any other object, md5’ing the content of the 
object:
https://github.com/openstack/swift/blob/master/swift/obj/server.py#L439-L459

Ceph Swift actually does the full DLO checksum across all the component objects:
https://github.com/ceph/ceph/blob/master/src/rgw/rgw_op.cc#L1765-L1781

The Glance Swift store driver assumes the OS Swift behavior, and sends an ETag 
of md5() with the PUT request for the manifest object.  Technically, this is 
correct, since that object itself is a zero-byte object:
https://github.com/openstack/glance_store/blob/master/glance_store/_drivers/swift/store.py#L552

However, when using a Ceph Swift store, this results in a 422
Unprocessable Entity response from Swift, because the provided ETag
doesn't match the expected ETag for the DLO.

It would seem to make sense to just not send any ETag with the manifest
object PUT request.  It is not required by the API, and only marginally
improves the validation of the object.
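
A sketch of what the suggested change amounts to, using python-swiftclient
directly; the connection parameters, container and object names are
placeholders, and the real change would live in the glance_store Swift
driver:

    import swiftclient

    conn = swiftclient.client.Connection(
        authurl='https://keystone.example.com:5000/v2.0',
        user='glance', key='secret', tenant_name='service', auth_version='2')

    # Zero-byte DLO manifest: point at the segments via X-Object-Manifest and
    # simply omit the etag= argument, so neither OS Swift nor Ceph RGW has a
    # client-supplied checksum to reject with a 422.
    conn.put_object('glance', 'image-uuid', contents='',
                    headers={'X-Object-Manifest': 'glance/image-uuid-'})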

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1387311

Title:
  Unprocessable Entity error for large images on Ceph Swift store

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  There is an implementation difference between Ceph Swift and OS Swift
  in how the ETag/checksum of a dynamic large object (DLO) manifest
  object is verified.

  OS Swift verifies it just like any other object, md5’ing the content of the 
object:
  https://github.com/openstack/swift/blob/master/swift/obj/server.py#L439-L459

  Ceph Swift actually does the full DLO checksum across all the component 
objects:
  https://github.com/ceph/ceph/blob/master/src/rgw/rgw_op.cc#L1765-L1781

  The Glance Swift store driver assumes the OS Swift behavior, and sends an 
ETag of md5() with the PUT request for the manifest object.  Technically, 
this is correct, since that object itself is a zero-byte object:
  
https://github.com/openstack/glance_store/blob/master/glance_store/_drivers/swift/store.py#L552

  However, when using a Ceph Swift store, this results in a 422
  Unprocessable Entity response from Swift, because the provided ETag
  doesn't match the expected ETag for the DLO.

  It would seem to make sense to just not send any ETag with the
  manifest object PUT request.  It is not required by the API, and only
  marginally improves the validation of the object.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1387311/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1380823] [NEW] outerjoins used as a result of plugin architecture are inefficient

2014-10-13 Thread Mike Bayer
Public bug reported:

Hi there -

I'm posting this as a bug sort of as a means to locate who best to talk
about a. how critical these queries are and b. what other approaches
would be feasible (I'm zzzeek on IRC).

We're talking here about the plugin architecture in
neutron/db/common_db_mixin.py, where the register_model_query_hook()
method presents a way of applying modifiers to queries.This system
appears to be used by:  db/external_net_db.py, plugins/ml2/plugin.py,
db/portbindings_db.py, plugins/metaplugin/meta_neutron_plugin.py.

What the use of the hook has in common in these cases is that a LEFT
OUTER JOIN is applied to the Query early on, in anticipation of either
the filter_hook or result_filters being applied to the query, but only
*possibly*, and then even within those hooks as supplied, again only
*possibly*.   It's these two *possiblies* that leads to the use of
LEFT OUTER JOIN - this extra table is present in the query's FROM
clause, but if we decide we don't need to filter on it, it's OK!  it's
just a left outer join.  And even, in the case of external_net_db.py,
maybe we even add a criteria WHERE extra model id IS NULL, that is
doing a not contains off of this left outer join.

The result is that we can get a query like this:

SELECT a.* FROM a LEFT OUTER JOIN b ON a.id=b.aid WHERE b.id IS NOT
NULL

this can happen for example if using External_net_db_mixin, the
outerjoin to ExternalNetwork is created, _network_filter_hook applies
expr.or_(ExternalNetwork.network_id != expr.null()), and that's it.

The database will usually have a much easier time if this query is
expressed correctly:

   SELECT a.* FROM a INNER JOIN b ON a.id=b.aid
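
To make the difference concrete, here is a toy SQLAlchemy version of the two
query shapes, with A and B standing in for the real models (e.g. Network and
ExternalNetwork); illustrative only:

    from sqlalchemy import Column, ForeignKey, Integer, create_engine
    from sqlalchemy.orm import Session, declarative_base

    Base = declarative_base()

    class A(Base):
        __tablename__ = 'a'
        id = Column(Integer, primary_key=True)

    class B(Base):
        __tablename__ = 'b'
        id = Column(Integer, primary_key=True)
        aid = Column(Integer, ForeignKey('a.id'))

    session = Session(create_engine('sqlite://'))

    # What the hook-based plugins end up emitting:
    #   SELECT a.* FROM a LEFT OUTER JOIN b ON a.id = b.aid WHERE b.id IS NOT NULL
    q_outer = session.query(A).outerjoin(B, A.id == B.aid).filter(B.id != None)  # noqa: E711

    # The equivalent, cheaper form once we know the filter really applies:
    #   SELECT a.* FROM a JOIN b ON a.id = b.aid
    q_inner = session.query(A).join(B, A.id == B.aid)

    print(q_outer)
    print(q_inner)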


the reason this bugs me is because the SQL output is being compromised as a 
result of how the plugin system is organized here.   Preferable would be a 
system where the plugins are either organized into fewer functions that perform 
all the checking at once, or if the plugin system had more granularity to know 
that it needs to apply an optional JOIN or not.   

There's a lot of ways I could propose reorganizing this but I wanted to
talk to someone on IRC to make sure that no external projects are using
these hooks, and to get some other background.

Overall long term I seek to consolidate the use of model_query into
oslo.db, so I'm looking to take in all of its variants into a common
form.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1380823

Title:
  outerjoins used as a result of plugin architecture are inefficient

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Hi there -

  I'm posting this as a bug sort of as a means to locate who best to
  talk about a. how critical these queries are and b. what other
  approaches would be feasible (I'm zzzeek on IRC).

  We're talking here about the plugin architecture in
  neutron/db/common_db_mixin.py, where the register_model_query_hook()
  method presents a way of applying modifiers to queries.This system
  appears to be used by:  db/external_net_db.py, plugins/ml2/plugin.py,
  db/portbindings_db.py, plugins/metaplugin/meta_neutron_plugin.py.

  What the use of the hook has in common in these cases is that a LEFT
  OUTER JOIN is applied to the Query early on, in anticipation of either
  the filter_hook or result_filters being applied to the query, but only
  *possibly*, and then even within those hooks as supplied, again only
  *possibly*.   It's these two *possiblies* that leads to the use of
  LEFT OUTER JOIN - this extra table is present in the query's FROM
  clause, but if we decide we don't need to filter on it, it's OK!  it's
  just a left outer join.  And even, in the case of external_net_db.py,
  maybe we even add a criteria WHERE extra model id IS NULL, that is
  doing a not contains off of this left outer join.

  The result is that we can get a query like this:

  SELECT a.* FROM a LEFT OUTER JOIN b ON a.id=b.aid WHERE b.id IS
  NOT NULL

  this can happen for example if using External_net_db_mixin, the
  outerjoin to ExternalNetwork is created, _network_filter_hook applies
  expr.or_(ExternalNetwork.network_id != expr.null()), and that's it.

  The database will usually have a much easier time if this query is
  expressed correctly:

 SELECT a.* FROM a INNER JOIN b ON a.id=b.aid

  
  the reason this bugs me is because the SQL output is being compromised as a 
result of how the plugin system is organized here.   Preferable would be a 
system where the plugins are either organized into fewer functions that perform 
all the checking at once, or if the plugin system had more granularity to know 
that it needs to apply an optional JOIN or not.   

  There's a lot of ways I could propose reorganizing this but I wanted
  to talk to someone on IRC to make sure that no external 

[Yahoo-eng-team] [Bug 1370297] Re: volume/snapshot allows name with only white spaces

2014-10-06 Thread Mike Perez
** Changed in: cinder
 Assignee: Liyingjun (liyingjun) => (unassigned)

** Changed in: cinder
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1370297

Title:
  volume/snapshot allows name with only white spaces

Status in Cinder:
  Invalid
Status in OpenStack Dashboard (Horizon):
  Won't Fix

Bug description:
  When creating or editing a volume or snapshot, it allows a name consisting
  of only white space.

  How to reproduce:

  Just go to Project -> Volumes and create a volume whose name is only white
  space; the volume shows up in the volume table with an empty name.

  
  same for snapshot

  Expect:

  The form should not allow an empty name when creating/editing a volume or
  snapshot.

  
  This is split from https://bugs.launchpad.net/horizon/+bug/1357586 since 
volume and snapshot name is different from volume type name.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1370297/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1375467] [NEW] db deadlock on _instance_update()

2014-09-29 Thread Mike Bayer
Public bug reported:

continuing from the same pattern as that of
https://bugs.launchpad.net/nova/+bug/1370191, we are also observing
unhandled deadlocks on derivatives of _instance_update(), such as the
stacktrace below.  As _instance_update() is a point of transaction
demarcation based on its use of get_session(), the @_retry_on_deadlock
should be added to this method.

Traceback (most recent call last):
File /usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py, line 
133, in _dispatch_and_reply\
incoming.message))\
File /usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py, line 
176, in _dispatch\
return self._do_dispatch(endpoint, method, ctxt, args)\
File /usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py, line 
122, in _do_dispatch\
result = getattr(endpoint, method)(ctxt, **new_args)\
File /usr/lib/python2.7/site-packages/nova/conductor/manager.py, line 887, in 
instance_update\
service)\
File /usr/lib/python2.7/site-packages/oslo/messaging/rpc/server.py, line 139, 
in inner\
return func(*args, **kwargs)\
File /usr/lib/python2.7/site-packages/nova/conductor/manager.py, line 130, in 
instance_update\
context, instance_uuid, updates)\
File /usr/lib/python2.7/site-packages/nova/db/api.py, line 742, in 
instance_update_and_get_original\
 columns_to_join=columns_to_join)\
File /usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py, line 164, in 
wrapper\
return f(*args, **kwargs)\
File /usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py, line 2208, 
in instance_update_and_get_original\
 columns_to_join=columns_to_join)\
File /usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py, line 2299, 
in _instance_update\
session.add(instance_ref)\
File /usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py, line 447, 
in __exit__\
self.rollback()\
File /usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py, line 
58, in __exit__\
compat.reraise(exc_type, exc_value, exc_tb)\
File /usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py, line 444, 
in __exit__\
self.commit()\
File 
/usr/lib/python2.7/site-packages/nova/openstack/common/db/sqlalchemy/sessi 
on.py, line 443, in _wrap\
_raise_if_deadlock_error(e, self.bind.dialect.name)\
File 
/usr/lib/python2.7/site-packages/nova/openstack/common/db/sqlalchemy/sessi 
on.py, line 427, in _raise_if_deadlock_error\
raise exception.DBDeadlock(operational_error)\
DBDeadlock: (OperationalError) (1213, \'Deadlock found when trying to get lock; 
try restarting transaction\') None None\
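
For reference, a bare-bones sketch of what a retry-on-deadlock decorator
does, along the lines of the @_retry_on_deadlock mentioned above; the names
and retry policy here are illustrative, not the nova/oslo.db implementation:

    import functools
    import time

    class DBDeadlock(Exception):
        """Stand-in for the real DBDeadlock exception."""

    def retry_on_deadlock(func, _retries=5, _delay=0.5):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(_retries):
                try:
                    return func(*args, **kwargs)
                except DBDeadlock:
                    if attempt == _retries - 1:
                        raise
                    time.sleep(_delay)
        return wrapper

    @retry_on_deadlock
    def _instance_update(context, instance_uuid, values):
        # ... open a session and do the update; may raise DBDeadlock when
        # MySQL reports "Deadlock found when trying to get lock" ...
        pass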

** Affects: nova
 Importance: Undecided
 Assignee: Mike Bayer (zzzeek)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1375467

Title:
  db deadlock on _instance_update()

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  continuing from the same pattern as that of
  https://bugs.launchpad.net/nova/+bug/1370191, we are also observing
  unhandled deadlocks on derivatives of _instance_update(), such as the
  stacktrace below.  As _instance_update() is a point of transaction
  demarcation based on its use of get_session(), the @_retry_on_deadlock
  should be added to this method.

  Traceback (most recent call last):
  File /usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py, 
line 133, in _dispatch_and_reply\
  incoming.message))\
  File /usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py, 
line 176, in _dispatch\
  return self._do_dispatch(endpoint, method, ctxt, args)\
  File /usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py, 
line 122, in _do_dispatch\
  result = getattr(endpoint, method)(ctxt, **new_args)\
  File /usr/lib/python2.7/site-packages/nova/conductor/manager.py, line 887, 
in instance_update\
  service)\
  File /usr/lib/python2.7/site-packages/oslo/messaging/rpc/server.py, line 
139, in inner\
  return func(*args, **kwargs)\
  File /usr/lib/python2.7/site-packages/nova/conductor/manager.py, line 130, 
in instance_update\
  context, instance_uuid, updates)\
  File /usr/lib/python2.7/site-packages/nova/db/api.py, line 742, in 
instance_update_and_get_original\
   columns_to_join=columns_to_join)\
  File /usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py, line 164, 
in wrapper\
  return f(*args, **kwargs)\
  File /usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py, line 2208, 
in instance_update_and_get_original\
   columns_to_join=columns_to_join)\
  File /usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py, line 2299, 
in _instance_update\
  session.add(instance_ref)\
  File /usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py, line 
447, in __exit__\
  self.rollback()\
  File /usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py, 
line 58, in __exit__

[Yahoo-eng-team] [Bug 1373478] [NEW] filter scheduler makes invalid assumption of monotonicity

2014-09-24 Thread Mike Spreitzer
Public bug reported:

The current filter scheduler handles the scheduling of a homogeneous
batch of N instances with a loop that assumes that a host ruled out in
one iteration can not be desirable in a later iteration --- but that is
a false assumption.

Consider the case of a filter whose purpose is to achieve balance across
some sort of areas.  These might be AZs, host aggregates, racks,
whatever.  Consider a request to schedule 4 identical instances; suppose
that there are two hosts, one in each of two different areas, initially
hosting nothing.  For the first iteration, both hosts pass this filter.
One gets picked, call it host A.  On the second iteration, only the
other host (call it B) passes the filter.  So the second instance goes
on B.  On the third iteration, both hosts would pass the filter but the
filter is only asked about host B.  So the third instance goes on B.  On
the fourth iteration, host B is unacceptable but that is the only host
about which the filter is asked.  So the scheduling fails with a
complaint about no acceptable host found.
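
The failure mode is easy to reproduce with a toy loop that mimics the
scheduler: only hosts that survived the previous pass are re-filtered, so
the "balance" filter never gets to reconsider host A.  Names below are made
up and this is only an illustration of the loop assumption:

    areas = {'hostA': 'az1', 'hostB': 'az2'}
    placements = []                       # hosts chosen so far

    def load(area):
        return sum(1 for h in placements if areas[h] == area)

    def balance_filter(host):
        # Pass only hosts sitting in the least-loaded area.
        return load(areas[host]) == min(load(a) for a in set(areas.values()))

    candidates = list(areas)              # the first pass sees every host
    for i in range(4):                    # schedule 4 identical instances
        candidates = [h for h in candidates if balance_filter(h)]
        if not candidates:
            print('instance %d: no acceptable host found' % (i + 1))
            break
        chosen = candidates[0]
        placements.append(chosen)
        print('instance %d -> %s' % (i + 1, chosen))
    # Prints hostA, hostB, hostB, then fails on the 4th instance even though
    # hostA is still a perfectly good choice.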

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1373478

Title:
  filter scheduler makes invalid assumption of monotonicity

Status in OpenStack Compute (Nova):
  New

Bug description:
  The current filter scheduler handles the scheduling of a homogeneous
  batch of N instances with a loop that assumes that a host ruled out in
  one iteration can not be desirable in a later iteration --- but that
  is a false assumption.

  Consider the case of a filter whose purpose is to achieve balance
  across some sort of areas.  These might be AZs, host aggregates,
  racks, whatever.  Consider a request to schedule 4 identical
  instances; suppose that there are two hosts, one in each of two
  different areas, initially hosting nothing.  For the first iteration,
  both hosts pass this filter.  One gets picked, call it host A.  On the
  second iteration, only the other host (call it B) passes the filter.
  So the second instance goes on B.  On the third iteration, both hosts
  would pass the filter but the filter is only asked about host B.  So
  the third instance goes on B.  On the fourth iteration, host B is
  unacceptable but that is the only host about which the filter is
  asked.  So the scheduling fails with a complaint about no acceptable
  host found.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1373478/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373524] [NEW] dvr snat delete binding changed

2014-09-24 Thread Mike Smith
Public bug reported:

Recent changes in the l3 plugin related to dvr snat binding have changed
how the binding is sent to the l3-agent.  This patch changes the
l3-agent to properly handle the external gateway clear/delete cases.
SNAT namespaces will not be deleted in all cases without this fix.

** Affects: neutron
 Importance: Undecided
 Assignee: Mike Smith (michael-smith6)
 Status: New


** Tags: l3-dvr-backlog

** Tags added: l3-dvr-backlog

** Changed in: neutron
 Assignee: (unassigned) => Mike Smith (michael-smith6)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1373524

Title:
  dvr snat delete binding changed

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Recent changes in the l3 plugin related to dvr snat binding have
  changed how the binding is sent to the l3-agent.  This patch changes
  the l3-agent to properly handle the external gateway clear/delete
  cases.  SNAT namespaces will not be deleted in all cases without this
  fix.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1373524/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1371118] [NEW] Image file stays in store if image has been deleted during upload

2014-09-18 Thread Mike Fedosin
Public bug reported:

When I create a new task in v2 to upload an image, it creates the image
record in db, sets status to saving and then begins the uploading.

If the image is deleted by an appropriate API call while its content is
still being uploaded, an exception is raised and it is not handled in
the API code. As a result, the uploaded image file stays
in the store and clogs it.

File /opt/stack/glance/glance/common/scripts/image_import/main.py, line 62, 
in _execute 
uri)
File /opt/stack/glance/glance/common/scripts/image_import/main.py, line 95, 
in import_image
new_image = image_repo.get(image_id)
File /opt/stack/glance/glance/api/authorization.py, line 106, in get
image = self.image_repo.get(image_id)
File /opt/stack/glance/glance/domain/proxy.py, line 86, in get
return self.helper.proxy(self.base.get(item_id))
File /opt/stack/glance/glance/api/policy.py, line 179, in get
return super(ImageRepoProxy, self).get(image_id)
File /opt/stack/glance/glance/domain/proxy.py, line 86, in get
return self.helper.proxy(self.base.get(item_id))
File /opt/stack/glance/glance/domain/proxy.py, line 86, in get
return self.helper.proxy(self.base.get(item_id))
File /opt/stack/glance/glance/domain/proxy.py, line 86, in get 
return self.helper.proxy(self.base.get(item_id))
File /opt/stack/glance/glance/db/__init__.py, line 72, in get raise 
exception.NotFound(msg)
NotFound: No image found with ID e2285448-a56f-45b1-9e6e-216d2b304967

This bug is very similar to
https://bugs.launchpad.net/glance/+bug/1188532, but it relates to task
mechanism in v2.
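
A sketch of the kind of guard the import flow needs, with hypothetical
helpers (store_add, store_delete, image_repo) standing in for the real
glance_store and repository calls; the point is only the ordering of the
existence check and the cleanup:

    class NotFound(Exception):
        """Stand-in for glance.common.exception.NotFound."""

    def import_image(image_repo, image_id, data, store_add, store_delete):
        location = store_add(image_id, data)      # bytes land in the backend
        try:
            image = image_repo.get(image_id)      # raises NotFound if deleted
        except NotFound:
            # The image was deleted while its data was uploading: remove the
            # orphaned object instead of letting it clog the store.
            store_delete(location)
            raise
        image.locations = [{'url': location, 'metadata': {}}]
        image_repo.save(image)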

** Affects: glance
 Importance: Undecided
 Assignee: Mike Fedosin (mfedosin)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Mike Fedosin (mfedosin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1371118

Title:
  Image file stays in store if image has been deleted during upload

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  When I create a new task in v2 to upload an image, it creates the
  image record in db, sets status to saving and then begins the
  uploading.

  If the image is deleted by an appropriate API call while its content is
  still being uploaded, an exception is raised and it is not handled in
  the API code. As a result, the uploaded image file
  stays in the store and clogs it.

  File /opt/stack/glance/glance/common/scripts/image_import/main.py, line 62, 
in _execute 
  uri)
  File /opt/stack/glance/glance/common/scripts/image_import/main.py, line 95, 
in import_image
  new_image = image_repo.get(image_id)
  File /opt/stack/glance/glance/api/authorization.py, line 106, in get
  image = self.image_repo.get(image_id)
  File /opt/stack/glance/glance/domain/proxy.py, line 86, in get
  return self.helper.proxy(self.base.get(item_id))
  File /opt/stack/glance/glance/api/policy.py, line 179, in get
  return super(ImageRepoProxy, self).get(image_id)
  File /opt/stack/glance/glance/domain/proxy.py, line 86, in get
  return self.helper.proxy(self.base.get(item_id))
  File /opt/stack/glance/glance/domain/proxy.py, line 86, in get
  return self.helper.proxy(self.base.get(item_id))
  File /opt/stack/glance/glance/domain/proxy.py, line 86, in get 
  return self.helper.proxy(self.base.get(item_id))
  File /opt/stack/glance/glance/db/__init__.py, line 72, in get raise 
exception.NotFound(msg)
  NotFound: No image found with ID e2285448-a56f-45b1-9e6e-216d2b304967

  This bug is very similar to
  https://bugs.launchpad.net/glance/+bug/1188532, but it relates to task
  mechanism in v2.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1371118/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370492] [NEW] calling curl HEAD ops time out on /v3/auth/tokens

2014-09-17 Thread Mike Abrams
Public bug reported:

the following command works --
'curl -H x-auth-token:$TOKEN -H x-subject-token:$TOKEN 
http://localhost:35357/v3/auth/tokens'

but this command does not work.  it does not return (hangs indefinitely) --
'curl -X HEAD -H x-auth-token:$TOKEN -H x-subject-token:$TOKEN 
http://localhost:35357/v3/auth/tokens'
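
The hang is most likely curl behaviour rather than Keystone: '-X HEAD' only
overrides the method string, so curl still waits for a response body that
never arrives, whereas 'curl -I' (or '--head') is the supported way to send
a HEAD request. For comparison, a plain-Python version of the same request
(token and endpoint below are placeholders):

    import http.client

    token = 'REPLACE_WITH_A_VALID_TOKEN'
    conn = http.client.HTTPConnection('localhost', 35357)
    conn.request('HEAD', '/v3/auth/tokens',
                 headers={'X-Auth-Token': token, 'X-Subject-Token': token})
    resp = conn.getresponse()
    print(resp.status, resp.reason)       # e.g. 200 OK for a valid token
    conn.close()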

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1370492

Title:
  calling curl HEAD ops time out on /v3/auth/tokens

Status in OpenStack Identity (Keystone):
  New

Bug description:
  the following command works --
  'curl -H x-auth-token:$TOKEN -H x-subject-token:$TOKEN 
http://localhost:35357/v3/auth/tokens'

  but this command does not work.  it does not return (hangs indefinitely) --
  'curl -X HEAD -H x-auth-token:$TOKEN -H x-subject-token:$TOKEN 
http://localhost:35357/v3/auth/tokens'

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1370492/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1369721] [NEW] manually moving dvr-snat router fails

2014-09-15 Thread Mike Smith
Public bug reported:

An admin should be able to manually move the snat router functionality
from one dvr_snat node to another for DVR routers.  The commands
neutron l3-agent-router-delete and neutron l3-agent-router-add
should be used to manually move or reschedule the snat namespace from
one dvr_snat configured node to another.

Currently the old agent does remove the namespace and the l3-agent-
list-hosting-router command shows the agent missing, but the following
error is returned when the neutron l3-agent-router-add command is used
-

Not Found (HTTP 404) (Request-ID: req-1f2681ba-1d0c-47fe-8d6a-
aea80086ed29)

** Affects: neutron
 Importance: Undecided
 Assignee: Mike Smith (michael-smith6)
 Status: New


** Tags: l3-dvr-backlog

** Tags added: l3-dvr-backlog

** Changed in: neutron
 Assignee: (unassigned) => Mike Smith (michael-smith6)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1369721

Title:
  manually moving dvr-snat router fails

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  An admin should be able to manually move the snat router
  functionality from one dvr_snat node to another for DVR routers.  The
  commands neutron l3-agent-router-delete and neutron l3-agent-
  router-add should be used to manually move or reschedule the snat
  namespace from one dvr_snat configured node to another.

  Currently the old agent does remove the namespace and the l3-agent-
  list-hosting-router command shows the agent missing, but the
  following error is returned when the neutron l3-agent-router-add
  command is used -

  Not Found (HTTP 404) (Request-ID: req-1f2681ba-1d0c-47fe-8d6a-
  aea80086ed29)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1369721/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368910] Re: intersphinx requires network access which sometimes fails

2014-09-12 Thread Mike Perez
** Also affects: cinder
   Importance: Undecided
   Status: New

** Changed in: cinder
 Assignee: (unassigned) => Andreas Jaeger (jaegerandi)

** Changed in: cinder
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368910

Title:
  intersphinx requires network access  which sometimes fails

Status in Cinder:
  In Progress
Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  The intersphinx module requires internet access, and periodically
  causes docs jobs to fail.

  This module also prevents docs from being built without internet
  access.

  Since we don't actually use intersphinx for much (if anything), lets
  just remove it.
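
The change is essentially a one-liner in each project's doc conf.py; the
extension list below is illustrative, not the exact nova/cinder
configuration:

    # doc/source/conf.py (illustrative)
    extensions = [
        'sphinx.ext.autodoc',
        # 'sphinx.ext.intersphinx',  # removed: needs network access to fetch
        #                            # remote object inventories at build time
        'oslosphinx',
    ]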

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1368910/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1369012] [NEW] FIP namespace not created for dvr

2014-09-12 Thread Mike Smith
Public bug reported:

There has been a regression in functionality recently.  When a FIP
namespace is scheduled to a dvr node, the gw_port_host should contain
the host binding where the FIP namespace should be hosted.  The
gw_port_host field is currently missing when the router info is sent to
a dvr node with a ex_gw_port.

** Affects: neutron
 Importance: Undecided
 Assignee: Mike Smith (michael-smith6)
 Status: New


** Tags: l3-dvr-backlog

** Tags added: l3-dvr-backlog

** Changed in: neutron
 Assignee: (unassigned) => Mike Smith (michael-smith6)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1369012

Title:
  FIP namespace not created for dvr

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  There has been a regression in functionality recently.  When a FIP
  namespace is scheduled to a dvr node, the gw_port_host should contain
  the host binding where the FIP namespace should be hosted.  The
  gw_port_host field is currently missing when the router info is sent
  to a dvr node with a ex_gw_port.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1369012/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

