[Yahoo-eng-team] [Bug 1929469] [NEW] Regression Xenial/Queens: caused by d/p/CVE-2020-29565.patch

2021-05-24 Thread Jorge Niedbalski
Public bug reported:

[Environment]

Xenial/Queens
Horizon >= 13.0.3

[Description]

The horizon (3:13.0.3-0ubuntu2) package introduced the CVE-2020-29565 patch, which
breaks Xenial/Queens clouds. The reason is that the allowed_hosts argument to
is_safe_url() was introduced in Django 1.11
(https://github.com/django/django/commit/f227b8d15d9d0e0c50eb6459cf4556bccc3fae53),
but Xenial ships Django 1.8.7.

The regression is introduced by patch
debian/patches/CVE-2020-29565.patch.
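
For illustration only, a hedged sketch of the signature mismatch (the wrapper below is hypothetical, not the Horizon patch): Django 1.8's django.utils.http.is_safe_url() only accepts host=..., while the backported CVE fix passes allowed_hosts=..., which exists only from Django 1.11 onwards.

    import django
    from django.utils.http import is_safe_url

    def url_is_safe(request, url):
        if django.VERSION >= (1, 11):
            # Django >= 1.11: allowed_hosts is an iterable of allowed hosts.
            return is_safe_url(url, allowed_hosts=[request.get_host()])
        # Django 1.8.x (Xenial): only the single-host form exists.
        return is_safe_url(url, host=request.get_host())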

Operations such as associating a floating IP via the dashboard fail with
the following traceback:

[Thu May 06 20:28:40.715395 2021] [wsgi:error] [pid 227689:tid 139873006274304] Internal Server Error: /project/floating_ips/associate/
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/django/core/handlers/base.py", line 132, in get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "/usr/share/openstack-dashboard/horizon/decorators.py", line 36, in dec
    return view_func(request, *args, **kwargs)
  File "/usr/share/openstack-dashboard/horizon/decorators.py", line 52, in dec
    return view_func(request, *args, **kwargs)
  File "/usr/share/openstack-dashboard/horizon/decorators.py", line 36, in dec
    return view_func(request, *args, **kwargs)
  File "/usr/share/openstack-dashboard/horizon/decorators.py", line 113, in dec
    return view_func(request, *args, **kwargs)
  File "/usr/share/openstack-dashboard/horizon/decorators.py", line 84, in dec
    return view_func(request, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/django/views/generic/base.py", line 71, in view
    return self.dispatch(request, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/django/views/generic/base.py", line 89, in dispatch
    return handler(request, *args, **kwargs)
  File "/usr/share/openstack-dashboard/horizon/workflows/views.py", line 155, in get
    context = self.get_context_data(**kwargs)
  File "/usr/share/openstack-dashboard/horizon/workflows/views.py", line 101, in get_context_data
    allowed_hosts=[self.request.get_host()]):
TypeError: is_safe_url() got an unexpected keyword argument 'allowed_hosts'

** Affects: horizon
 Importance: Undecided
 Status: New

[Yahoo-eng-team] [Bug 1751923] Re: [SRU]_heal_instance_info_cache periodic task bases on port list from nova db, not from neutron server

2021-05-17 Thread Jorge Niedbalski
** Changed in: cloud-archive/rocky
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1751923

Title:
  [SRU]_heal_instance_info_cache periodic task bases on port list from
  nova db, not from neutron server

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive queens series:
  In Progress
Status in Ubuntu Cloud Archive rocky series:
  Won't Fix
Status in Ubuntu Cloud Archive stein series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Bionic:
  Confirmed
Status in nova source package in Disco:
  Fix Released

Bug description:
  [Impact]

  * During the periodic task _heal_instance_info_cache, the instance_info_caches are not updated using instance port_ids taken from neutron, but from the nova db.
  * This causes existing VMs to lose their network interfaces after reboot.

  [Test Plan]

  * This bug is reproducible on Bionic/Queens clouds.

  1) Deploy the following Juju bundle: https://paste.ubuntu.com/p/HgsqZfsDGh/
  2) Run the following script: https://paste.ubuntu.com/p/DrFcDXZGSt/
  3) If the script finishes with "Port not found", the bug is still present.

  [Where problems could occur]

  *** Check the other info section ***

  
  [Other Info]

  How it looks now?
  =

  _heal_instance_info_cache during crontask:

  
https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/compute/manager.py#L6525

  is using network_api to get instance_nw_info (instance_info_caches):

      try:
          # Call to network API to get instance info.. this will
          # force an update to the instance's info_cache
          self.network_api.get_instance_nw_info(context, instance)

  self.network_api.get_instance_nw_info() is listed below:

  
https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/network/neutronv2/api.py#L1377

  and it uses _build_network_info_model() without networks and port_ids
  parameters (because we're not adding any new interface to instance):

  
https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/network/neutronv2/api.py#L2356

  Next: _gather_port_ids_and_networks() generates the list of instance
  networks and port_ids:

      networks, port_ids = self._gather_port_ids_and_networks(
          context, instance, networks, port_ids, client)

  
https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/network/neutronv2/api.py#L2389-L2390

  
https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/network/neutronv2/api.py#L1393

  As we can see, _gather_port_ids_and_networks() takes the port list
  from the DB:

  
https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/objects/instance.py#L1173-L1176

  And that's it. When we lose a port, it's not possible to add it again with this periodic task.
  The only way is to clear the device_id field on the neutron port object and re-attach the interface using `nova interface-attach`.
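
  For illustration only, a hedged sketch of that manual workaround using python-neutronclient and python-novaclient (the auth options, port and server IDs below are placeholders, not values from this bug):

      from keystoneauth1 import loading, session
      from neutronclient.v2_0 import client as neutron_client
      from novaclient import client as nova_client

      # Hypothetical credentials; replace with your cloud's values.
      loader = loading.get_plugin_loader('password')
      auth = loader.load_from_options(auth_url='http://keystone:5000/v3',
                                      username='admin', password='secret',
                                      project_name='admin',
                                      user_domain_name='Default',
                                      project_domain_name='Default')
      sess = session.Session(auth=auth)
      neutron = neutron_client.Client(session=sess)
      nova = nova_client.Client('2.1', session=sess)

      PORT_ID = 'lost-port-uuid'       # placeholder
      SERVER_ID = 'instance-uuid'      # placeholder

      # Clear the stale association so the port can be re-used...
      neutron.update_port(PORT_ID, {'port': {'device_id': ''}})
      # ...then re-attach it: the API equivalent of `nova interface-attach`.
      nova.servers.interface_attach(SERVER_ID, port_id=PORT_ID,
                                    net_id=None, fixed_ip=None)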

  When the interface is missing and there is no port configured on the
  compute host (for example after a compute reboot), the interface is not
  added to the instance and, from neutron's point of view, the port state is DOWN.

  When the interface is missing in the cache and we hard-reboot the instance,
  it is not added as a tap interface in the XML file, so we don't have the
  network on the host.

  Steps to reproduce
  ==
  1. Spawn devstack
  2. Spawn VM inside devstack with multiple ports (for example also from 2 
different networks)
  3. Update the DB row, drop one interface from interfaces_list
  4. Hard-Reboot the instance
  5. See that nova list shows instance without one address, but nova 
interface-list shows all addresses
  6. See that one port is missing in instance xml files
  7. In theory _heal_instance_info_cache should fix these things, but it relies on memory, not on a fresh list of instance ports taken from neutron.

  Reproduced Example
  ==
  1. Spawn VM with 1 private network port
  nova boot --flavor m1.small --image cirros-0.3.5-x86_64-disk --nic 
net-name=private  test-2
  2. Attach ports to have 2 private and 2 public interfaces
  nova list:
  | a64ed18d-9868-4bf0-90d3-d710d278922d | test-2 | ACTIVE | -  | 
Running 

[Yahoo-eng-team] [Bug 1751923] Re: _heal_instance_info_cache periodic task bases on port list from nova db, not from neutron server

2021-05-15 Thread Jorge Niedbalski
** Changed in: nova (Ubuntu)
   Status: Confirmed => Fix Released

** Changed in: cloud-archive/queens
   Status: New => In Progress

** Changed in: cloud-archive/queens
 Assignee: (unassigned) => Jorge Niedbalski (niedbalski)

** Changed in: nova (Ubuntu Bionic)
 Assignee: (unassigned) => Jorge Niedbalski (niedbalski)

** Summary changed:

- _heal_instance_info_cache periodic task bases on port list from nova db, not 
from neutron server
+ [SRU]_heal_instance_info_cache periodic task bases on port list from nova db, 
not from neutron server

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1751923

Title:
  [SRU]_heal_instance_info_cache periodic task bases on port list from
  nova db, not from neutron server

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive queens series:
  In Progress
Status in Ubuntu Cloud Archive rocky series:
  In Progress
Status in Ubuntu Cloud Archive stein series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Bionic:
  Confirmed
Status in nova source package in Disco:
  Fix Released

Bug description:
  Description
  ===

  During the periodic task _heal_instance_info_cache, the
  instance_info_caches are not updated using instance port_ids taken
  from neutron, but from the nova db.

  Sometimes, perhaps because of some race condition, it's possible to
  lose some ports from instance_info_caches. The periodic task
  _heal_instance_info_cache should clean this up (add missing records),
  but in fact it's not working this way.

  How it looks now?
  =

  _heal_instance_info_cache during crontask:

  
https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/compute/manager.py#L6525

  is using network_api to get instance_nw_info (instance_info_caches):

      try:
          # Call to network API to get instance info.. this will
          # force an update to the instance's info_cache
          self.network_api.get_instance_nw_info(context, instance)

  self.network_api.get_instance_nw_info() is listed below:

  
https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/network/neutronv2/api.py#L1377

  and it uses _build_network_info_model() without networks and port_ids
  parameters (because we're not adding any new interface to instance):

  
https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/network/neutronv2/api.py#L2356

  Next: _gather_port_ids_and_networks() generates the list of instance
  networks and port_ids:

      networks, port_ids = self._gather_port_ids_and_networks(
          context, instance, networks, port_ids, client)

  
https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/network/neutronv2/api.py#L2389-L2390

  
https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/network/neutronv2/api.py#L1393

  As we can see, _gather_port_ids_and_networks() takes the port list
  from the DB:

  
https://github.com/openstack/nova/blob/ef4000a0d326deb004843ee51d18030224c5630f/nova/objects/instance.py#L1173-L1176

  And that's it. When we lose a port, it's not possible to add it again with this periodic task.
  The only way is to clear the device_id field on the neutron port object and re-attach the interface using `nova interface-attach`.

  When the interface is missing and there is no port configured on the
  compute host (for example after a compute reboot), the interface is not
  added to the instance and, from neutron's point of view, the port state is DOWN.

  When the interface is missing in the cache and we hard-reboot the instance,
  it is not added as a tap interface in the XML file, so we don't have the
  network on the host.

  Steps to reproduce
  ==
  1. Spawn devstack
  2. Spawn VM inside devstack with multiple ports (for example also from 2 
different networks)
  3. Update the DB row, drop one interface from interfaces_list
  4. Hard-Reboot the instance
  5. See that nova list shows instance without one address, but nova 
interface-list shows all addresses
  6. See that one port is missing in instance xml files
  7. In theory _heal_instance_info_cache should fix these things, but it relies on memory, not on a fresh list of instance ports taken from neutron.

  Reproduced Example
  ==
  1. Spawn VM with 1 private network port
  nova boot --flavor m1.small --image cirros-0.3.5-x86_64-disk --nic 
net-name=private  test-2
  2. Attach ports to have 2 private and 2 public interfaces
  nova list:
  | a64ed18d-9868-4bf0-90d3-d710d278922d | test-2 | ACTIVE | -  | 
Running | public=2001:db8::e, 172.24.4.15, 2001:db8::c, 172.24.4.16; 
private=fdda:5d77:e18e:0:f816:3eff:fee8:, 10.0.0.3, 
fdda

[Yahoo-eng-team] [Bug 1840844] Re: user with admin role gets logged out when trying to list images

2020-04-27 Thread Jorge Niedbalski
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: horizon (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: horizon (Ubuntu Eoan)
   Importance: Undecided
   Status: New

** Also affects: horizon (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Also affects: horizon (Ubuntu Groovy)
   Importance: Undecided
   Status: New

** Also affects: horizon (Ubuntu Focal)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1840844

Title:
  user with admin role gets logged out when trying to list images

Status in Ubuntu Cloud Archive:
  New
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in horizon package in Ubuntu:
  New
Status in horizon source package in Bionic:
  New
Status in horizon source package in Eoan:
  New
Status in horizon source package in Focal:
  New
Status in horizon source package in Groovy:
  New

Bug description:
  When the admin user tries to access Project -> Compute -> Images, and the
  user fails the identity:get_project policy check, the user will get logged
  out.

  The code that fails is in
  openstack_dashboard/static/app/core/images/images.module.js:

    .tableColumns
    .append({
      id: 'owner',
      priority: 1,
      filters: [$memoize(keystone.getProjectName)],
      policies: [
        {rules: [['identity', 'identity:get_project']]}
      ]
    })

  This didn't happen with default Horizon. In our production cloud
  environment, the keystone policy is "identity:get_project":
  "rule:cloud_admin or rule:admin_and_matching_target_project_domain_id
  or project_id:%(target.project.id)s". If the user is not a cloud_admin,
  the admin user of a project needs to be a member of the domain to
  satisfy the rule.

  The problem here is that the admin user should not get logged out.
  It is probably caused by horizon/static/framework/framework.module.js:

    if (error.status === 403) {
      var msg2 = gettext('Forbidden. Redirecting to login');
      handleRedirectMessage(msg2, $rootScope, $window, frameworkEvents, toastService);
    }

  Some log info from keystone:

  19389 (oslo_policy._cache_handler): 2019-08-20 02:07:25,856 DEBUG _cache_handler read_cached_file Reloading cached file /etc/keystone/policy.json
  19389 (oslo_policy.policy): 2019-08-20 02:07:26,010 DEBUG policy _load_policy_file Reloaded policy file: /etc/keystone/policy.json
  19389 (keystone.common.wsgi): 2019-08-20 02:07:26,019 WARNING wsgi __call__ You are not authorized to perform the requested action: identity:get_project.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1840844/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1841700] Re: instance ingress bandwidth limiting doesn't works in ocata.

2019-09-02 Thread Jorge Niedbalski
** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Changed in: neutron (Ubuntu)
   Status: New => Fix Released

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: Invalid => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1841700

Title:
  instance ingress bandwidth limiting doesn't works in ocata.

Status in Ubuntu Cloud Archive:
  New
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Xenial:
  New

Bug description:
  [Environment]

  Xenial-Ocata deployment

  [Description]

  The instance ingress bandwidth limit implementation was targeted for
  Ocata [0], but the full implementation ingress/egress was done during
  the pike [1] cycle.

  However, it is not documented or made explicit that the ingress
  direction isn't supported in Ocata, which causes the following
  exception when --ingress is specified.

  $ openstack network qos rule create --type bandwidth-limit --max-kbps 300 --max-burst-kbits 300 --ingress bw-limiter
  Failed to create Network QoS rule: BadRequestException: 400: Client Error for url: https://openstack:9696/v2.0/qos/policies//bandwidth_limit_rules, Unrecognized attribute(s) 'direction'

  It would be desirable for this feature to be available on Ocata, to be
  able to set ingress/egress bandwidth limits on ports.

  [0] https://blueprints.launchpad.net/neutron/+spec/instance-ingress-bw-limit
  [1] https://bugs.launchpad.net/neutron/+bug/1560961
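
  For illustration only, a hedged sketch of the API request involved, made with python-requests against the QoS endpoint shown in the error above (the neutron URL, token and policy id are placeholders, not values from this bug):

      import requests

      NEUTRON = 'https://openstack:9696'
      POLICY_ID = 'qos-policy-id'      # placeholder
      TOKEN = 'keystone-token'         # placeholder

      body = {'bandwidth_limit_rule': {'max_kbps': 300, 'max_burst_kbps': 300}}
      # On Pike and later the rule also accepts a direction; on Ocata this
      # extra key is rejected with "Unrecognized attribute(s) 'direction'":
      # body['bandwidth_limit_rule']['direction'] = 'ingress'

      resp = requests.post(
          '%s/v2.0/qos/policies/%s/bandwidth_limit_rules' % (NEUTRON, POLICY_ID),
          json=body,
          headers={'X-Auth-Token': TOKEN})
      resp.raise_for_status()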

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1841700/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1841700] [NEW] instance ingress bandwidth limiting doesn't works in ocata.

2019-08-27 Thread Jorge Niedbalski
Public bug reported:

[Environment]

Xenial-Ocata deployment

[Description]

The instance ingress bandwidth limit implementation was targeted for
Ocata [0], but the full implementation ingress/egress was done during
the pike [1] cycle.

However, it is not documented or made explicit that the ingress
direction isn't supported in Ocata, which causes the following exception
when --ingress is specified.

$ openstack network qos rule create --type bandwidth-limit --max-kbps 300 --max-burst-kbits 300 --ingress bw-limiter
Failed to create Network QoS rule: BadRequestException: 400: Client Error for url: https://openstack:9696/v2.0/qos/policies//bandwidth_limit_rules, Unrecognized attribute(s) 'direction'

It would be desirable for this feature to be available on Ocata, to be
able to set ingress/egress bandwidth limits on ports.

[0] https://blueprints.launchpad.net/neutron/+spec/instance-ingress-bw-limit
[1] https://bugs.launchpad.net/neutron/+bug/1560961

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1841700

Title:
  instance ingress bandwidth limiting doesn't works in ocata.

Status in neutron:
  New

Bug description:
  [Environment]

  Xenial-Ocata deployment

  [Description]

  The instance ingress bandwidth limit implementation was targeted for
  Ocata [0], but the full implementation ingress/egress was done during
  the pike [1] cycle.

  However, it is not documented or made explicit that the ingress
  direction isn't supported in Ocata, which causes the following
  exception when --ingress is specified.

  $ openstack network qos rule create --type bandwidth-limit --max-kbps 300 --max-burst-kbits 300 --ingress bw-limiter
  Failed to create Network QoS rule: BadRequestException: 400: Client Error for url: https://openstack:9696/v2.0/qos/policies//bandwidth_limit_rules, Unrecognized attribute(s) 'direction'

  It would be desirable for this feature to be available on Ocata, to be
  able to set ingress/egress bandwidth limits on ports.

  [0] https://blueprints.launchpad.net/neutron/+spec/instance-ingress-bw-limit
  [1] https://bugs.launchpad.net/neutron/+bug/1560961

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1841700/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1560961] Re: [RFE] Allow instance-ingress bandwidth limiting

2019-08-26 Thread Jorge Niedbalski
** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1560961

Title:
  [RFE] Allow instance-ingress bandwidth limiting

Status in Ubuntu Cloud Archive:
  New
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  New
Status in neutron source package in Xenial:
  New

Bug description:
  The current implementation of bandwidth limiting rules only supports egress 
bandwidth
  limiting.

  Use cases
  =
  There are cases where ingress bandwidth limiting is more important than
  egress limiting, for example when the workload of the cloud is mostly a 
consumer of data (crawlers, datamining, etc), and administrators need to ensure 
other workloads won't be affected.

  Another example is CSPs, which need to plan and allocate the bandwidth
  provided to customers, or provide different levels of network service.

  API/Model impact
  ===
  A direction field (egress/ingress) will be added to the BandwidthLimiting
  rules; it will default to egress to match the current behaviour and therefore
  be backward compatible.

  Combining egress/ingress would be achieved by including an egress
  bandwidth limit and an ingress bandwidth limit.

  Additional information
  ==
  The CLI and SDK modifications are addressed in 
https://bugs.launchpad.net/python-openstackclient/+bug/1614121

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1560961/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1825882] Re: [SRU] Virsh disk attach errors silently ignored

2019-05-15 Thread Jorge Niedbalski
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1825882

Title:
  [SRU] Virsh disk attach errors silently ignored

Status in Ubuntu Cloud Archive:
  New
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) queens series:
  Fix Committed
Status in OpenStack Compute (nova) rocky series:
  Fix Committed
Status in OpenStack Compute (nova) stein series:
  Fix Committed
Status in nova package in Ubuntu:
  New
Status in nova source package in Bionic:
  New
Status in nova source package in Cosmic:
  New
Status in nova source package in Disco:
  New

Bug description:
  [Impact]

  The following commit (1) is causing volume attachments that fail due
  to libvirt device attach errors to be silently ignored, with Nova reporting
  the attachment as successful.

  It seems that the original intention of the commit was to log a
  condition and re-raise the exception, but if the exception is of type
  libvirt.libvirtError and does not contain the searched pattern, the
  exception is ignored. If you unindent the raise statement, errors are
  reported again.
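
  A minimal, self-contained sketch of that control flow (illustrative names only, not the actual nova code): the buggy variant swallows any libvirt error that does not match the searched pattern, while unindenting the raise restores error reporting.

      class LibvirtError(Exception):
          """Stand-in for libvirt.libvirtError."""

      def attach_buggy(do_attach):
          try:
              do_attach()
          except LibvirtError as ex:
              if 'already exists' in str(ex):   # the "searched pattern"
                  print('logging known error:', ex)
                  raise                         # re-raise happens only in this branch
              # any other error falls through here and is silently ignored

      def attach_fixed(do_attach):
          try:
              do_attach()
          except LibvirtError as ex:
              if 'already exists' in str(ex):
                  print('logging known error:', ex)
              raise                             # unindented: every error propagates

      def failing_attach():
          raise LibvirtError('internal error: unable to attach disk')

      attach_buggy(failing_attach)    # returns silently; attach looks successful
      # attach_fixed(failing_attach)  # would raise, as the caller expects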

  In our case we had ceph/apparmor configuration problems on compute
  nodes which prevented virsh from attaching the device; volumes appeared as
  successfully attached but the corresponding block device was missing in the
  guest VMs. Other libvirt attach error conditions are ignored as well, e.g.
  when a device name is already occupied ('Target vdb already exists', device
  is busy, etc.)

  (1)
  
https://github.com/openstack/nova/commit/78891c2305bff6e16706339a9c5eca99a84e409c

  [Test Case]

  * Deploy any OpenStack version up to Pike, which includes ceph-backed cinder
  * Create a guest VM (openstack server ...)
  * Create a test cinder volume

  $ openstack volume create test --size 10

  * Force a drop on ceph traffic. Run the following command on the nova
  hypervisor on which the server runs.

  $ iptables -A OUTPUT -d ceph-mon-addr -p tcp --dport 6800 -j DROP

  * Attach the volume to a running instance.

  $ openstack server add volume 7151f507-a6b7-4f6d-a4cc-fd223d9feb5d
  742ff117-21ae-4d1b-a52b-5b37955716ff

  * This should cause the volume attachment to fail

  $ virsh domblklist instance-x
  Target Source
  
  vda nova/7151f507-a6b7-4f6d-a4cc-fd223d9feb5d_disk

  No volume should be attached after this step.

  * If the behavior is fixed:

     * Check that openstack server show does not display the volume as attached.
     * Check that proper log entries state the libvirt exception and error.

  * If the behavior isn't fixed:

     * openstack server show will display the volume in the
       volumes_attached property.

  [Expected result]

  * Volume attach fails and a proper exception is logged.

  [Actual result]

  * Volume attach fails but remains connected to the host and no further
  exception gets logged.

  [Regression Potential]

  * We haven't identified any regression potential on this SRU.

  [Other Info]

  * N/A

  Description

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1825882/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1825882] Re: Virsh disk attach errors silently ignored

2019-05-14 Thread Jorge Niedbalski
** Also affects: nova (Ubuntu Disco)
   Importance: Undecided
   Status: New

** Also affects: nova (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Also affects: nova (Ubuntu Cosmic)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1825882

Title:
  Virsh disk attach errors silently ignored

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) queens series:
  Fix Committed
Status in OpenStack Compute (nova) rocky series:
  Fix Committed
Status in OpenStack Compute (nova) stein series:
  Fix Committed
Status in nova package in Ubuntu:
  New
Status in nova source package in Bionic:
  New
Status in nova source package in Cosmic:
  New
Status in nova source package in Disco:
  New

Bug description:
  Description
  ===
  The following commit (1) is causing volume attachments that fail due to
  libvirt device attach errors to be silently ignored, with Nova reporting the
  attachment as successful.

  It seems that the original intention of the commit was to log a
  condition and re-raise the exception, but if the exception is of type
  libvirt.libvirtError and does not contain the searched pattern, the
  exception is ignored. If you unindent the raise statement, errors are
  reported again.

  In our case we had ceph/apparmor configuration problems on compute
  nodes which prevented virsh from attaching the device; volumes appeared as
  successfully attached but the corresponding block device was missing in the
  guest VMs. Other libvirt attach error conditions are ignored as well, e.g.
  when a device name is already occupied ('Target vdb already exists', device
  is busy, etc.)

  (1)
  
https://github.com/openstack/nova/commit/78891c2305bff6e16706339a9c5eca99a84e409c

  Steps to reproduce
  ==
  This is somewhat hacky, but is a quick way to provoke a virsh attach error:
  - virsh detach-disk  vdb
  - update nova & cinder DB as if volume is detached
  - re-attach volume
  - volume is marked as attached, but VM block device is missing

  Expected result
  ===
  - Error 'libvirtError: Requested operation is not valid: target vdb already 
exists' should be raised, and volume not attached

  Actual result
  =
  - Attach successful but virsh block device not created

  Environment
  ===
  - Openstack version Queens

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1825882/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1826523] Re: libvirtError exceptions during volume attach leave volume connected to host

2019-05-14 Thread Jorge Niedbalski
** Also affects: nova (Ubuntu Cosmic)
   Importance: Undecided
   Status: New

** Also affects: nova (Ubuntu Disco)
   Importance: Undecided
   Status: New

** Also affects: nova (Ubuntu Bionic)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1826523

Title:
  libvirtError exceptions during volume attach leave volume connected to
  host

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) queens series:
  Fix Committed
Status in OpenStack Compute (nova) rocky series:
  Fix Committed
Status in OpenStack Compute (nova) stein series:
  Fix Committed
Status in nova package in Ubuntu:
  New
Status in nova source package in Bionic:
  New
Status in nova source package in Cosmic:
  New
Status in nova source package in Disco:
  New

Bug description:
  Description
  ===

  In addition to bug #1825882, where libvirtError exceptions are not
  raised correctly when attaching volumes to domains, the underlying
  volumes are not disconnected from the host.
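
  A minimal, self-contained sketch of the cleanup pattern this implies (function names are illustrative, not the actual nova code): if the guest-side attach fails, the host-side connection should be torn down before the error propagates.

      class LibvirtError(Exception):
          """Stand-in for libvirt.libvirtError."""

      def connect_volume():
          print('volume connected to host')
          return {'target': 'vdb'}

      def attach_to_guest(connection_info):
          raise LibvirtError('Requested operation is not valid: '
                             'target vdb already exists')

      def disconnect_volume(connection_info):
          print('volume disconnected from host')

      def attach_volume():
          connection_info = connect_volume()
          try:
              attach_to_guest(connection_info)
          except Exception:
              # On any guest-attach failure, tear down the host-side
              # connection before re-raising, so nothing is left dangling.
              disconnect_volume(connection_info)
              raise

      try:
          attach_volume()
      except LibvirtError as ex:
          print('attach failed as expected:', ex)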

  Steps to reproduce
  ==

  - virsh detach-disk  vdb
  - update nova & cinder DB as if volume is detached
  - re-attach volume

  Expected result
  ===
  Volume attach fails and the volume is disconnected from the host.

  Actual result
  =
  Volume attach fails but the volume remains connected to the host.

  Environment
  ===
  1. Exact version of OpenStack you are running. See the following
list for all releases: http://docs.openstack.org/releases/

 master to stable/queens

  2. Which hypervisor did you use?
 (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
 What's the version of that?

 Libvirt + QEMU/KVM

  2. Which storage type did you use?
 (For example: Ceph, LVM, GPFS, ...)
 What's the version of that?

 N/A

  3. Which networking type did you use?
 (For example: nova-network, Neutron with OpenVSwitch, ...)

 N/A

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1826523/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1826523] Re: libvirtError exceptions during volume attach leave volume connected to host

2019-05-14 Thread Jorge Niedbalski
** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1826523

Title:
  libvirtError exceptions during volume attach leave volume connected to
  host

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) queens series:
  Fix Committed
Status in OpenStack Compute (nova) rocky series:
  Fix Committed
Status in OpenStack Compute (nova) stein series:
  Fix Committed
Status in nova package in Ubuntu:
  New

Bug description:
  Description
  ===

  In addition to bug #1825882, where libvirtError exceptions are not
  raised correctly when attaching volumes to domains, the underlying
  volumes are not disconnected from the host.

  Steps to reproduce
  ==

  - virsh detach-disk  vdb
  - update nova & cinder DB as if volume is detached
  - re-attach volume

  Expected result
  ===
  Volume attach fails and the volume is disconnected from the host.

  Actual result
  =
  Volume attach fails but the volume remains connected to the host.

  Environment
  ===
  1. Exact version of OpenStack you are running. See the following
list for all releases: http://docs.openstack.org/releases/

 master to stable/queens

  2. Which hypervisor did you use?
 (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
 What's the version of that?

 Libvirt + QEMU/KVM

  2. Which storage type did you use?
 (For example: Ceph, LVM, GPFS, ...)
 What's the version of that?

 N/A

  3. Which networking type did you use?
 (For example: nova-network, Neutron with OpenVSwitch, ...)

 N/A

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1826523/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1825882] Re: Virsh disk attach errors silently ignored

2019-05-14 Thread Jorge Niedbalski
** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1825882

Title:
  Virsh disk attach errors silently ignored

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) queens series:
  Fix Committed
Status in OpenStack Compute (nova) rocky series:
  Fix Committed
Status in OpenStack Compute (nova) stein series:
  Fix Committed
Status in nova package in Ubuntu:
  New

Bug description:
  Description
  ===
  The following commit (1) is causing volume attachments that fail due to
  libvirt device attach errors to be silently ignored, with Nova reporting the
  attachment as successful.

  It seems that the original intention of the commit was to log a
  condition and re-raise the exception, but if the exception is of type
  libvirt.libvirtError and does not contain the searched pattern, the
  exception is ignored. If you unindent the raise statement, errors are
  reported again.

  In our case we had ceph/apparmor configuration problems on compute
  nodes which prevented virsh from attaching the device; volumes appeared as
  successfully attached but the corresponding block device was missing in the
  guest VMs. Other libvirt attach error conditions are ignored as well, e.g.
  when a device name is already occupied ('Target vdb already exists', device
  is busy, etc.)

  (1)
  
https://github.com/openstack/nova/commit/78891c2305bff6e16706339a9c5eca99a84e409c

  Steps to reproduce
  ==
  This is somewhat hacky, but is a quick way to provoke a virsh attach error:
  - virsh detach-disk  vdb
  - update nova & cinder DB as if volume is detached
  - re-attach volume
  - volume is marked as attached, but VM block device is missing

  Expected result
  ===
  - Error 'libvirtError: Requested operation is not valid: target vdb already 
exists' should be raised, and volume not attached

  Actual result
  =
  - Attach successful but virsh block device not created

  Environment
  ===
  - Openstack version Queens

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1825882/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1768094] Re: nova_api nova-manage cell_v2 list_hosts not displaying all hypervisors

2018-05-07 Thread Jorge Niedbalski
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1768094

Title:
  nova_api nova-manage cell_v2 list_hosts not displaying all hypervisors

Status in kolla-ansible:
  In Progress
Status in OpenStack Compute (nova):
  New

Bug description:
  [Environment]

  Kernel 4.14
  Queens release

  [Description]

  While doing a multinode deployment [0], not all of the hypervisors are
  registered in the default nova cell right after the deployment has
  completed.

  The machines that fail to be enlisted are, coincidentally, part of
  multiple service groups (controllers/compute); however, the machines
  that are exclusively part of the compute group are enlisted correctly.

  /var/log/kolla/nova/nova-scheduler.log:2018-04-30 14:45:28.516 7 INFO 
nova.scheduler.host_manager [req-c719e3c6-ceb0-4a39-9997-59799f785a72 - - - - 
-] Host mapping not found for host xxx-09. Not tracking instance info for this 
host.
  /var/log/kolla/nova/nova-scheduler.log:2018-04-30 14:45:51.388 7 INFO 
nova.scheduler.host_manager [req-5ba217bb-f449-4a65-a875-8251191df434 - - - - 
-] Host mapping not found for host xxx-04. Not tracking instance info for this 
host.
  /var/log/kolla/nova/nova-scheduler.log:2018-04-30 14:45:56.353 7 INFO 
nova.scheduler.host_manager [req-6e44a9e0-9b90-41aa-826d-65ead111a1ed - - - - 
-] Host mapping not found for host xxx-10. Not tracking instance info for this 
host.
  /var/log/kolla/nova/nova-scheduler.log:2018-04-30 14:46:16.451 7 INFO 
nova.scheduler.host_manager [req-b9755302-3eca-47a8-90c3-7e2b72537956 - - - - 
-] Host mapping not found for host xxx-02. Not tracking instance info for this 
host.
  /var/log/kolla/nova/nova-scheduler.log:2018-04-30 14:46:37.436 7 INFO 
nova.scheduler.host_manager [req-86348de2-ee5c-4a38-9851-caecdfd5374e - - - - 
-] Host mapping not found for host xxx-08. Not tracking instance info for this 
host.
  /var/log/kolla/nova/nova-scheduler.log:2018-04-30 14:46:37.779 7 INFO 
nova.scheduler.host_manager [req-93183d20-8ac7-4c02-9d45-288939679a09 - - - - 
-] Host mapping not found for host xxx-03. Not tracking instance info for this 
host.
  /var/log/kolla/nova/nova-scheduler.log:2018-04-30 14:47:10.784 7 INFO 
nova.scheduler.host_manager [req-0c008505-c0b5-4f44-8a04-569d2013d63b - - - - 
-] Host mapping not found for host xxx-01. Not tracking instance info for this 
host.
  /var/log/kolla/nova/nova-scheduler.log:2018-04-30 14:47:18.598 7 INFO 
nova.scheduler.host_manager [req-7afb8034-470a-408e-b9c8-b1611df1b5de - - - - 
-] Host mapping not found for host xxx-07. Not tracking instance info for this 
host.
  /var/log/kolla/nova/nova-scheduler.log:2018-04-30 14:47:19.196 7 INFO 
nova.scheduler.host_manager [req-7dfb3e0c-a734-49b3-865e-1856a7939e55 - - - - 
-] Host mapping not found for host xxx-05. Not tracking instance info for this 
host.
  /var/log/kolla/nova/nova-scheduler.log:2018-04-30 14:47:21.354 7 INFO 
nova.scheduler.host_manager [req-6fce72dc-8716-4ac4-b2bc-cd13af812dd8 - - - - 
-] Host mapping not found for host xxx-06. Not tracking instance info for this 
host.
  /var/log/kolla/nova/nova-scheduler.log.1:2018-04-28 04:22:17.212 7 INFO 
nova.scheduler.host_manager [req-9795b7c4-9ce2-4299-b198-5697eef8b3bf - - - - 
-] Host mapping not found for host xxx-05. Not tracking instance info for this 
host.
  /var/log/kolla/nova/nova-scheduler.log.1:2018-04-28 04:22:17.356 7 INFO 
nova.scheduler.host_manager [req-6f01945d-9405-4364-9597-d7444a58bd91 - - - - 
-] Host mapping not found for host xxx-06. Not tracking instance info for this 
host.
  /var/log/kolla/nova/nova-scheduler.log.1:2018-04-28 04:22:17.385 7 INFO 
nova.scheduler.host_manager [req-f5de3e7c-5034-4980-9897-59b56a9257ee - - - - 
-] Host mapping not found for host xxx-04. Not tracking instance info for this 
host.
  /var/log/kolla/nova/nova-scheduler.log.1:2018-04-28 04:22:17.600 7 INFO 
nova.scheduler.host_manager [req-c506d111-d309-422a-b4fb-afe6620dd838 - - - - 
-] Host mapping not found for host xxx-07. Not tracking instance info for this 
host.
  /var/log/kolla/nova/nova-scheduler.log.1:2018-04-28 04:22:17.744 7 INFO 
nova.scheduler.host_manager [req-7d136730-e3f4-49db-bd3d-260516eae96a - - - - 
-] Host mapping not found for host xxx-03. Not tracking instance info for this 
host.
  /var/log/kolla/nova/nova-scheduler.log.1:2018-04-28 04:22:32.841 7 INFO 
nova.scheduler.host_manager [req-d227c40d-9a4b-4fbc-84ed-87e00cb01f4e - - - - 
-] Host mapping not found for host uk-dc-moonshot-cartridge-02. Not tracking 
instance info for this host.
  /var/log/kolla/nova/nova-scheduler.log.1:2018-04-28 04:22:37.443 7 INFO 
nova.scheduler.host_manager [req-11730392-be04-4d32-8aef-9384da570cdf - - - - 
-] Host mapping not found for host xxx-08. Not tracking instance info for this 
host.
  /var/log/kolla/nova/nova-scheduler.log.1:2018-04-28 

[Yahoo-eng-team] [Bug 1502136] Re: Everything returns 403 if show_multiple_locations is true and get_image_location policy is set

2017-06-23 Thread Jorge Niedbalski
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Changed in: glance (Ubuntu Trusty)
   Status: In Progress => New

** Changed in: glance (Ubuntu Trusty)
 Assignee: Jorge Niedbalski (niedbalski) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1502136

Title:
  Everything returns 403 if show_multiple_locations is true and
  get_image_location policy is set

Status in Ubuntu Cloud Archive:
  New
Status in Glance:
  Fix Released
Status in glance package in Ubuntu:
  Fix Released
Status in glance source package in Trusty:
  New
Status in glance source package in Xenial:
  Fix Released

Bug description:
  If, in glance-api.conf you set:

   show_multiple_locations = true

  Things work as expected:

   $ glance --os-image-api-version 2 image-show 13ae74f0-74bf-4792-a8bb-7c622abc5410
   +------------------+-----------------------------------------------------------------------------+
   | Property         | Value                                                                       |
   +------------------+-----------------------------------------------------------------------------+
   | checksum         | 9cb02fe7fcac26f8a25d6db3109063ae                                            |
   | container_format | bare                                                                        |
   | created_at       | 2015-10-02T12:43:33Z                                                        |
   | disk_format      | raw                                                                         |
   | id               | 13ae74f0-74bf-4792-a8bb-7c622abc5410                                        |
   | locations        | [{"url": "swift+config://ref1/glance/13ae74f0-74bf-4792-a8bb-7c622abc5410", |
   |                  | "metadata": {}}]                                                            |
   | min_disk         | 0                                                                           |
   | min_ram          | 0                                                                           |
   | name             | good-image                                                                  |
   | owner            | 88cffb9c8aee457788066c97b359585b                                            |
   | protected        | False                                                                       |
   | size             | 145                                                                         |
   | status           | active                                                                      |
   | tags             | []                                                                          |
   | updated_at       | 2015-10-02T12:43:34Z                                                        |
   | virtual_size     | None                                                                        |
   | visibility       | private                                                                     |
   +------------------+-----------------------------------------------------------------------------+

  but if you then set the get_image_location policy to role:admin, most
  calls return 403:

   $ glance --os-image-api-version 2 image-list
   403 Forbidden: You are not authorized to complete this action. (HTTP 403)

   $ glance --os-image-api-version 2 image-show 13ae74f0-74bf-4792-a8bb-7c622abc5410
   403 Forbidden: You are not authorized to complete this action. (HTTP 403)

   $ glance --os-image-api-version 2 image-delete 13ae74f0-74bf-4792-a8bb-7c622abc5410
   403 Forbidden: You are not authorized to complete this action. (HTTP 403)

  etc.

  As https://review.openstack.org/#/c/48401/ says:

   1. A user should be able to list/show/update/download image without
   needing permission on get_image_location.
   2. A policy failure should result in a 403 return code. We're
   getting a 500

  This is v2 only, v1 works ok.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1502136/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1502136] Re: Everything returns 403 if show_multiple_locations is true and get_image_location policy is set

2017-06-23 Thread Jorge Niedbalski
** Changed in: glance (Ubuntu Xenial)
   Status: New => Fix Released

** Changed in: glance (Ubuntu Trusty)
   Status: New => In Progress

** Changed in: glance (Ubuntu Trusty)
   Importance: Undecided => High

** Changed in: glance (Ubuntu Trusty)
 Assignee: (unassigned) => Jorge Niedbalski (niedbalski)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1502136

Title:
  Everything returns 403 if show_multiple_locations is true and
  get_image_location policy is set

Status in Glance:
  Fix Released
Status in glance package in Ubuntu:
  Fix Released
Status in glance source package in Trusty:
  In Progress
Status in glance source package in Xenial:
  Fix Released

Bug description:
  If, in glance-api.conf you set:

   show_multiple_locations = true

  Things work as expected:

   $ glance --os-image-api-version 2 image-show 13ae74f0-74bf-4792-a8bb-7c622abc5410
   +------------------+-----------------------------------------------------------------------------+
   | Property         | Value                                                                       |
   +------------------+-----------------------------------------------------------------------------+
   | checksum         | 9cb02fe7fcac26f8a25d6db3109063ae                                            |
   | container_format | bare                                                                        |
   | created_at       | 2015-10-02T12:43:33Z                                                        |
   | disk_format      | raw                                                                         |
   | id               | 13ae74f0-74bf-4792-a8bb-7c622abc5410                                        |
   | locations        | [{"url": "swift+config://ref1/glance/13ae74f0-74bf-4792-a8bb-7c622abc5410", |
   |                  | "metadata": {}}]                                                            |
   | min_disk         | 0                                                                           |
   | min_ram          | 0                                                                           |
   | name             | good-image                                                                  |
   | owner            | 88cffb9c8aee457788066c97b359585b                                            |
   | protected        | False                                                                       |
   | size             | 145                                                                         |
   | status           | active                                                                      |
   | tags             | []                                                                          |
   | updated_at       | 2015-10-02T12:43:34Z                                                        |
   | virtual_size     | None                                                                        |
   | visibility       | private                                                                     |
   +------------------+-----------------------------------------------------------------------------+

  but if you then set the get_image_location policy to role:admin, most
  calls return 403:

   $ glance --os-image-api-version 2 image-list
   403 Forbidden: You are not authorized to complete this action. (HTTP 403)

   $ glance --os-image-api-version 2 image-show 13ae74f0-74bf-4792-a8bb-7c622abc5410
   403 Forbidden: You are not authorized to complete this action. (HTTP 403)

   $ glance --os-image-api-version 2 image-delete 13ae74f0-74bf-4792-a8bb-7c622abc5410
   403 Forbidden: You are not authorized to complete this action. (HTTP 403)

  etc.

  As https://review.openstack.org/#/c/48401/ says:

   1. A user should be able to list/show/update/download image without
   needing permission on get_image_location.
   2. A policy failure should result in a 403 return code. We're
   getting a 500

  This is v2 only, v1 works ok.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1502136/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1502136] Re: Everything returns 403 if show_multiple_locations is true and get_image_location policy is set

2017-06-23 Thread Jorge Niedbalski
** Also affects: glance (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: glance (Ubuntu)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1502136

Title:
  Everything returns 403 if show_multiple_locations is true and
  get_image_location policy is set

Status in Glance:
  Fix Released
Status in glance package in Ubuntu:
  Fix Released
Status in glance source package in Trusty:
  In Progress
Status in glance source package in Xenial:
  Fix Released

Bug description:
  If, in glance-api.conf you set:

   show_multiple_locations = true

  Things work as expected:

   $ glance --os-image-api-version 2 image-show 13ae74f0-74bf-4792-a8bb-7c622abc5410
   +------------------+-----------------------------------------------------------------------------+
   | Property         | Value                                                                       |
   +------------------+-----------------------------------------------------------------------------+
   | checksum         | 9cb02fe7fcac26f8a25d6db3109063ae                                            |
   | container_format | bare                                                                        |
   | created_at       | 2015-10-02T12:43:33Z                                                        |
   | disk_format      | raw                                                                         |
   | id               | 13ae74f0-74bf-4792-a8bb-7c622abc5410                                        |
   | locations        | [{"url": "swift+config://ref1/glance/13ae74f0-74bf-4792-a8bb-7c622abc5410", |
   |                  | "metadata": {}}]                                                            |
   | min_disk         | 0                                                                           |
   | min_ram          | 0                                                                           |
   | name             | good-image                                                                  |
   | owner            | 88cffb9c8aee457788066c97b359585b                                            |
   | protected        | False                                                                       |
   | size             | 145                                                                         |
   | status           | active                                                                      |
   | tags             | []                                                                          |
   | updated_at       | 2015-10-02T12:43:34Z                                                        |
   | virtual_size     | None                                                                        |
   | visibility       | private                                                                     |
   +------------------+-----------------------------------------------------------------------------+

  but if you then set the get_image_location policy to role:admin, most
  calls return 403:

   $ glance --os-image-api-version 2 image-list
   403 Forbidden: You are not authorized to complete this action. (HTTP 403)

   $ glance --os-image-api-version 2 image-show 13ae74f0-74bf-4792-a8bb-7c622abc5410
   403 Forbidden: You are not authorized to complete this action. (HTTP 403)

   $ glance --os-image-api-version 2 image-delete 13ae74f0-74bf-4792-a8bb-7c622abc5410
   403 Forbidden: You are not authorized to complete this action. (HTTP 403)

  etc.

  As https://review.openstack.org/#/c/48401/ says:

   1. A user should be able to list/show/update/download image without
   needing permission on get_image_location.
   2. A policy failure should result in a 403 return code. We're
   getting a 500

  This is v2 only, v1 works ok.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1502136/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1649616] Re: Keystone Token Flush job does not complete in HA deployed environment

2017-05-29 Thread Jorge Niedbalski
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Tags added: sts

** Also affects: keystone (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1649616

Title:
  Keystone Token Flush job does not complete in HA deployed environment

Status in Ubuntu Cloud Archive:
  New
Status in OpenStack Identity (keystone):
  Fix Released
Status in puppet-keystone:
  Triaged
Status in tripleo:
  Triaged
Status in keystone package in Ubuntu:
  New

Bug description:
  The Keystone token flush job can get into a state where it will never
  complete because the transaction size exceeds the MySQL Galera
  transaction size limit, wsrep_max_ws_size (1073741824).
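
  A hedged sketch of the batching idea (the table and column names, and the connection URL, are assumptions, not keystone's actual implementation): deleting expired rows in small, separate transactions keeps every Galera write-set well under wsrep_max_ws_size.

      import sqlalchemy as sa

      def flush_expired_tokens(engine, batch_size=1000):
          """Delete expired token rows in small, separate transactions."""
          stmt = sa.text(
              "DELETE FROM token WHERE expires < UTC_TIMESTAMP() LIMIT :batch")
          while True:
              # One transaction per batch, so no single write-set grows too large.
              with engine.begin() as conn:
                  deleted = conn.execute(stmt, {"batch": batch_size}).rowcount
              if deleted == 0:
                  break

      # Example (connection URL is a placeholder):
      # engine = sa.create_engine("mysql+pymysql://keystone:secret@db/keystone")
      # flush_expired_tokens(engine)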

  
  Steps to Reproduce:
  1. Authenticate many times
  2. Observe that the keystone token flush job runs for a very long time depending on disk (>20 hours in my environment)
  3. Observe errors in mysql.log indicating a transaction that is too large

  
  Actual results:
  Expired tokens are not actually flushed from the database, and no errors appear in keystone.log; errors appear only in mysql.log.

  
  Expected results:
  Expired tokens to be removed from the database

  
  Additional info:
  It is likely that you can demonstrate this with fewer than 1 million tokens, as the >1 million row token table is larger than 13GiB and the max transaction size is 1GiB; my token benchmarking Browbeat job creates more than needed.

  Once the token flush job cannot complete, the token table will never
  decrease in size and eventually the cloud will run out of disk space.

  Furthermore, the flush job will consume disk I/O resources. This was
  demonstrated on slow disks (a single 7.2K SATA disk). On faster disks
  you will have more capacity to generate tokens, so you can exceed the
  transaction size even faster.

  Log evidence:
  [root@overcloud-controller-0 log]# grep " Total expired" 
/var/log/keystone/keystone.log
  2016-12-08 01:33:40.530 21614 INFO keystone.token.persistence.backends.sql 
[-] Total expired tokens removed: 1082434
  2016-12-09 09:31:25.301 14120 INFO keystone.token.persistence.backends.sql 
[-] Total expired tokens removed: 1084241
  2016-12-11 01:35:39.082 4223 INFO keystone.token.persistence.backends.sql [-] 
Total expired tokens removed: 1086504
  2016-12-12 01:08:16.170 32575 INFO keystone.token.persistence.backends.sql 
[-] Total expired tokens removed: 1087823
  2016-12-13 01:22:18.121 28669 INFO keystone.token.persistence.backends.sql 
[-] Total expired tokens removed: 1089202
  [root@overcloud-controller-0 log]# tail mysqld.log 
  161208  1:33:41 [Warning] WSREP: transaction size limit (1073741824) 
exceeded: 1073774592
  161208  1:33:41 [ERROR] WSREP: rbr write fail, data_len: 0, 2
  161209  9:31:26 [Warning] WSREP: transaction size limit (1073741824) 
exceeded: 1073774592
  161209  9:31:26 [ERROR] WSREP: rbr write fail, data_len: 0, 2
  161211  1:35:39 [Warning] WSREP: transaction size limit (1073741824) 
exceeded: 1073774592
  161211  1:35:40 [ERROR] WSREP: rbr write fail, data_len: 0, 2
  161212  1:08:16 [Warning] WSREP: transaction size limit (1073741824) 
exceeded: 1073774592
  161212  1:08:17 [ERROR] WSREP: rbr write fail, data_len: 0, 2
  161213  1:22:18 [Warning] WSREP: transaction size limit (1073741824) 
exceeded: 1073774592
  161213  1:22:19 [ERROR] WSREP: rbr write fail, data_len: 0, 2

  
  A graph of the disk utilization issue is attached. The entire job in that graph runs from the first spike in disk utilization (~5:18 UTC) and culminates in about 90 minutes of pegging the disk (between 1:09 UTC and 2:43 UTC).

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1649616/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1608934] Re: ephemeral disk creation fails for local storage with image type raw/lvm

2016-08-16 Thread Jorge Niedbalski
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: nova (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1608934

Title:
  ephemeral disk creation fails for local storage with image type
  raw/lvm

Status in Ubuntu Cloud Archive:
  New
Status in OpenStack Compute (nova):
  In Progress
Status in nova package in Ubuntu:
  New

Bug description:
  Description
  ===
  I am currently trying to launch an instance in my Mitaka cluster using a
  flavor with both ephemeral and root storage. Whenever I try to start the
  instance I run into a DiskNotFound error (see trace below). Starting
  instances without ephemeral storage works perfectly fine, and the root disk
  is created as expected in /var/lib/nova/instances/$INSTANCEID/disk.

  Steps to reproduce
  ==
  1. Create a flavor with ephemeral and root storage.
  2. Start an instance with that flavor.

  Expected result
  ===
  Instance starts and the ephemeral disk is created in
  /var/lib/nova/instances/$INSTANCEID/disk.eph0 or disk.local (not sure where
  the switch-case for the naming is).

  Actual result
  =
  The instance does not start. The ephemeral disk appears to be created at
  /var/lib/nova/instances/$INSTANCEID/disk.eph0, but nova checks
  /var/lib/nova/instances/_base/ephemeral_* for disk_size.

  TRACE: http://pastebin.com/raw/TwtiNLY2
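
  To make the mismatch concrete, here is a small illustrative sketch of the
  two paths involved: the per-instance ephemeral disk that does get created,
  and the cached backing file under _base whose name nova derives from the
  ephemeral size (the exact naming scheme shown here is an assumption for
  illustration, not taken from this report).

    import os

    INSTANCES_PATH = "/var/lib/nova/instances"   # default instances_path

    def ephemeral_paths(instance_uuid, eph_gb, os_type="default"):
        # Assumed cache-file naming; nova's real name may differ.
        backing = os.path.join(INSTANCES_PATH, "_base",
                               "ephemeral_%s_%s" % (eph_gb, os_type))
        per_instance = os.path.join(INSTANCES_PATH, instance_uuid, "disk.eph0")
        return backing, per_instance

    backing, per_instance = ephemeral_paths("$INSTANCEID", 10)
    for path in (backing, per_instance):
        print(path, "exists:", os.path.exists(path))

  In the failing case the per-instance file exists while the _base backing
  file nova inspects for disk_size does not, which is consistent with the
  DiskNotFound error described above.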

  Environment
  ===
  I am running the latest OpenStack Mitaka on Ubuntu 16.04, with libvirt +
  KVM as the hypervisor (also the latest stable versions in xenial).

  Config
  ==

  nova.conf:

  ...
  [libvirt]
  images_type = raw
  rbd_secret_uuid = XXX
  virt_type = kvm
  inject_key = true
  snapshot_image_format = raw
  disk_cachemodes = "network=writeback"
  rng_dev_path = /dev/random
  rbd_user = cinder
  ...

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1608934/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1288438] Re: Neutron server takes a long time to recover from VIP move

2016-03-04 Thread Jorge Niedbalski
** Changed in: neutron (Ubuntu Trusty)
 Assignee: (unassigned) => Mario Splivalo (mariosplivalo)

** Changed in: neutron (Ubuntu)
   Importance: Undecided => Medium

** Changed in: neutron (Ubuntu)
   Status: New => Fix Released

** Changed in: neutron (Ubuntu Trusty)
   Status: New => In Progress

** Changed in: neutron (Ubuntu Trusty)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1288438

Title:
  Neutron server takes a long time to recover from VIP move

Status in Fuel for OpenStack:
  Fix Committed
Status in neutron:
  Fix Released
Status in oslo-incubator:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Trusty:
  In Progress

Bug description:
  Neutron waits sequentially for read_timeout seconds on each connection in
  its connection pool. The default pool_size is 10, so with a 60-second
  read_timeout it takes 10 minutes for the Neutron server to become available
  again after the VIP is moved.
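
  The arithmetic behind that claim is simply pool_size multiplied by
  read_timeout, since the stale connections are retried one after another; a
  quick sketch (values taken from this report):

    def worst_case_recovery_seconds(pool_size, read_timeout):
        # Each stale connection blocks until the MySQL read_timeout expires,
        # and they are worked through sequentially.
        return pool_size * read_timeout

    print(worst_case_recovery_seconds(10, 60))  # defaults above: 600 s (10 minutes)
    print(worst_case_recovery_seconds(7, 30))   # tuned values below: 210 s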

  This is log output from neutron-server after the VIP has been moved:
  2014-03-05 17:48:23.844 9899 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 17:49:23.887 9899 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 17:50:24.055 9899 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 17:51:24.067 9899 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 17:52:24.079 9899 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 17:53:24.115 9899 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 17:54:24.123 9899 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 17:55:24.131 9899 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 17:56:24.143 9899 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 17:57:24.163 9899 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')

  Here is the log output after the pool_size was changed to 7 and the 
read_timeout to 30.
  2014-03-05 18:50:25.300 15731 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 18:50:55.331 15731 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 18:51:25.351 15731 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 18:51:55.387 15731 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 18:52:25.415 15731 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 18:52:55.427 15731 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 18:53:25.439 15731 WARNING 
neutron.openstack.common.db.sqlalchemy.session [-] Got mysql server has gone 
away: (2013, 'Lost connection to MySQL server during query')
  2014-03-05 18:53:25.549 15731 INFO urllib3.connectionpool [-] Starting new 
HTTP connection (1): 192.168.0.2

To manage notifications about this bug go to:
https://bugs.launchpad.net/fuel/+bug/1288438/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488645] [NEW] Fix typo on b/nova/tests/unit/api/ec2/test_api.py

2015-08-25 Thread Jorge Niedbalski
Public bug reported:

[Description]

Test /nova/tests/unit/api/ec2/test_api.py has a typo in the method name
test_properties_root_defice_name; it should be called
test_properties_root_device_name.

** Affects: nova
 Importance: Undecided
 Assignee: Jorge Niedbalski (niedbalski)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Jorge Niedbalski (niedbalski)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1488645

Title:
  Fix typo on b/nova/tests/unit/api/ec2/test_api.py

Status in OpenStack Compute (nova):
  New

Bug description:
  [Description]

  Test /nova/tests/unit/api/ec2/test_api.py has a typo in the method name
  test_properties_root_defice_name; it should be called
  test_properties_root_device_name.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1488645/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1452032] Re: Device descriptor not removed with different iqn and multipath enabled.

2015-08-05 Thread Jorge Niedbalski
@johngarbutt,

I think this is still affecting stable releases.

** Changed in: nova
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1452032

Title:
  Device descriptor not removed with different iqn and multipath
  enabled.

Status in OpenStack Compute (nova):
  Confirmed
Status in Ubuntu:
  New

Bug description:
  [Environment]

  OpenStack Kilo
  Trusty 14.04.4

  [Description]

  If the attached multipath devices do not have the same IQN (as with the
  regular LVM+iSCSI backend), in_use will be false.

  In that case, _disconnect_volume_multipath_iscsi() returns without
  calling _remove_multipath_device_descriptor().
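
  A simplified sketch of the control flow described above (not the actual
  nova code; the function bodies are placeholders) shows why the stale
  descriptor is left behind, and where a fix along the lines implied by the
  report would go:

    def _disconnect_volume_multipath_iscsi(multipath_device, in_use):
        if not in_use:
            # Buggy behaviour: returning early here skips the descriptor
            # cleanup when the devices do not share the volume's IQN.
            # A fix would call _remove_multipath_device_descriptor() before
            # (or instead of) this early return.
            return
        _remove_multipath_device_descriptor(multipath_device)

    def _remove_multipath_device_descriptor(multipath_device):
        # Stand-in for the real cleanup (roughly: multipath -f <device>).
        print("flushing multipath descriptor", multipath_device)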

  [Reproduction]

  1) Enable cinder LVM ISCSI on /etc/cinder/cinder.conf

  volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver

  2) Enable iscsi_use_multipath on /etc/nova/nova.conf on your compute
  nodes:

  iscsi_use_multipath = True

  3) Create 3 cinder volumes

  $ cinder create 1
  $ cinder create 1
  $ cinder create 1

  $ cinder list

  ubuntu@niedbalski2-bastion:~/specs/1374999$ cinder list
  
  +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
  |                  ID                  | Status | Display Name | Size | Volume Type | Bootable |             Attached to              |
  +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
  | 10844be6-8f86-414f-a10e-e1a31e2ba6e7 | in-use |     None     |  1   |     None    |  false   | b0a14447-5740-408a-b96f-a1e904b229e5 |
  | 1648d24c-0d65-4377-9fa5-6d3aeb8b1291 | in-use |     None     |  1   |     None    |  false   | b0a14447-5740-408a-b96f-a1e904b229e5 |
  | 53d6bb4e-2ca2-45ab-9ed1-887b1df2ff8f | in-use |     None     |  1   |     None    |  false   | b0a14447-5740-408a-b96f-a1e904b229e5 |
  +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+

  4) Attach them to nova

  $ nova volume-attach instance_id 10844be6-8f86-414f-a10e-e1a31e2ba6e7
  $ nova volume-attach instance_id 1648d24c-0d65-4377-9fa5-6d3aeb8b1291
  $ nova volume-attach instance_id 53d6bb4e-2ca2-45ab-9ed1-887b1df2ff8f

  5) Check on the nova-compute unit for the current multipath/session
  status

  tcp: [1] 10.5.1.43:3260,1 
iqn.2010-10.org.openstack:volume-10844be6-8f86-414f-a10e-e1a31e2ba6e7
  tcp: [2] 10.5.1.43:3260,1 
iqn.2010-10.org.openstack:volume-1648d24c-0d65-4377-9fa5-6d3aeb8b1291
  tcp: [3] 10.5.1.43:3260,1 
iqn.2010-10.org.openstack:volume-53d6bb4e-2ca2-45ab-9ed1-887b1df2ff8f

  Multipath:

  root@juju-1374999-machine-10:/home/ubuntu# multipath -ll
  330030001 dm-2 IET,VIRTUAL-DISK
  size=1.0G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=1 status=active
    `- 10:0:0:1 sdb 8:16   active ready  running
  330010001 dm-0 IET,VIRTUAL-DISK
  size=1.0G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=1 status=active
    `- 8:0:0:1  sdg 8:96   active ready  running
  330020001 dm-1 IET,VIRTUAL-DISK
  size=1.0G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=1 status=active
    `- 9:0:0:1  sda 8:0active ready  running

  6) Detach the current volumes.

  First.

  ubuntu@niedbalski2-bastion:~/specs/1374999$ nova volume-detach
  b0a14447-5740-408a-b96f-a1e904b229e5 10844be6-8f86-414f-a10e-
  e1a31e2ba6e7

  ubuntu@niedbalski2-bastion:~/specs/1374999$ juju ssh 10 sudo multipath -ll
  330030001 dm-2 IET,VIRTUAL-DISK
  size=1.0G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=1 status=active
    `- 10:0:0:1 sdb 8:16   active ready  running
  330010001 dm-0 ,
  size=1.0G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=0 status=active
    `- #:#:#:#  -   #:#active faulty running
  330020001 dm-1 IET,VIRTUAL-DISK
  size=1.0G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=1 status=active
    `- 9:0:0:1  sda 8:0active ready  running

  Second raises the faulty state

  330030001 dm-2 IET,VIRTUAL-DISK
  size=1.0G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=1 status=active
    `- 10:0:0:1 sdb 8:16   active ready  running
  330010001 dm-0 ,
  size=1.0G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=0 status=active
    `- #:#:#:#  -   #:#active faulty running
  330020001 dm-1 ,
  size=1.0G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=0 status=active
    `- #:#:#:#  -   #:#active faulty running

  Third, raises the faulty state also

  sudo: unable to resolve host juju-1374999-machine-10
  330030001 dm-2 ,
  size=1.0G features='0' hwhandler='0' wp=rw
  `-+- 

[Yahoo-eng-team] [Bug 1419823] Re: Nullable image description crashes v2 client

2015-07-30 Thread Jorge Niedbalski
** Changed in: glance (Ubuntu)
   Status: Confirmed => Fix Released

** No longer affects: cloud-archive

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1419823

Title:
  Nullable image description crashes v2 client

Status in Glance:
  Fix Released
Status in Glance kilo series:
  Fix Released
Status in glance package in Ubuntu:
  Fix Released

Bug description:
  When the image description is somehow set to None, the glanceclient v2
  image-list crashes (as do image-show and image-update for that particular
  image). The only way to list all images is then to use the v1 client,
  which is more tolerant in this case.

  Steps to reproduce:

  1. Open Horizon and go to the edit page of any image.
  2. Set the description to anything, e.g. 123, and save.
  3. Open the image edit page again, remove the description and save it.
  4. List all images using glanceclient v2: glance --os-image-api-version 2 image-list
  5. Be sad because of the raised exception:

  None is not of type u'string'

  Failed validating u'type' in schema[u'additionalProperties']:
  {u'type': u'string'}

  On instance[u'description']:
  None

  While investigating the issue I found that the additionalProperties schema
  is set to accept only string values, so it should be expanded to allow
  null values as well.
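
  The failure can be reproduced directly with the jsonschema library; the
  schema fragments below mirror what the report describes and are not copied
  from glance's actual schema files:

    import jsonschema

    string_only = {"additionalProperties": {"type": "string"}}
    null_or_string = {"additionalProperties": {"type": ["null", "string"]}}

    image = {"description": None}

    try:
        jsonschema.validate(image, string_only)
    except jsonschema.ValidationError as exc:
        print(exc.message)                      # None is not of type 'string'

    jsonschema.validate(image, null_or_string)  # passes once null is allowed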

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1419823/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447215] Re: Schema Missing kernel_id, ramdisk_id causes #1447193

2015-07-24 Thread Jorge Niedbalski
** Changed in: glance
   Status: Fix Committed => Fix Released

** Changed in: glance/kilo
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1447215

Title:
  Schema Missing kernel_id, ramdisk_id causes #1447193

Status in Glance:
  Fix Released
Status in Glance kilo series:
  In Progress
Status in glance package in Ubuntu:
  Confirmed

Bug description:
  [Description]

  
  [Environment]

  - Ubuntu 14.04.2
  - OpenStack Kilo

  ii  glance   1:2015.1~rc1-0ubuntu2~cloud0 all 
 OpenStack Image Registry and Delivery Service - Daemons
  ii  glance-api   1:2015.1~rc1-0ubuntu2~cloud0 all 
 OpenStack Image Registry and Delivery Service - API
  ii  glance-common1:2015.1~rc1-0ubuntu2~cloud0 all 
 OpenStack Image Registry and Delivery Service - Common
  ii  glance-registry  1:2015.1~rc1-0ubuntu2~cloud0 all 
 OpenStack Image Registry and Delivery Service - Registry
  ii  python-glance1:2015.1~rc1-0ubuntu2~cloud0 all 
 OpenStack Image Registry and Delivery Service - Python library
  ii  python-glance-store  0.4.0-0ubuntu1~cloud0all 
 OpenStack Image Service store library - Python 2.x
  ii  python-glanceclient  1:0.15.0-0ubuntu1~cloud0 all 
 Client library for Openstack glance server.

  [Steps to reproduce]

  0) Set enable_v2_api=False in /etc/glance/glance-api.conf
  1) nova boot --flavor m1.small --image base-image --key-name keypair --availability-zone nova --security-groups default snapshot-bug
  2) nova image-create snapshot-bug snapshot-bug-instance

  At this point the created image has no kernel_id (None) and no ramdisk_id
  (None).

  3) Set enable_v2_api=True in glance-api.conf and restart glance-api.

  4) Run an image API v2 client:

  $ glance --os-image-api-version 2 image-list

  This will fail with #1447193

  [Description]

  The schema-image.json file needs to be modified to allow both null and
  string values for these two attributes.
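
  A sketch of the kind of change the report asks for, expressed as a Python
  dict rather than the real schema-image.json contents (the property names
  come from the report; any additional constraints such as a UUID pattern in
  the actual file are not shown here):

    image_schema_null_or_string_properties = {
        "kernel_id": {"type": ["null", "string"]},
        "ramdisk_id": {"type": ["null", "string"]},
    }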

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1447215/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1362863] Re: reply queues fill up with unacked messages

2015-06-22 Thread Jorge Niedbalski
** Also affects: oslo.messaging (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1362863

Title:
  reply queues fill up with unacked messages

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  Invalid
Status in Messaging API for OpenStack:
  Fix Released
Status in oslo.messaging package in Ubuntu:
  New
Status in oslo.messaging source package in Trusty:
  New

Bug description:
  Since upgrading to Icehouse we consistently see reply_x queues filling up
  with unacked messages. To fix this I have to restart the affected service.
  This seems to happen when something goes wrong for a short period of time
  and the service does not clean up after itself.

  So far I've seen the issue with nova-api, nova-compute, nova-network,
  nova-api-metadata and cinder-api, but I'm sure there are others.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1362863/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1419823] Re: Nullable image description crashes v2 client

2015-06-11 Thread Jorge Niedbalski
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Changed in: cloud-archive
   Status: New => Confirmed

** Changed in: cloud-archive
 Assignee: (unassigned) => Jorge Niedbalski (niedbalski)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1419823

Title:
  Nullable image description crashes v2 client

Status in Ubuntu Cloud Archive:
  Confirmed
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Committed
Status in Glance kilo series:
  Fix Committed
Status in glance package in Ubuntu:
  Confirmed

Bug description:
  When the image description is somehow set to None, the glanceclient v2
  image-list crashes (as do image-show and image-update for that particular
  image). The only way to list all images is then to use the v1 client,
  which is more tolerant in this case.

  Steps to reproduce:

  1. Open Horizon and go to the edit page of any image.
  2. Set the description to anything, e.g. 123, and save.
  3. Open the image edit page again, remove the description and save it.
  4. List all images using glanceclient v2: glance --os-image-api-version 2 image-list
  5. Be sad because of the raised exception:

  None is not of type u'string'

  Failed validating u'type' in schema[u'additionalProperties']:
  {u'type': u'string'}

  On instance[u'description']:
  None

  While investigating the issue I found that the additionalProperties schema
  is set to accept only string values, so it should be expanded to allow
  null values as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1419823/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1419823] Re: Nullable image description crashes v2 client

2015-06-09 Thread Jorge Niedbalski
** Also affects: glance (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1419823

Title:
  Nullable image description crashes v2 client

Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Committed
Status in Glance kilo series:
  Confirmed
Status in glance package in Ubuntu:
  New

Bug description:
  When the image description is somehow set to None, the glanceclient v2
  image-list crashes (as do image-show and image-update for that particular
  image). The only way to list all images is then to use the v1 client,
  which is more tolerant in this case.

  Steps to reproduce:

  1. Open Horizon and go to the edit page of any image.
  2. Set the description to anything, e.g. 123, and save.
  3. Open the image edit page again, remove the description and save it.
  4. List all images using glanceclient v2: glance --os-image-api-version 2 image-list
  5. Be sad because of the raised exception:

  None is not of type u'string'

  Failed validating u'type' in schema[u'additionalProperties']:
  {u'type': u'string'}

  On instance[u'description']:
  None

  While investigating the issue I found that the additionalProperties schema
  is set to accept only string values, so it should be expanded to allow
  null values as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1419823/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1452032] [NEW] Device descriptor not removed with different iqn and multipath enabled.

2015-05-05 Thread Jorge Niedbalski
 330020001

** Affects: nova
 Importance: Undecided
 Assignee: Jorge Niedbalski (niedbalski)
 Status: In Progress

** Affects: ubuntu
 Importance: Undecided
 Status: New

** Changed in: nova
   Status: New => In Progress

** Changed in: nova
 Assignee: (unassigned) => Jorge Niedbalski (niedbalski)

** Description changed:

  [Environment]
  
  OpenStack Kilo
  Trusty 14.04.4
  
  [Description]
  
  if the attached multipath devices doesn't have same iqn like regular
  lvm+iscsi backend, in_use will be false.
  
  In that case,_disconnect_volume_multipath_iscsi() returns without
  calling _remove_multipath_device_descriptor().
  
  [Reproduction]
  
  1) Enable cinder LVM ISCSI on /etc/cinder/cinder.conf
  
  volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
  
- 
- 2) Enable iscsi_use_multipath on /etc/nova/nova.conf on your compute nodes:
+ 2) Enable iscsi_use_multipath on /etc/nova/nova.conf on your compute
+ nodes:
  
  iscsi_use_multipath = True
  
- 
- 3) Create 3 cinder volumes  
+ 3) Create 3 cinder volumes
  
  $ cinder create 1
  $ cinder create 1
  $ cinder create 1
  
  $ cinder list
  
  ubuntu@niedbalski2-bastion:~/specs/1374999$ cinder list
  
+--+--+--+--+-+--+--+
  |  ID  |  Status  | Display Name | Size | 
Volume Type | Bootable | Attached to  |
  
+--+--+--+--+-+--+--+
  | 10844be6-8f86-414f-a10e-e1a31e2ba6e7 |  in-use  | None |  1   | 
None|  false   | b0a14447-5740-408a-b96f-a1e904b229e5 |
  | 1648d24c-0d65-4377-9fa5-6d3aeb8b1291 |  in-use  | None |  1   | 
None|  false   | b0a14447-5740-408a-b96f-a1e904b229e5 |
  | 53d6bb4e-2ca2-45ab-9ed1-887b1df2ff8f |  in-use  | None |  1   | 
None|  false   | b0a14447-5740-408a-b96f-a1e904b229e5 |
  
+--+--+--+--+-+--+--+
  
- 
  4) Attach them to nova
  
  $ nova volume-attach instance_id 10844be6-8f86-414f-a10e-e1a31e2ba6e7
  $ nova volume-attach instance_id 1648d24c-0d65-4377-9fa5-6d3aeb8b1291
- $ nova volume-attach instance_id 53d6bb4e-2ca2-45ab-9ed1-887b1df2ff8f 
+ $ nova volume-attach instance_id 53d6bb4e-2ca2-45ab-9ed1-887b1df2ff8f
  
  5) Check on the nova-compute unit for the current multipath/session
  status
  
  tcp: [1] 10.5.1.43:3260,1 
iqn.2010-10.org.openstack:volume-10844be6-8f86-414f-a10e-e1a31e2ba6e7
  tcp: [2] 10.5.1.43:3260,1 
iqn.2010-10.org.openstack:volume-1648d24c-0d65-4377-9fa5-6d3aeb8b1291
  tcp: [3] 10.5.1.43:3260,1 
iqn.2010-10.org.openstack:volume-53d6bb4e-2ca2-45ab-9ed1-887b1df2ff8f
  
  Multipath:
  
  root@juju-1374999-machine-10:/home/ubuntu# multipath -ll
  330030001 dm-2 IET,VIRTUAL-DISK
  size=1.0G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=1 status=active
-   `- 10:0:0:1 sdb 8:16   active ready  running
+   `- 10:0:0:1 sdb 8:16   active ready  running
  330010001 dm-0 IET,VIRTUAL-DISK
  size=1.0G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=1 status=active
-   `- 8:0:0:1  sdg 8:96   active ready  running
+   `- 8:0:0:1  sdg 8:96   active ready  running
  330020001 dm-1 IET,VIRTUAL-DISK
  size=1.0G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=1 status=active
-   `- 9:0:0:1  sda 8:0active ready  running
+   `- 9:0:0:1  sda 8:0active ready  running
  
  6) Detach the current volumes.
  
  First.
  
  ubuntu@niedbalski2-bastion:~/specs/1374999$ nova volume-detach
  b0a14447-5740-408a-b96f-a1e904b229e5 10844be6-8f86-414f-a10e-
  e1a31e2ba6e7
  
- 
  ubuntu@niedbalski2-bastion:~/specs/1374999$ juju ssh 10 sudo multipath -ll
  330030001 dm-2 IET,VIRTUAL-DISK
  size=1.0G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=1 status=active
-   `- 10:0:0:1 sdb 8:16   active ready  running
+   `- 10:0:0:1 sdb 8:16   active ready  running
  330010001 dm-0 ,
  size=1.0G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=0 status=active
-   `- #:#:#:#  -   #:#active faulty running
+   `- #:#:#:#  -   #:#active faulty running
  330020001 dm-1 IET,VIRTUAL-DISK
  size=1.0G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=1 status=active
-   `- 9:0:0:1  sda 8:0active ready  running
+   `- 9:0:0:1  sda 8:0active ready  running
  
  Second raises the faulty state
  
  330030001 dm-2 IET,VIRTUAL-DISK
  size=1.0G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=1 status=active
-   `- 10:0:0:1 sdb 8:16   active ready  running
+   `- 10:0:0:1 sdb 8:16   active ready  running
  330010001 dm-0 ,
  size=1.0G features='0' hwhandler='0

[Yahoo-eng-team] [Bug 1452032] Re: Device descriptor not removed with different iqn and multipath enabled.

2015-05-05 Thread Jorge Niedbalski
** Also affects: ubuntu
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1452032

Title:
  Device descriptor not removed with different iqn and multipath
  enabled.

Status in OpenStack Compute (Nova):
  In Progress
Status in Ubuntu:
  New

Bug description:
  [Environment]

  OpenStack Kilo
  Trusty 14.04.4

  [Description]

  If the attached multipath devices do not have the same IQN (as with the
  regular LVM+iSCSI backend), in_use will be false.

  In that case, _disconnect_volume_multipath_iscsi() returns without
  calling _remove_multipath_device_descriptor().

  [Reproduction]

  1) Enable cinder LVM ISCSI on /etc/cinder/cinder.conf

  volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver

  2) Enable iscsi_use_multipath on /etc/nova/nova.conf on your compute
  nodes:

  iscsi_use_multipath = True

  3) Create 3 cinder volumes

  $ cinder create 1
  $ cinder create 1
  $ cinder create 1

  $ cinder list

  ubuntu@niedbalski2-bastion:~/specs/1374999$ cinder list
  
  +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
  |                  ID                  | Status | Display Name | Size | Volume Type | Bootable |             Attached to              |
  +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
  | 10844be6-8f86-414f-a10e-e1a31e2ba6e7 | in-use |     None     |  1   |     None    |  false   | b0a14447-5740-408a-b96f-a1e904b229e5 |
  | 1648d24c-0d65-4377-9fa5-6d3aeb8b1291 | in-use |     None     |  1   |     None    |  false   | b0a14447-5740-408a-b96f-a1e904b229e5 |
  | 53d6bb4e-2ca2-45ab-9ed1-887b1df2ff8f | in-use |     None     |  1   |     None    |  false   | b0a14447-5740-408a-b96f-a1e904b229e5 |
  +--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+

  4) Attach them to nova

  $ nova volume-attach instance_id 10844be6-8f86-414f-a10e-e1a31e2ba6e7
  $ nova volume-attach instance_id 1648d24c-0d65-4377-9fa5-6d3aeb8b1291
  $ nova volume-attach instance_id 53d6bb4e-2ca2-45ab-9ed1-887b1df2ff8f

  5) Check on the nova-compute unit for the current multipath/session
  status

  tcp: [1] 10.5.1.43:3260,1 
iqn.2010-10.org.openstack:volume-10844be6-8f86-414f-a10e-e1a31e2ba6e7
  tcp: [2] 10.5.1.43:3260,1 
iqn.2010-10.org.openstack:volume-1648d24c-0d65-4377-9fa5-6d3aeb8b1291
  tcp: [3] 10.5.1.43:3260,1 
iqn.2010-10.org.openstack:volume-53d6bb4e-2ca2-45ab-9ed1-887b1df2ff8f

  Multipath:

  root@juju-1374999-machine-10:/home/ubuntu# multipath -ll
  330030001 dm-2 IET,VIRTUAL-DISK
  size=1.0G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=1 status=active
    `- 10:0:0:1 sdb 8:16   active ready  running
  330010001 dm-0 IET,VIRTUAL-DISK
  size=1.0G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=1 status=active
    `- 8:0:0:1  sdg 8:96   active ready  running
  330020001 dm-1 IET,VIRTUAL-DISK
  size=1.0G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=1 status=active
    `- 9:0:0:1  sda 8:0active ready  running

  6) Detach the current volumes.

  First.

  ubuntu@niedbalski2-bastion:~/specs/1374999$ nova volume-detach
  b0a14447-5740-408a-b96f-a1e904b229e5 10844be6-8f86-414f-a10e-
  e1a31e2ba6e7

  ubuntu@niedbalski2-bastion:~/specs/1374999$ juju ssh 10 sudo multipath -ll
  330030001 dm-2 IET,VIRTUAL-DISK
  size=1.0G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=1 status=active
    `- 10:0:0:1 sdb 8:16   active ready  running
  330010001 dm-0 ,
  size=1.0G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=0 status=active
    `- #:#:#:#  -   #:#active faulty running
  330020001 dm-1 IET,VIRTUAL-DISK
  size=1.0G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=1 status=active
    `- 9:0:0:1  sda 8:0active ready  running

  Second raises the faulty state

  330030001 dm-2 IET,VIRTUAL-DISK
  size=1.0G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=1 status=active
    `- 10:0:0:1 sdb 8:16   active ready  running
  330010001 dm-0 ,
  size=1.0G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=0 status=active
    `- #:#:#:#  -   #:#active faulty running
  330020001 dm-1 ,
  size=1.0G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=0 status=active
    `- #:#:#:#  -   #:#active faulty running

  Third, raises the faulty state also

  sudo: unable to resolve host juju-1374999-machine-10
  330030001 dm-2 ,
  size=1.0G features='0' hwhandler='0' wp=rw
  `-+- policy='round-robin 0' prio=0 status=active
    `- #:#:#:# -   

[Yahoo-eng-team] [Bug 1447215] [NEW] Schema Missing kernel_id, ramdisk_id causes #1447193

2015-04-22 Thread Jorge Niedbalski
Public bug reported:

[Description]


[Environment]

- Ubuntu 14.04.2
- OpenStack Kilo

ii  glance   1:2015.1~rc1-0ubuntu2~cloud0 all   
   OpenStack Image Registry and Delivery Service - Daemons
ii  glance-api   1:2015.1~rc1-0ubuntu2~cloud0 all   
   OpenStack Image Registry and Delivery Service - API
ii  glance-common1:2015.1~rc1-0ubuntu2~cloud0 all   
   OpenStack Image Registry and Delivery Service - Common
ii  glance-registry  1:2015.1~rc1-0ubuntu2~cloud0 all   
   OpenStack Image Registry and Delivery Service - Registry
ii  python-glance1:2015.1~rc1-0ubuntu2~cloud0 all   
   OpenStack Image Registry and Delivery Service - Python library
ii  python-glance-store  0.4.0-0ubuntu1~cloud0all   
   OpenStack Image Service store library - Python 2.x
ii  python-glanceclient  1:0.15.0-0ubuntu1~cloud0 all   
   Client library for Openstack glance server.

[Steps to reproduce]

0) Set enable_v2_api=False in /etc/glance/glance-api.conf
1) nova boot --flavor m1.small --image base-image --key-name keypair --availability-zone nova --security-groups default snapshot-bug
2) nova image-create snapshot-bug snapshot-bug-instance

At this point the created image has no kernel_id (None) and no ramdisk_id
(None).

3) Set enable_v2_api=True in glance-api.conf and restart glance-api.

4) Run an image API v2 client:

$ glance --os-image-api-version 2 image-list

This will fail with #1447193

[Description]

The schema-image.json file needs to be modified to allow both null and
string values for these two attributes.

** Affects: glance
 Importance: Undecided
 Assignee: Jorge Niedbalski (niedbalski)
 Status: New

** Affects: glance (Ubuntu)
 Importance: Undecided
 Status: New

** Changed in: glance
 Assignee: (unassigned) => Jorge Niedbalski (niedbalski)

** Also affects: glance (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1447215

Title:
  Schema Missing kernel_id, ramdisk_id causes #1447193

Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in glance package in Ubuntu:
  New

Bug description:
  [Description]

  
  [Environment]

  - Ubuntu 14.04.2
  - OpenStack Kilo

  ii  glance   1:2015.1~rc1-0ubuntu2~cloud0 all 
 OpenStack Image Registry and Delivery Service - Daemons
  ii  glance-api   1:2015.1~rc1-0ubuntu2~cloud0 all 
 OpenStack Image Registry and Delivery Service - API
  ii  glance-common1:2015.1~rc1-0ubuntu2~cloud0 all 
 OpenStack Image Registry and Delivery Service - Common
  ii  glance-registry  1:2015.1~rc1-0ubuntu2~cloud0 all 
 OpenStack Image Registry and Delivery Service - Registry
  ii  python-glance1:2015.1~rc1-0ubuntu2~cloud0 all 
 OpenStack Image Registry and Delivery Service - Python library
  ii  python-glance-store  0.4.0-0ubuntu1~cloud0all 
 OpenStack Image Service store library - Python 2.x
  ii  python-glanceclient  1:0.15.0-0ubuntu1~cloud0 all 
 Client library for Openstack glance server.

  [Steps to reproduce]

  0) Set enable_v2_api=False in /etc/glance/glance-api.conf
  1) nova boot --flavor m1.small --image base-image --key-name keypair --availability-zone nova --security-groups default snapshot-bug
  2) nova image-create snapshot-bug snapshot-bug-instance

  At this point the created image has no kernel_id (None) and no ramdisk_id
  (None).

  3) Set enable_v2_api=True in glance-api.conf and restart glance-api.

  4) Run an image API v2 client:

  $ glance --os-image-api-version 2 image-list

  This will fail with #1447193

  [Description]

  The schema-image.json file needs to be modified to allow both null and
  string values for these two attributes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1447215/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1379476] Re: Timeouts to keystone VIP, sporadic issues with keystone, may be caused by haproxy/corosync

2014-10-09 Thread Jorge Niedbalski
** Changed in: keystone
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1379476

Title:
  Timeouts to keystone VIP, sporadic issues with keystone, may be caused
  by haproxy/corosync

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  The following situation was brought to my attention:

  ConnectionError: HTTPSConnectionPool(host='10.29.41.136', port=5000):
  Max retries exceeded with url: /v2.0/tokens (Caused by <class
  'httplib.BadStatusLine'>: '')

  We had corosync with all 3 nodes in it; then 2 of them suddenly died after
  a little while. We are now running a 1-node keystone cluster.

  In the syslog we can see that keystone is being signalled to terminate at
  the same points where we see failed connections in the apache logs. Even
  though the 2 failed nodes are physically down, it may help to remove them
  from the cluster configuration on the last surviving node. By turning the
  cluster into a one-node cluster we make sure that corosync no longer
  worries about the other nodes. The hope is that this will prevent the
  keystone service from being taken down unexpectedly.

  

  What is the deployment recommendation for configuring keystone together
  with pacemaker + corosync?

  Right now users may be running dhclient on top of a bridge interface as
  the cluster interconnect, for example. Is this a supported configuration?
  Are there any known upstream problems with using DHCP-configured
  interfaces as the cluster interconnect?

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1379476/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1379429] Re: Keystone HA - Corosync - 100% CPU consumption - HA not working

2014-10-09 Thread Jorge Niedbalski
** Changed in: keystone
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1379429

Title:
  Keystone HA - Corosync - 100% CPU consumption - HA not working

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  The following scenario was brought to my attention:
  

  One keystone HA deployment has the following errors:

  Aug 18 07:54:24 r1-keystone pengine[1232]: warning: stage6: Node 
r2-keystone-bk is unclean!
  Aug 18 07:54:24 r1-keystone pengine[1232]: warning: stage6: YOUR RESOURCES 
ARE NOW LIKELY COMPROMISED
  Aug 18 07:54:24 r1-keystone pengine[1232]: error: stage6: ENABLE STONITH TO 
KEEP YOUR RESOURCES SAFE
  Aug 18 07:54:24 r1-keystone pengine[1232]: error: process_pe_message: 
Calculated Transition 2315: /var/lib/pacemaker/pengine/pe-error-0.bz2

  Keystone cluster CIB:

  cib epoch=19 num_updates=0 admin_epoch=0 validate-
  with=pacemaker-1.2 crm_feature_set=3.0.7 cib-last-written=Tue Aug
  12 08:23:19 2014 update-origin=r2-horizon update-client=crmd
  have-quorum=1

  cib epoch=19 num_updates=0 admin_epoch=0 validate-
  with=pacemaker-1.2 crm_feature_set=3.0.7 cib-last-written=Fri Jul
  25 06:02:30 2014 update-origin=r2-horizon update-client=crmd
  have-quorum=1 dc-uuid=169738387

  The CIB has never been synchronized.

  
  What is the deployment recommendation for configuring keystone together
  with pacemaker + corosync?

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1379429/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp