[Yahoo-eng-team] [Bug 1281098] [NEW] Too long tunnel device names

2014-02-17 Thread Viktor Křivák
Public bug reported:

The Open vSwitch Neutron agent creates names for tunnel devices that are too
long. Port names are built as %type-%remoteip.
For example, for a GRE tunnel the name is gre-192.168.201.10, which exceeds
the maximum length for Linux network device names (15 chars).
Open vSwitch accepts this name but creates a failed port with ofport -1, and
the agent then proceeds with an error like this:

2014-02-17 11:25:14.048 22908 ERROR neutron.agent.linux.ovs_lib [-] Unable to 
execute ['ovs-ofctl', 'add-flow', 'br-tun', 
'hard_timeout=0,idle_timeout=0,priority=1,in_port=-1,actions=resubmit(,2)']. 
Exception: 
Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 
'ovs-ofctl', 'add-flow', 'br-tun', 
'hard_timeout=0,idle_timeout=0,priority=1,in_port=-1,actions=resubmit(,2)']
Exit code: 1
Stdout: ''
Stderr: 'ovs-ofctl: -1: negative values not supported for in_port\n'

This bug only affects tunnels to longer remote IP addresses: 10.0.0.1 will
pass, but 192.168.201.10 will fail.
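
To make the length problem concrete, here is a minimal standalone sketch
(not the agent's actual code; names and the hex-suffix mitigation are only
illustrative assumptions):

# Minimal sketch, not the actual Neutron agent code: demonstrates why
# "%type-%remoteip" can exceed the usable Linux device name length
# (IFNAMSIZ is 16 including the trailing NUL, so 15 usable characters).
import binascii
import socket

MAX_DEV_NAME_LEN = 15


def tunnel_port_name(tunnel_type, remote_ip):
    """Build the name the way the report describes and flag overlong ones."""
    name = '%s-%s' % (tunnel_type, remote_ip)
    if len(name) > MAX_DEV_NAME_LEN:
        raise ValueError('%s is %d chars, over the %d char limit'
                         % (name, len(name), MAX_DEV_NAME_LEN))
    return name


def short_tunnel_port_name(tunnel_type, remote_ip):
    """One possible mitigation (an assumption, not necessarily the real fix):
    use the hex form of the IPv4 address, which is always 8 characters."""
    hex_ip = binascii.hexlify(socket.inet_aton(remote_ip)).decode()
    return '%s-%s' % (tunnel_type, hex_ip)


print(tunnel_port_name('gre', '10.0.0.1'))              # gre-10.0.0.1, 12 chars, OK
print(short_tunnel_port_name('gre', '192.168.201.10'))  # gre-c0a8c90a, 12 chars, OK
print(tunnel_port_name('gre', '192.168.201.10'))        # 18 chars, raises ValueError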

Found in HAVANA with:
openvswitch: 1.9.3-1
linux: 3.2.54-2

But I think the length limit applies to all versions.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1281098

Title:
  Too long tunnel device names

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The Open vSwitch Neutron agent creates names for tunnel devices that are too
  long. Port names are built as %type-%remoteip.
  For example, for a GRE tunnel the name is gre-192.168.201.10, which exceeds
  the maximum length for Linux network device names (15 chars).
  Open vSwitch accepts this name but creates a failed port with ofport -1, and
  the agent then proceeds with an error like this:

  2014-02-17 11:25:14.048 22908 ERROR neutron.agent.linux.ovs_lib [-] Unable to 
execute ['ovs-ofctl', 'add-flow', 'br-tun', 
'hard_timeout=0,idle_timeout=0,priority=1,in_port=-1,actions=resubmit(,2)']. 
Exception: 
  Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 
'ovs-ofctl', 'add-flow', 'br-tun', 
'hard_timeout=0,idle_timeout=0,priority=1,in_port=-1,actions=resubmit(,2)']
  Exit code: 1
  Stdout: ''
  Stderr: 'ovs-ofctl: -1: negative values not supported for in_port\n'

  This bug only affects tunnels to longer remote IP addresses: 10.0.0.1 will
  pass, but 192.168.201.10 will fail.

  Found in HAVANA with:
  openvswitch: 1.9.3-1
  linux: 3.2.54-2

  But I think the length limit applies to all versions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1281098/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2077009] [NEW] Nova doesn't clean claims after evacuation

2024-08-14 Thread Viktor Křivák
Public bug reported:

When a VM is evacuated from a failed server without stating an explicit
destination (i.e. letting the scheduler decide), the claims against the old
hypervisor in placement are never deleted.

How to replicate:
- Place a VM on a hypervisor
- Check that the claims are OK:
+--------------------------------------+------------+-------------------------------+----------------------------------+----------------------------------+
| resource_provider                    | generation | resources                     | project_id                       | user_id                          |
+--------------------------------------+------------+-------------------------------+----------------------------------+----------------------------------+
| 229cce5f-3b87-438a-baa9-539be0fc9bd8 |          5 | {'VCPU': 1, 'MEMORY_MB': 256} | 4facfb06808a4621b4f47123a0184a4a | 15da82817e56446198fcdd870a45d8f4 |
+--------------------------------------+------------+-------------------------------+----------------------------------+----------------------------------+
- Stop the hypervisor and, once Nova marks it as down, run the evacuation
  without stating a destination
- Check claims again
+--------------------------------------+------------+-------------------------------+----------------------------------+----------------------------------+
| resource_provider                    | generation | resources                     | project_id                       | user_id                          |
+--------------------------------------+------------+-------------------------------+----------------------------------+----------------------------------+
| 229cce5f-3b87-438a-baa9-539be0fc9bd8 |          6 | {'VCPU': 1, 'MEMORY_MB': 256} | 4facfb06808a4621b4f47123a0184a4a | 15da82817e56446198fcdd870a45d8f4 |
| 5395932e-b5e0-4a0c-be6a-7328af751642 |         14 | {'VCPU': 1, 'MEMORY_MB': 256} | 4facfb06808a4621b4f47123a0184a4a | 15da82817e56446198fcdd870a45d8f4 |
+--------------------------------------+------------+-------------------------------+----------------------------------+----------------------------------+


Result: the claims against the old hypervisor have not been deleted
Expected result: only claims for the new hypervisor exist
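
For reference, a rough sketch of how the leftover claim can be spotted
programmatically. This is only an illustration: the endpoint URL and token
below are placeholders/assumptions, while GET /allocations/{consumer_uuid}
is a standard Placement API call.

# Rough sketch: list the resource providers that still hold allocations for
# the server and flag any provider other than its current host.
import requests

PLACEMENT_URL = 'http://placement.example.com'   # assumption: your endpoint
TOKEN = '...'                                    # assumption: a valid token


def stale_providers(server_uuid, current_provider_uuid):
    resp = requests.get(
        '%s/allocations/%s' % (PLACEMENT_URL, server_uuid),
        headers={'X-Auth-Token': TOKEN,
                 'OpenStack-API-Version': 'placement 1.28'})
    resp.raise_for_status()
    allocations = resp.json().get('allocations', {})
    # Any provider other than the current host still holds a stale claim.
    return [rp for rp in allocations if rp != current_provider_uuid]


# After the evacuation above, the old provider
# 229cce5f-3b87-438a-baa9-539be0fc9bd8 would show up as stale.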


This is possibly a regression of https://bugs.launchpad.net/nova/+bug/1896463.
It probably happened when the resource tracker was improved and the whole
migration procedure was rewritten. Migration/resize works because claim
deletion happens in the confirm/revert action; evacuation, however, has no
equivalent step, so the old claims are never deleted.
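
Purely as an illustration of that asymmetry (the class and function names
below are invented for the example; this is not Nova's actual code):

# Invented, self-contained illustration: resize/migrate has a confirm/revert
# step that releases the source-host claim, evacuate has no equivalent step.

class FakePlacement(object):
    def __init__(self):
        self.allocations = {}          # (consumer, provider) -> resources

    def claim(self, consumer, provider, resources):
        self.allocations[(consumer, provider)] = resources

    def release(self, consumer, provider):
        self.allocations.pop((consumer, provider), None)


def confirm_resize(placement, server, source_rp):
    # migration/resize: confirming (or reverting) drops the old claim
    placement.release(server, source_rp)


def evacuate(placement, server, dest_rp, resources):
    # evacuation: a claim is made on the destination host, but there is no
    # later confirm-style step, so the claim on the failed source host stays
    placement.claim(server, dest_rp, resources)


p = FakePlacement()
p.claim('vm-1', 'old-hypervisor', {'VCPU': 1, 'MEMORY_MB': 256})
evacuate(p, 'vm-1', 'new-hypervisor', {'VCPU': 1, 'MEMORY_MB': 256})
print(p.allocations)   # both providers still hold a claim for vm-1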

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2077009

Title:
  Nova doesn't clean claims after evacuation

Status in OpenStack Compute (nova):
  New

Bug description:
  When a VM is evacuated from a failed server without stating an explicit
  destination (i.e. letting the scheduler decide), the claims against the old
  hypervisor in placement are never deleted.

  How to replicate:
  - Place a VM on a hypervisor
  - Check that the claims are OK:
  
  +--------------------------------------+------------+-------------------------------+----------------------------------+----------------------------------+
  | resource_provider                    | generation | resources                     | project_id                       | user_id                          |
  +--------------------------------------+------------+-------------------------------+----------------------------------+----------------------------------+
  | 229cce5f-3b87-438a-baa9-539be0fc9bd8 |          5 | {'VCPU': 1, 'MEMORY_MB': 256} | 4facfb06808a4621b4f47123a0184a4a | 15da82817e56446198fcdd870a45d8f4 |
  +--------------------------------------+------------+-------------------------------+----------------------------------+----------------------------------+
  - Stop the hypervisor and, once Nova marks it as down, run the evacuation
    without stating a destination
  - Check claims again
  
  +--------------------------------------+------------+-------------------------------+----------------------------------+----------------------------------+
  | resource_provider                    | generation | resources                     | project_id                       | user_id                          |
  +--------------------------------------+------------+-------------------------------+----------------------------------+----------------------------------+
  | 229cce5f-3b87-438a-baa9-539be0fc9bd8 |          6 | {'VCPU': 1, 'MEMORY_MB': 256} | 4facfb06808a4621b4f47123a0184a4a | 15da82817e56446198fcdd870a45d8f4 |
  | 5395932e-b5e0-4a0c-be6a-7328af751642 |         14 | {'VCPU': 1, 'MEMORY_MB': 256} | 4facfb06808a4621b4f47123a0184a4a | 15da82817e56446198fcdd870a45d8f4 |
  +--------------------------------------+------------+-------------------------------+----------------------------------+----------------------------------+

[Yahoo-eng-team] [Bug 1794718] [NEW] Neutron VPNAAS doesn't update site connections on python3

2018-09-27 Thread Viktor Křivák
Public bug reported:

Tested with StrongSwan, but I suspect this can cause issues with other
drivers too.

On Python 3, when a new connection is created it gets stuck in the
PENDING_CREATE state. Everything works, but the connection state is never
updated. The main reason is that the agent sends the wrong ids to the server.
On Python 3 the message looks like this:

[{'updated_pending_status': False, 'ipsec_site_connections': {'a':
{'updated_pending_status': False, 'status': 'ACTIVE'}}, 'status':
'ACTIVE', 'id': 'a621a382-308d-4cd0-be0a-01c757064a13'},
{'updated_pending_status': False, 'ipsec_site_connections': {'a':
{'updated_pending_status': False, 'status': 'ACTIVE'}}, 'status':
'ACTIVE', 'id': 'd004c466-cc36-4b6b-8aa3-84d7e45569ad'}]

and on Python 2:

[{'status': 'ACTIVE', 'ipsec_site_connections': {u'7e14400a-60df-48d8
-91aa-ec97749555fc': {'status': 'ACTIVE', 'updated_pending_status':
False}}, 'updated_pending_status': False, 'id': u'c903732e-
67da-4363-baf1-0cdcb7476ee7'}, {'status': 'ACTIVE',
'ipsec_site_connections': {u'70671513-e0cf-4bdf-845e-cb6ef084baea':
{'status': 'ACTIVE', 'updated_pending_status': True}},
'updated_pending_status': True, 'id': u'995ed22c-
00c3-4496-b590-b84787ba6caa'}]

Notice the UUID keys in ipsec_site_connections. The problem is that these
UUIDs are parsed from a subprocess, and on Python 3 the subprocess returns
bytes instead of str, so the whole output parsing ends up broken.

We were able to fix this issue by patching netns_wrapper:


diff --git a/neutron_vpnaas/services/vpn/common/netns_wrapper.py b/neutron_vpnaas/services/vpn/common/netns_wrapper.py
index 77378dcc7..35614a717 100644
--- a/neutron_vpnaas/services/vpn/common/netns_wrapper.py
+++ b/neutron_vpnaas/services/vpn/common/netns_wrapper.py
@@ -23,6 +23,7 @@ from neutron.common import utils
 from oslo_config import cfg
 from oslo_log import log as logging
 from oslo_rootwrap import wrapper
+from neutron_lib.utils import helpers
 import six
 
 from neutron_vpnaas._i18n import _
@@ -67,6 +68,8 @@ def execute(cmd):
                            env=env)
 
     _stdout, _stderr = obj.communicate()
+    _stdout = helpers.safe_decode_utf8(_stdout)
+    _stderr = helpers.safe_decode_utf8(_stderr)
     msg = ('Command: %(cmd)s Exit code: %(returncode)s '
            'Stdout: %(stdout)s Stderr: %(stderr)s' %
            {'cmd': cmd,

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: python3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1794718

Title:
  Neutron VPNAAS doesn't update site connections on python3

Status in neutron:
  New

Bug description:
  Tested with StrongSwan, but I suspect this can cause issues with other
  drivers too.

  On Python 3, when a new connection is created it gets stuck in the
  PENDING_CREATE state. Everything works, but the connection state is never
  updated. The main reason is that the agent sends the wrong ids to the
  server. On Python 3 the message looks like this:

  [{'updated_pending_status': False, 'ipsec_site_connections': {'a':
  {'updated_pending_status': False, 'status': 'ACTIVE'}}, 'status':
  'ACTIVE', 'id': 'a621a382-308d-4cd0-be0a-01c757064a13'},
  {'updated_pending_status': False, 'ipsec_site_connections': {'a':
  {'updated_pending_status': False, 'status': 'ACTIVE'}}, 'status':
  'ACTIVE', 'id': 'd004c466-cc36-4b6b-8aa3-84d7e45569ad'}]

  and on Python 2:

  [{'status': 'ACTIVE', 'ipsec_site_connections': {u'7e14400a-60df-48d8
  -91aa-ec97749555fc': {'status': 'ACTIVE', 'updated_pending_status':
  False}}, 'updated_pending_status': False, 'id': u'c903732e-
  67da-4363-baf1-0cdcb7476ee7'}, {'status': 'ACTIVE',
  'ipsec_site_connections': {u'70671513-e0cf-4bdf-845e-cb6ef084baea':
  {'status': 'ACTIVE', 'updated_pending_status': True}},
  'updated_pending_status': True, 'id': u'995ed22c-
  00c3-4496-b590-b84787ba6caa'}]

  Notice the UUID keys in ipsec_site_connections. The problem is that these
  UUIDs are parsed from a subprocess, and on Python 3 the subprocess returns
  bytes instead of str, so the whole output parsing ends up broken.

  We were able to fix this issue by patching netns_wrapper:

  
  diff --git a/neutron_vpnaas/services/vpn/common/netns_wrapper.py b/neutron_vpnaas/services/vpn/common/netns_wrapper.py
  index 77378dcc7..35614a717 100644
  --- a/neutron_vpnaas/services/vpn/common/netns_wrapper.py
  +++ b/neutron_vpnaas/services/vpn/common/netns_wrapper.py
  @@ -23,6 +23,7 @@ from neutron.common import utils
   from oslo_config import cfg
   from oslo_log import log as logging
   from oslo_rootwrap import wrapper
  +from neutron_lib.utils import helpers
   import six
   
   from neutron_vpnaas._i18n import _
  @@ -67,6 +68,8 @@ def execute(cmd):
                              env=env)
   
       _stdout, _stderr = obj.communicate()
  +    _stdout = helpers.safe_decode_utf8(_stdout)
  +    _stderr = helpers.safe_decode_utf8(_stderr)
       msg = ('Command: %(cmd)s Exit code: %(returncode)s '
              'Stdout: %(stdout)s Stderr: %(stderr)s' %