[Yahoo-eng-team] [Bug 1709869] [NEW] test_convert_default_subnetpool_to_non_default fails: Subnet pool could not be found

2017-08-10 Thread Itzik Brown
Public bug reported:

Running test_convert_default_subnetpool_to_non_default fails with the
following error:

Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/tempest/test.py", line 122, in wrapper
return func(*func_args, **func_kwargs)
  File 
"/usr/lib/python2.7/site-packages/neutron/tests/tempest/api/test_subnetpools.py",
 line 392, in test_convert_default_subnetpool_to_non_default
show_body = self.client.show_subnetpool(subnetpool_id)
  File 
"/usr/lib/python2.7/site-packages/neutron/tests/tempest/services/network/json/network_client.py",
 line 136, in _show
resp, body = self.get(uri)
  File "/usr/lib/python2.7/site-packages/tempest/lib/common/rest_client.py", 
line 285, in get
return self.request('GET', url, extra_headers, headers)
  File "/usr/lib/python2.7/site-packages/tempest/lib/common/rest_client.py", 
line 659, in request
self._error_checker(resp, resp_body)
  File "/usr/lib/python2.7/site-packages/tempest/lib/common/rest_client.py", 
line 765, in _error_checker
raise exceptions.NotFound(resp_body, resp=resp)
tempest.lib.exceptions.NotFound: Object not found
Details: {u'message': u'Subnet pool 428fbe89-0282-4587-a59b-be3957d5c701 could 
not be found.', u'type': u'SubnetPoolNotFound', u'detail': u''}

Version
===
Pike
python-neutron-tests-11.0.0-0.20170804190459.el7ost.noarch
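
For reference, the failing flow can be approximated outside tempest with
python-neutronclient. This is only a hedged sketch (client setup, pool name
and prefix are placeholders), not the tempest test itself:

# Hedged sketch: create a default subnet pool, convert it to non-default and
# read it back -- the tempest run fails on the final show with
# SubnetPoolNotFound.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

pool = neutron.create_subnetpool(
    {'subnetpool': {'name': 'default-pool',
                    'prefixes': ['192.0.2.0/24'],
                    'is_default': True}})['subnetpool']

neutron.update_subnetpool(pool['id'], {'subnetpool': {'is_default': False}})
shown = neutron.show_subnetpool(pool['id'])['subnetpool']
assert shown['is_default'] is False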

** Affects: neutron
 Importance: Undecided
 Assignee: Itzik Brown (itzikb1)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1709869

Title:
  test_convert_default_subnetpool_to_non_default fails: Subnet pool
  could not be found

Status in neutron:
  In Progress

Bug description:
  Running test_convert_default_subnetpool_to_non_default fails with the
  following error:

  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/tempest/test.py", line 122, in 
wrapper
  return func(*func_args, **func_kwargs)
File 
"/usr/lib/python2.7/site-packages/neutron/tests/tempest/api/test_subnetpools.py",
 line 392, in test_convert_default_subnetpool_to_non_default
  show_body = self.client.show_subnetpool(subnetpool_id)
File 
"/usr/lib/python2.7/site-packages/neutron/tests/tempest/services/network/json/network_client.py",
 line 136, in _show
  resp, body = self.get(uri)
File "/usr/lib/python2.7/site-packages/tempest/lib/common/rest_client.py", 
line 285, in get
  return self.request('GET', url, extra_headers, headers)
File "/usr/lib/python2.7/site-packages/tempest/lib/common/rest_client.py", 
line 659, in request
  self._error_checker(resp, resp_body)
File "/usr/lib/python2.7/site-packages/tempest/lib/common/rest_client.py", 
line 765, in _error_checker
  raise exceptions.NotFound(resp_body, resp=resp)
  tempest.lib.exceptions.NotFound: Object not found
  Details: {u'message': u'Subnet pool 428fbe89-0282-4587-a59b-be3957d5c701 
could not be found.', u'type': u'SubnetPoolNotFound', u'detail': u''}

  Version
  ===
  Pike
  python-neutron-tests-11.0.0-0.20170804190459.el7ost.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1709869/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1696664] [NEW] Order of the interfaces is not honored when using virt and SR-IOV interfaces

2017-06-08 Thread Itzik Brown
Public bug reported:

When launching an instance using the following:
# nova boot --flavor m1.small --image  --nic net-id= --nic 
port-id= vm1

Where the first interface is a non-SR-IOV port and the second one is an
SR-IOV port, the order is not preserved, i.e. the first interface of the
instance is the SR-IOV port.

Version:
openstack-nova-compute-15.0.3-3.el7ost.noarch
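
A hedged python-novaclient sketch of the boot in question, with an explicit
NIC order and a check of what Nova reports back (all IDs and credentials are
placeholders, not values from this report):

# Hedged sketch: boot with a virtio network first and an SR-IOV port second,
# then list the attached interfaces to compare against the in-guest order.
from novaclient import client

nova = client.Client('2', 'admin', 'secret', 'admin',
                     'http://controller:5000/v2.0')

server = nova.servers.create(
    name='vm1',
    image='<image-id>',
    flavor='<flavor-id>',
    nics=[{'net-id': '<virtio-net-id>'},      # intended first interface
          {'port-id': '<sriov-port-id>'}])    # intended second interface

for iface in nova.servers.interface_list(server.id):
    print(iface.port_id, iface.net_id, iface.mac_addr)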

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1696664

Title:
  Order of the interfaces is not honored when using virt and SR-IOV
  interfaces

Status in OpenStack Compute (nova):
  New

Bug description:
  When launching an instance using the following:
  # nova boot --flavor m1.small --image  --nic net-id= --nic 
port-id= vm1

  Where the first interface is a non-SR-IOV port and the second one is an
  SR-IOV port, the order is not preserved, i.e. the first interface of the
  instance is the SR-IOV port.

  Version:
  openstack-nova-compute-15.0.3-3.el7ost.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1696664/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1691004] [NEW] Adding a scenario test to check security group rules with TCP and UDP port ranges

2017-05-15 Thread Itzik Brown
Public bug reported:

The scenario should be:
1. Launching an instance with services listening on TCP and UDP port ranges.
2. Checking that there is no connectivity to these services.
3. Adding security group rules with TCP and UDP port ranges and checking connectivity (see the sketch below).
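
A hedged python-neutronclient sketch of step 3 only (security group ID,
port range and credentials are placeholders):

# Hedged sketch: add ingress TCP and UDP rules with a port range to an
# existing security group.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

for protocol in ('tcp', 'udp'):
    neutron.create_security_group_rule(
        {'security_group_rule': {'security_group_id': '<sg-id>',
                                 'direction': 'ingress',
                                 'protocol': protocol,
                                 'port_range_min': 5000,
                                 'port_range_max': 5010,
                                 'remote_ip_prefix': '0.0.0.0/0'}})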

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1691004

Title:
  Adding a scenario test to check security group rules with TCP and UDP
  port ranges

Status in neutron:
  New

Bug description:
  The scenario should be:
  1. Launching an instance with services listening on TCP and UDP port ranges.
  2. Checking that there is no connectivity to these services.
  3. Adding security group rules with TCP and UDP port ranges and checking connectivity.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1691004/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1690998] [NEW] Adding a test to check for using multiple security groups

2017-05-15 Thread Itzik Brown
Public bug reported:

The scenario:

1. Create a port.
2. Attach a security group with ICMP and SSH rules to the port and launch
an instance with this port.
3. Add another security group with UDP and ICMP rules (see the sketch below).
4. Check ICMP/UDP and SSH.
5. Remove the first security group and check that SSH fails but ICMP/UDP still works.
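
A hedged python-neutronclient sketch of steps 3 and 5 (port and security
group IDs are placeholders):

# Hedged sketch: attach both security groups to the port, then keep only the
# second one; per the scenario, SSH should then fail while ICMP/UDP still work.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

# Step 3: the port now carries the ICMP/SSH group plus the UDP/ICMP group.
neutron.update_port('<port-id>',
                    {'port': {'security_groups': ['<sg-icmp-ssh>',
                                                  '<sg-udp-icmp>']}})

# Step 5: remove the first group by updating the list to the second one only.
neutron.update_port('<port-id>',
                    {'port': {'security_groups': ['<sg-udp-icmp>']}})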

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1690998

Title:
  Adding a test to check for using multiple security groups

Status in neutron:
  New

Bug description:
  The scenario:

  1. Create a port.
  2. Attach a security group with ICMP and SSH rules to the port and launch
  an instance with this port.
  3. Add another security group with UDP and ICMP rules.
  4. Check ICMP/UDP and SSH.
  5. Remove the first security group and check that SSH fails but ICMP/UDP still works.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1690998/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1690997] [NEW] Additional tests to check default security group behaviour

2017-05-15 Thread Itzik Brown
Public bug reported:

There is a need to test the following when using the default security group:
Having two instances with the default security group:
  a. The two instances of the same tenant can reach each other.
  b. After adding an SSH rule, the tester node can SSH into the instances but ping fails.
  c. It's possible to ping the external network from the instances.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1690997

Title:
  Additional tests to check default security group behaviour

Status in neutron:
  New

Bug description:
  There is a need to test the following when using the default security group:
  Having two instances with the default security group:
    a. The two instances of the same tenant can reach each other.
    b. After adding an SSH rule, the tester node can SSH into the instances but ping fails.
    c. It's possible to ping the external network from the instances.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1690997/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1251224] Re: ICMP security group rules should have a type and code params instead of using "--port-range-min" and "--port-range-max"

2016-09-19 Thread Itzik Brown
Still relevant in Newton.

** Changed in: neutron
   Status: Expired => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1251224

Title:
  ICMP security group rules should have a type and code params instead
  of using "--port-range-min" and "--port-range-max"

Status in neutron:
  Confirmed

Bug description:
  Version
  ==
  Havana on rhel

  Description
  =
  I couldn't find a doc specifying whether ICMP security group rules ignore the
  "--port-range-min" and "--port-range-max" params or use them as code and type
  as we know from nova security group rules.
  I think that it should be:

  i. Well documented.
  ii. The use of "--port-range-min" and "--port-range-max" should be prohibited
  in the ICMP rule context; new switches should be created for code and type.
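
  For context, the current API reuses port_range_min as the ICMP type and
  port_range_max as the ICMP code. A hedged python-neutronclient sketch of a
  rule allowing echo requests (security group ID and credentials are
  placeholders):

# Hedged sketch: an ICMP rule expressed through the port-range fields, which
# is exactly the overloading this bug asks to replace with named parameters.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

neutron.create_security_group_rule(
    {'security_group_rule': {'security_group_id': '<sg-id>',
                             'direction': 'ingress',
                             'protocol': 'icmp',
                             'port_range_min': 8,   # ICMP type (echo request)
                             'port_range_max': 0,   # ICMP code
                             'remote_ip_prefix': '0.0.0.0/0'}})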

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1251224/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1536540] [NEW] AttributeError: 'NoneType' object has no attribute 'port_name' when deleting an instance with QoS policy attached

2016-01-21 Thread Itzik Brown
Public bug reported:

After deleting an instance with a port that has a QoS policy attached, the
following trace occurs in the OVS agent log:

2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager 
Traceback (most recent call last):
2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l2/extensions/manager.py", line 
77, in delete_port
2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager 
extension.obj.delete_port(context, data)
2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l2/extensions/qos.py", line 
239, in delete_port
2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager 
self._process_reset_port(port)
2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l2/extensions/qos.py", line 
254, in _process_reset_port
2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager 
self.qos_driver.delete(port)
2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l2/extensions/qos.py", line 89, 
in delete
2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager 
self._handle_rule_delete(port, rule_type)
2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager   File 
"/usr/lib/python2.7/site-packages/neutron/agent/l2/extensions/qos.py", line 
104, in _handle_rule_delete
2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager 
handler(port)
2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/extension_drivers/qos_driver.py",
 line 49, in delete_bandwidth_limit
2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager 
port_name = port['vif_port'].port_name
2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager 
AttributeError: 'NoneType' object has no attribute 'port_name'
2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager 
2016-01-21 04:02:22.636 21316 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-b605ce4f-c832-4d8c-a7a7-ee8b89f47e4a - - - - -] port_unbound(): net_uuid 
None not in local_vlan_map
2016-01-21 04:02:22.637 21316 INFO neutron.agent.securitygroups_rpc 
[req-b605ce4f-c832-4d8c-a7a7-ee8b8

How to reproduce
===
1. Enable QoS
2. Create a QoS policy and a rule
3. Launch an instance 
4. Attach the QoS policy to the port of the instance
5. Delete the instance and check the OVS agent's log

Version
==
RHEL7.2
Liberty
python-neutron-7.0.1-6.el7ost.noarch
openstack-neutron-ml2-7.0.1-6.el7ost.noarch
openstack-neutron-openvswitch-7.0.1-6.el7ost.noarch
openstack-neutron-common-7.0.1-6.el7ost.noarch
openstack-neutron-7.0.1-6.el7ost.noarch
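
The AttributeError comes from dereferencing port['vif_port'].port_name after
the VIF has already been unplugged. A hedged sketch of the kind of guard that
avoids the trace (an illustration only, not the actual upstream fix):

# Hedged sketch: resolve the OVS port name defensively so the QoS driver can
# skip the cleanup when the VIF is already gone.
def resolve_port_name(port):
    vif_port = port.get('vif_port')
    if vif_port is None:
        return None          # VIF already unplugged, nothing left to reset
    return vif_port.port_name

# delete_bandwidth_limit() would then return early when resolve_port_name()
# yields None instead of raising AttributeError.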

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: qos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1536540

Title:
  AttributeError: 'NoneType' object has no attribute 'port_name' when
  deleting an instance with QoS policy attached

Status in neutron:
  New

Bug description:
  After deleting an instance with a port that has a QoS policy attached,
  the following trace occurs in the OVS agent log:

  2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager 
Traceback (most recent call last):
  2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager   
File "/usr/lib/python2.7/site-packages/neutron/agent/l2/extensions/manager.py", 
line 77, in delete_port
  2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager 
extension.obj.delete_port(context, data)
  2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager   
File "/usr/lib/python2.7/site-packages/neutron/agent/l2/extensions/qos.py", 
line 239, in delete_port
  2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager 
self._process_reset_port(port)
  2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager   
File "/usr/lib/python2.7/site-packages/neutron/agent/l2/extensions/qos.py", 
line 254, in _process_reset_port
  2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager 
self.qos_driver.delete(port)
  2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager   
File "/usr/lib/python2.7/site-packages/neutron/agent/l2/extensions/qos.py", 
line 89, in delete
  2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager 
self._handle_rule_delete(port, rule_type)
  2016-01-21 04:02:22.634 21316 ERROR neutron.agent.l2.extensions.manager   
File "/usr/lib/python2.7/site-packages/neutron/agent/l2/extensions/qos

[Yahoo-eng-team] [Bug 1515533] [NEW] QoS policy is not enforced when using a previously used port

2015-11-12 Thread Itzik Brown
Public bug reported:

When reusing a port that has a QoS policy attached, the policy is not
enforced.

Version
===
CentOS  7.1 RDO Liberty
openstack-neutron-7.0.1-dev28.el7.centos.noarch
openstack-neutron-common-7.0.1-dev28.el7.centos.noarch
python-neutron-7.0.1-dev28.el7.centos.noarch
openstack-neutron-ml2-7.0.1-dev28.el7.centos.noarch
openstack-neutron-openvswitch-7.0.1-dev28.el7.centos.noarch

How to reproduce

1. Create a QoS policy
2. Create a QoS bandwidth rule
3. Create a port
4. Associate the port with a policy
5. Launch the instance with the above port attached
6. Verify the policy is enforced
7. Delete the instance
8. Launch an instance with the above port attached
9. Verify that the QoS policy is not enforced
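
A hedged python-neutronclient sketch of steps 1-4, assuming the Liberty-era
QoS client calls (network ID, limits and credentials are placeholders):

# Hedged sketch: create a policy with a bandwidth-limit rule and a port bound
# to it; the port is then attached to vm1, reused for a second instance after
# vm1 is deleted, and per the report the limit is no longer applied.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

policy = neutron.create_qos_policy({'policy': {'name': 'bw-limit'}})['policy']
neutron.create_bandwidth_limit_rule(
    policy['id'],
    {'bandwidth_limit_rule': {'max_kbps': 3000, 'max_burst_kbps': 300}})

port = neutron.create_port(
    {'port': {'network_id': '<net-id>',
              'qos_policy_id': policy['id']}})['port']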

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1515533

Title:
  QoS policy is not enforced when using a previously used port

Status in neutron:
  New

Bug description:
  When reusing a port that has a QoS policy attached, the policy is not
  enforced.

  Version
  ===
  CentOS  7.1 RDO Liberty
  openstack-neutron-7.0.1-dev28.el7.centos.noarch
  openstack-neutron-common-7.0.1-dev28.el7.centos.noarch
  python-neutron-7.0.1-dev28.el7.centos.noarch
  openstack-neutron-ml2-7.0.1-dev28.el7.centos.noarch
  openstack-neutron-openvswitch-7.0.1-dev28.el7.centos.noarch

  How to reproduce
  
  1. Create a QoS policy
  2. Create a QoS bandwidth rule
  3. Create a port
  4. Associate the port with a policy
  5. Launch the instance with the above port attached
  6. Verify the policy is enforced
  7. Delete the instance
  8. Launch an instance with the above port attached
  9. Verify that the QoS policy is not enforced

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1515533/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1497969] [NEW] L2 Agent doesn't expose loaded extensions

2015-09-21 Thread Itzik Brown
Public bug reported:

Using the Open vSwitch L2 agent and configuring extensions=qos under the
[agent] section in /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini.

Running
# neutron agent-show 

should show the loaded extensions.
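
A hedged python-neutronclient sketch of the check the reporter wants to be
able to make; the 'extensions' key in the agent's configurations dict is the
desired behaviour, not something the API returned at the time:

# Hedged sketch: look for the loaded L2 extensions in each OVS agent's
# 'configurations' blob via the agents API.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

agents = neutron.list_agents(binary='neutron-openvswitch-agent')['agents']
for agent in agents:
    print(agent['host'],
          agent['configurations'].get('extensions', 'not reported'))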

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1497969

Title:
  L2 Agent doesn't expose loaded extensions

Status in neutron:
  New

Bug description:
  Using the Open vSwitch L2 agent and configuring extensions=qos under the
  [agent] section in /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini.

  Running
  # neutron agent-show 

  should show the loaded extensions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1497969/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490900] [NEW] Update onlink routes when subnet is added to an external network

2015-09-01 Thread Itzik Brown
Public bug reported:

When adding a new subnet to an external network that is connected to a router,
the onlink route is not added.
After restarting the Neutron L3 agent, the route is added.

Please refer to 
https://bugs.launchpad.net/neutron/+bug/1312467 
for additional information regarding the onlink routes.

When adding an external network with multiple subnets to a router the
routes are added.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1490900

Title:
  Update onlink routes when subnet is added to an external network

Status in neutron:
  New

Bug description:
  When adding a new subnet to an external network that is connected to a router,
  the onlink route is not added.
  After restarting the Neutron L3 agent, the route is added.

  Please refer to 
  https://bugs.launchpad.net/neutron/+bug/1312467 
  for additional information regarding the onlink routes.

  When adding an external network with multiple subnets to a router the
  routes are added.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1490900/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1403003] Re: No error/warning raised when attempting to re-upload image data

2015-07-14 Thread Itzik Brown
jelly,
Thanks. You are right - it's fixed.

** Changed in: glance
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1403003

Title:
  No error/warning raised when attempting to re-upload image data

Status in Glance:
  Fix Released

Bug description:
  When modifying an image file and then updating the image by using
  glance image-update --file , the image is not actually updated.

  How to reproduce
  
  Download an image and upload it:

  # wget 
http://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/x86_64/Fedora-Cloud-Base-20141203-21.x86_64.qcow2
  # glance image-create --name fedora21b --disk-format qcow2  
--container-format bare --is-public True --file 
/tmp/Fedora-Cloud-Base-20141203-21.x86_64.qcow2

  Create some dummy file in /tmp/dummy and modify the image
  #  virt-copy-in -a Fedora-Cloud-Base-20141203-21.x86_64.qcow2 /tmp/dummy /etc

  Update the image:
  #glance image-update --file Fedora-Cloud-Base-20141203-21.x86_64.qcow2  
fedora21

  Verify the image is not updated by comparing the checksum
  # md5sum /var/lib/glance/images/bd84ac96-c2a8-4268-a19c-a0e69c703baf
  # md5sum Fedora-Cloud-Base-20141203-21.x86_64.qcow2

  When using --checksum, the checksum in the image properties is updated but
  the image itself is not:
  # glance image-update --file Fedora-Cloud-Base-20141203-21.x86_64.qcow2 --checksum 2c98b17b3f27d14e2e7a840fef464cfe fedora21

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1403003/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470363] [NEW] pci_passthrough_whitelist should support single quotes for keys and values

2015-07-01 Thread Itzik Brown
Public bug reported:

When having the following in /etc/nova/nova.conf

pci_passthrough_whitelist={'devname':'enp5s0f1',
'physical_network':'physnet2'}

Nova compute fails to start and I get the error:

2015-07-01 09:48:03.610 4791 ERROR nova.openstack.common.threadgroup 
[req-b86e5da5-a24e-4eb6-bebd-0ec36fc08021 - - - - -] Invalid PCI devices 
Whitelist config Invalid entry: '{'devname':'enp5s0f1', 
'physical_network':'physnet2'}'
2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup Traceback 
(most recent call last):
2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py", line 
145, in wait
2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup 
x.wait()
2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py", line 
47, in wait
2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup return 
self.thread.wait()
2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 175, in wait
2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup return 
self._exit_event.wait()
2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/eventlet/event.py", line 121, in wait
2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup return 
hubs.get_hub().switch()
2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 294, in switch
2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup return 
self.greenlet.switch()
2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 214, in main
2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup result 
= function(*args, **kwargs)
2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/service.py", line 497, 
in run_service
2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup 
service.start()
2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/service.py", line 183, in start
2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup 
self.manager.pre_start_hook()
2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1291, in 
pre_start_hook
2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup 
self.update_available_resource(nova.context.get_admin_context())
2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6246, in 
update_available_resource
2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup rt = 
self._get_resource_tracker(nodename)
2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 715, in 
_get_resource_tracker
2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup 
nodename)
2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py", line 78, 
in __init__
2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup 
self.pci_filter = pci_whitelist.get_pci_devices_filter()
2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/pci/whitelist.py", line 109, in 
get_pci_devices_filter
2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup return 
PciHostDevicesWhiteList(CONF.pci_passthrough_whitelist)
2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/pci/whitelist.py", line 89, in __init__
2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup 
self.specs = self._parse_white_list_from_config(whitelist_spec)
2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/pci/whitelist.py", line 56, in 
_parse_white_list_from_config
2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup 
reason=_("Invalid entry: '%s'") % jsonspec)
2015-07-01 09:48:03.610 4791 TRACE nova.openstack.common.threadgroup 
PciConfigInvalidWhitelist: Invalid PCI devices Whitelist config Invalid entry: 
'{'devname':'enp5s0f1', 'physical_network':'physnet2'}'


When using double quotes there is no problem:
pci_passthrough_whitelist={"devname":"enp5s0f1", "physical_network":"physnet2"}
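
The traceback shows the entry being rejected by the whitelist parsing in
_parse_white_list_from_config, which treats each entry as a JSON document, and
JSON only accepts double-quoted strings. A small illustration of the
difference:

# Illustration: the single-quoted form is not valid JSON, the double-quoted
# form is, which matches the behaviour described above.
import json

single = "{'devname':'enp5s0f1', 'physical_network':'physnet2'}"
double = '{"devname":"enp5s0f1", "physical_network":"physnet2"}'

try:
    json.loads(single)
except ValueError as exc:
    print('single quotes rejected:', exc)

print('double quotes parsed:', json.loads(double))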

[Yahoo-eng-team] [Bug 1466451] [NEW] Nova should verify that devname in pci_passthrough_whitelist is not empty

2015-06-18 Thread Itzik Brown
Public bug reported:

According to https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking:
"The devname can be a valid PCI device name. The only device names that are 
supported are those displayed by the Linux utility ifconfig -a and correspond 
to either a PF or a VF on a vNIC"

However it's possible to supply an empty string as devname
e.g. pci_passthrough_whitelist = {"devname": "", "physical_network":"physnet2"}

It's also possible to have an entry:
pci_passthrough_whitelist = {"physical_network":"physnet2"} 
which shouldn't be valid.

Nova should verify that devname is not an empty string and that devname,
address or product_id/vendor_id are supplied.

Version
==
python-nova-2015.1.0-4.el7ost.noarch

Expected result
=
Nova compute should fail to start when specifying an empty string for devname
when using physical_network, or when not specifying devname, address or
product_id/vendor_id.
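
A hedged sketch of the kind of validation this asks for (not Nova's actual
implementation; the helper name is made up for illustration):

# Hedged sketch: reject an empty devname and require at least one device
# identifier (devname, address, or product_id/vendor_id) per entry.
import json

def validate_whitelist_entry(jsonspec):
    spec = json.loads(jsonspec)
    if 'devname' in spec and not spec['devname']:
        raise ValueError("devname must not be an empty string: %s" % jsonspec)
    if not ({'devname', 'address'} & set(spec) or
            {'product_id', 'vendor_id'} <= set(spec)):
        raise ValueError("entry needs devname, address or "
                         "product_id/vendor_id: %s" % jsonspec)
    return spec

validate_whitelist_entry('{"devname": "enp5s0f1", "physical_network": "physnet2"}')
# Both of the entries from this report would be rejected:
# validate_whitelist_entry('{"devname": "", "physical_network": "physnet2"}')
# validate_whitelist_entry('{"physical_network": "physnet2"}')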

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1466451

Title:
  Nova should verify that devname in pci_passthrough_whitelist is not
  empty

Status in OpenStack Compute (Nova):
  New

Bug description:
  According to 
https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking:
  "The devname can be a valid PCI device name. The only device names that are 
supported are those displayed by the Linux utility ifconfig -a and correspond 
to either a PF or a VF on a vNIC"

  However it's possible to supply an empty string as devname
  e.g. pci_passthrough_whitelist = {"devname": "", 
"physical_network":"physnet2"}

  It's also possible to have an entry:
  pci_passthrough_whitelist = {"physical_network":"physnet2"} 
  which shouldn't be valid.

  Nova should verify that devname is not an empty string and that
  devname,address or product_id/vendor_id are supplied.

  Version
  ==
  python-nova-2015.1.0-4.el7ost.noarch

  Expected result
  =
  Nova compute should fail to start when specifying an empty string for devname 
when using physical_network or when not specifying devname,address or 
product_id/vendor_id

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1466451/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460408] [NEW] fip namespace is not created when doing migration from legacy router to DVR

2015-05-31 Thread Itzik Brown
Public bug reported:

When creating a legacy router and migrating to a distributed router
'fip' namespaces are not created on the compute nodes.


Error from L3 Agent log:
===
2015-05-31 13:35:55.935 103776 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'fip-9a0e39c3-97a1-4a93-8ce7-fd7d804fae2b', 'ip', '-o', 
'link', 'show', 'fpr-2965187d-4'] create_process 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:84
2015-05-31 13:35:55.991 103776 DEBUG neutron.agent.linux.utils [-]
Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'fip-9a0e39c3-97a1-4a93-8ce7-fd7d804fae2b', 'ip', '-o', 
'link', 'show', 'fpr-2965187d-4']
Exit code: 1
Stdin:
Stdout:
Stderr: Cannot open network namespace 
"fip-9a0e39c3-97a1-4a93-8ce7-fd7d804fae2b": No such file or directory
 execute /usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:134
2015-05-31 13:35:55.992 103776 DEBUG neutron.agent.l3.router_info [-] No 
Interface for floating IPs router: 2965187d-452c-4951-88eb-4053cea88dae 
process_floating_ip_addresses 
/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py:229

How to reproduce
===
1. Create a legacy router 
# neutron router-create --distributed=False router1

2. Associate the router with an internal network 
# neutron router-interface-add router1 

3. Set the router's gateway
# neutron router-gateway-set router1 

4. Launch an instance
 # nova boot --flavor m1.small --image fedora --key-name cloudkey  --nic 
net-id= vm1

5. Associate the Instance with a floating IP

6. Check connectivity to an external network

7. Migrate the router to a distributed router
# neutron router-update --admin_state_up=False router1
# neutron router-update --distributed=True router1
# neutron router-update --admin_state_up=True router1

8. Verify that the 'snat' namespace is created on the 'dvr_snat' node but the
'fip' namespaces aren't created on the compute nodes.

Version
==
RHEL 7.1
python-neutron-2015.1.0-1.el7ost.noarch

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460408

Title:
  fip namespace is not created when doing migration from legacy router
  to DVR

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When creating a legacy router and migrating to a distributed router
  'fip' namespaces are not created on the compute nodes.

  
  Error from L3 Agent log:
  ===
  2015-05-31 13:35:55.935 103776 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'fip-9a0e39c3-97a1-4a93-8ce7-fd7d804fae2b', 'ip', '-o', 
'link', 'show', 'fpr-2965187d-4'] create_process 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:84
  2015-05-31 13:35:55.991 103776 DEBUG neutron.agent.linux.utils [-]
  Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'fip-9a0e39c3-97a1-4a93-8ce7-fd7d804fae2b', 'ip', '-o', 
'link', 'show', 'fpr-2965187d-4']
  Exit code: 1
  Stdin:
  Stdout:
  Stderr: Cannot open network namespace 
"fip-9a0e39c3-97a1-4a93-8ce7-fd7d804fae2b": No such file or directory
   execute /usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:134
  2015-05-31 13:35:55.992 103776 DEBUG neutron.agent.l3.router_info [-] No 
Interface for floating IPs router: 2965187d-452c-4951-88eb-4053cea88dae 
process_floating_ip_addresses 
/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py:229

  How to reproduce
  ===
  1. Create a legacy router 
  # neutron router-create --distributed=False router1

  2. Associate the router with an internal network 
  # neutron router-interface-add router1 

  3. Set the router's gateway
  # neutron router-gateway-set router1 

  4. Launch an instance
   # nova boot --flavor m1.small --image fedora --key-name cloudkey  --nic 
net-id= vm1

  5. Associate the Instance with a floating IP

  6. Check connectivity to an external network

  7. Migrate the router to a distributed router
  # neutron router-update --admin_state_up=False router1
  # neutron router-update --distributed=True router1
  # neutron router-update --admin_state_up=True router1

  8. Verify that the 'snat' namespace is created on the 'dvr_snat' node but
  the 'fip' namespaces aren't created on the compute nodes.

  Version
  ==
  RHEL 7.1
  python-neutron-2015.1.0-1.el7ost.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1460408/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1456624] [NEW] DVR Connection to external network lost when associating a floating IP

2015-05-19 Thread Itzik Brown
Public bug reported:

Having a distributed router with interfaces for an internal network and an
external network.

When launching an instance, pinging an external network and then associating
a floating IP to the instance, the connection is lost, i.e. the ping fails.
When running the ping command again, it's successful.

Version
==
RHEL 7.1
python-nova-2015.1.0-3.el7ost.noarch
python-neutron-2015.1.0-1.el7ost.noarch

How to reproduce
==
1. Create a distributed router and attach an internal and an external network 
to it.
# neutron router-create --distributed True router1
# neutron router-interface-add router1 
# neutron router-gateway-set 

2. Launch an instance and associate it with a floating IP.
# nova boot --flavor m1.small --image fedora --nic net-id= vm1

3. Go to the console of the instance and run ping to an external network:
 # ping 8.8.8.8

4.  Associate a floating IP to the instance:
 # nova floating-ip-associate vm1 

5. Verify that the ping fails.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1456624

Title:
  DVR Connection to external network lost when associating a floating IP

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Having a distributed router with interfaces for an internal network
  and an external network.

  When launching an instance, pinging an external network and then
  associating a floating IP to the instance, the connection is lost, i.e.
  the ping fails.
  When running the ping command again, it's successful.

  Version
  ==
  RHEL 7.1
  python-nova-2015.1.0-3.el7ost.noarch
  python-neutron-2015.1.0-1.el7ost.noarch

  How to reproduce
  ==
  1. Create a distributed router and attach an internal and an external network 
to it.
  # neutron router-create --distributed True router1
  # neutron router-interface-add router1 
  # neutron router-gateway-set 

  2. Launch an instance and associate it with a floating IP.
  # nova boot --flavor m1.small --image fedora --nic net-id= vm1

  3. Go to the console of the instance and run ping to an external network:
   # ping 8.8.8.8

  4.  Associate a floating IP to the instance:
   # nova floating-ip-associate vm1 

  5. Verify that the ping fails.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1456624/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1456600] [NEW] Admin should be able to set the Router type when creating a router for a tenant

2015-05-19 Thread Itzik Brown
Public bug reported:

Using the CLI it's possible to create a router for another tenant and specify
whether the router should be distributed or not:
# neutron router-create --tenant-id  --distributed True router1

The dashboard should provide the same functionality.

Version
==
openstack-dashboard-2015.1.0-3.el7ost.noarch

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1456600

Title:
  Admin should be able to set the Router type when creating a router for
  a tenant

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Using the CLI it's possible to create a router for another tenant and specify
  whether the router should be distributed or not:
  # neutron router-create --tenant-id  --distributed True router1

  The dashboard should provide the same functionality.

  Version
  ==
  openstack-dashboard-2015.1.0-3.el7ost.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1456600/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1456151] [NEW] netns_cleanup utility doesn't remove 'qlbaas-' namespaces

2015-05-18 Thread Itzik Brown
Public bug reported:

The netns_cleanup utility removes 'qrouter-', 'qdhcp-', 'snat-' and 'fip-'
namespaces.
It should be able to remove 'qlbaas-' namespaces as well.
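
A hedged illustration of the idea (not the actual netns_cleanup code): the
cleanup keys on a fixed set of namespace prefixes, so covering LBaaS means
adding the 'qlbaas-' prefix to that set.

# Illustration only: a prefix set that includes the LBaaS namespaces.
NS_PREFIXES = ('qrouter-', 'qdhcp-', 'snat-', 'fip-', 'qlbaas-')

def is_managed_namespace(name):
    # True if the namespace name matches one of the Neutron-managed prefixes.
    return name.startswith(NS_PREFIXES)

print(is_managed_namespace('qlbaas-8c1fa12d-0000-0000-0000-000000000000'))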

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1456151

Title:
  netns_cleanup utility doesn't remove 'qlbaas-'  namespaces

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The netns_cleanup utility removes 'qrouter-', 'qdhcp-', 'snat-' and 'fip-'
  namespaces.
  It should be able to remove 'qlbaas-' namespaces as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1456151/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1456073] [NEW] Connection to an instance with floating IP breaks during block migration when using DVR

2015-05-18 Thread Itzik Brown
Public bug reported:

During block migration of an instance with a floating IP, when the router is
a DVR, the connection to the instance breaks (e.g. an open SSH connection to
the instance is dropped).
Reconnecting to the instance is successful.

Version
==
RHEL 7.1
python-nova-2015.1.0-3.el7ost.noarch
python-neutron-2015.1.0-1.el7ost.noarch

How to reproduce
==
1. Create a distributed router and attach an internal and an external network 
to it.
# neutron router-create --distributed True router1
# neutron router-interface-add router1 
# neutron router-gateway-set 

2. Launch an instance and associate it with a floating IP.
# nova boot --flavor m1.small --image fedora --nic net-id= vm1

3. SSH into the instance which will be migrated and run a command "while
true; do echo "Hello"; sleep 1; done"

4. Migrate the instance using block migration 
 # nova live-migration --block-migrate 

5. Verify that the connection to the instance is lost.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1456073

Title:
  Connection to an instance with floating IP breaks during block
  migration when using DVR

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  During block migration of an instance with a floating IP, when the router
  is a DVR, the connection to the instance breaks (e.g. an open SSH
  connection to the instance is dropped).
  Reconnecting to the instance is successful.

  Version
  ==
  RHEL 7.1
  python-nova-2015.1.0-3.el7ost.noarch
  python-neutron-2015.1.0-1.el7ost.noarch

  How to reproduce
  ==
  1. Create a distributed router and attach an internal and an external network 
to it.
  # neutron router-create --distributed True router1
  # neutron router-interface-add router1 
  # neutron router-gateway-set 

  2. Launch an instance and associate it with a floating IP.
  # nova boot --flavor m1.small --image fedora --nic net-id= vm1

  3. SSH into the instance which will be migrated and run a command
  "while true; do echo "Hello"; sleep 1; done"

  4. Migrate the instance using block migration 
   # nova live-migration --block-migrate 

  5. Verify that the connection to the instance is lost.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1456073/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1455005] [NEW] DVR Cannot open network 'fip' namespace error when updating router's gateway

2015-05-14 Thread Itzik Brown
Public bug reported:

When updating a distributed router's gateway while one is already set,
there is an error in the L3 agent's log on the node where 'dvr_snat' is
running.

Cannot open network namespace "fip-dc7937bc-2627-422b-
8c71-6779aa675a81": No such file or directory

2015-05-14 11:39:15.862 21386 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'fip-dc7937bc-2627-422b-8c71-6779aa675a81', 'ip', '-o', 
'link', 'show', 'fpr-43e4f718-e'] create_process 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:84
2015-05-14 11:39:15.942 21386 DEBUG neutron.agent.linux.utils [-] 
Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'fip-dc7937bc-2627-422b-8c71-6779aa675a81', 'ip', '-o', 
'link', 'show', 'fpr-43e4f718-e']
Exit code: 1
Stdin: 
Stdout: 
Stderr: Cannot open network namespace 
"fip-dc7937bc-2627-422b-8c71-6779aa675a81": No such file or directory
 execute /usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:134
2015-05-14 11:39:15.943 21386 DEBUG neutron.agent.l3.router_info [-] No 
Interface for floating IPs router: 43e4f718-e0fc-435a-8144-445aa54eeecc 
process_floating_ip_addresses 
/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py:229

It seems that this is not limited to this action; it also occurs when
associating a floating IP.

Version
==
python-neutron-2015.1.0-1.el7ost.noarch

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1455005

Title:
  DVR Cannot open network 'fip' namespace error when updating router's
  gateway

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When updating a distributed router's gateway while one is already set,
  there is an error in the L3 agent's log on the node where 'dvr_snat' is
  running.

  Cannot open network namespace "fip-dc7937bc-2627-422b-
  8c71-6779aa675a81": No such file or directory

  2015-05-14 11:39:15.862 21386 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'fip-dc7937bc-2627-422b-8c71-6779aa675a81', 'ip', '-o', 
'link', 'show', 'fpr-43e4f718-e'] create_process 
/usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:84
  2015-05-14 11:39:15.942 21386 DEBUG neutron.agent.linux.utils [-] 
  Command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'fip-dc7937bc-2627-422b-8c71-6779aa675a81', 'ip', '-o', 
'link', 'show', 'fpr-43e4f718-e']
  Exit code: 1
  Stdin: 
  Stdout: 
  Stderr: Cannot open network namespace 
"fip-dc7937bc-2627-422b-8c71-6779aa675a81": No such file or directory
   execute /usr/lib/python2.7/site-packages/neutron/agent/linux/utils.py:134
  2015-05-14 11:39:15.943 21386 DEBUG neutron.agent.l3.router_info [-] No 
Interface for floating IPs router: 43e4f718-e0fc-435a-8144-445aa54eeecc 
process_floating_ip_addresses 
/usr/lib/python2.7/site-packages/neutron/agent/l3/router_info.py:229

  It seems that this is not limited to this action; it also occurs when
  associating a floating IP.

  Version
  ==
  python-neutron-2015.1.0-1.el7ost.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1455005/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1440666] Re: When creating a stack with the 'roll back' option fails the stack should not be removed from the stacks list

2015-04-06 Thread Itzik Brown
** Project changed: heat => horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1440666

Title:
  When creating a stack with the 'roll back' option fails  the stack
  should not be removed from the stacks list

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When creating a stack with the 'roll back' option, if the stack creation
  fails the stack disappears from the list when running heat stack-list.
  It may be hard to realize that a failure occurred and to analyse the cause
  of the failure, so adding a status such as 'failed and rolled back' may be
  appropriate.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1440666/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1432905] [NEW] Launching an instance fails when using a port with vnic_type=direct

2015-03-16 Thread Itzik Brown
Public bug reported:

After launching an instance with a port with vnic_type=direct, the
instance fails to start.

In the nova compute log I see:
2015-03-16 17:51:34.432 3313 TRACE nova.compute.manager ValueError: Field 
`extra_info["numa_node"]' cannot be None 

Version
=
openstack-nova-compute-2014.2.2-18.el7ost.noarch
python-nova-2014.2.2-18.el7ost.noarch
openstack-nova-common-2014.2.2-18.el7ost.noarch

How to Reproduce
===
# neutron port-create tenant1-net1 --binding:vnic-type direct
# nova boot --flavor m1.small --image rhel7 --nic port-id= vm1

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: pci-passthrough

** Attachment added: "Nova compute log"
   
https://bugs.launchpad.net/bugs/1432905/+attachment/4347490/+files/instance_fails_sriov.txt

** Tags added: pci-passthrough

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1432905

Title:
  Launching an instance fails when using a port with vnic_type=direct

Status in OpenStack Compute (Nova):
  New

Bug description:
  After launching an instance with a port with vnic_type=direct, the
  instance fails to start.

  In the nova compute log I see:
  2015-03-16 17:51:34.432 3313 TRACE nova.compute.manager ValueError: Field 
`extra_info["numa_node"]' cannot be None 

  Version
  =
  openstack-nova-compute-2014.2.2-18.el7ost.noarch
  python-nova-2014.2.2-18.el7ost.noarch
  openstack-nova-common-2014.2.2-18.el7ost.noarch

  How to Reproduce
  ===
  # neutron port-create tenant1-net1 --binding:vnic-type direct
  # nova boot --flavor m1.small --image rhel7 --nic port-id= vm1

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1432905/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1430253] [NEW] netns_cleanup utility doesn't remove 'fip-' namespaces

2015-03-10 Thread Itzik Brown
Public bug reported:

netns_cleanup utility removes 'qrouter-' and 'qdhcp-' namespaces.
It should be able to remove 'fip-' namespaces.

** Affects: neutron
 Importance: Undecided
 Assignee: Itzik Brown (itzikb1)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Itzik Brown (itzikb1)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1430253

Title:
  netns_cleanup utility doesn't remove 'fip-' namespaces

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  netns_cleanup utility removes 'qrouter-' and 'qdhcp-' namespaces.
  It should be able to remove 'fip-' namespaces.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1430253/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1429740] [NEW] Using DVR - Instance with a floating IP can't reach other instances connected to a different network

2015-03-09 Thread Itzik Brown
Public bug reported:


A distributed router with interfaces connected to two private networks and to 
an external network.
Instances without a floating IP connected to network A can reach other
instances connected to network B, but instances with a floating IP connected
to network A can't reach other instances connected to network B.

Version
===
openstack-neutron-2014.2.2-1.el7ost.noarch
python-neutron-2014.2.2-1.el7ost.noarch
openstack-neutron-openvswitch-2014.2.2-1.el7ost.noarch
openstack-neutron-ml2-2014.2.2-1.el7ost.noarch

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1429740

Title:
  Using DVR - Instance with a floating IP can't reach other instances
  connected to a different network

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  
  A distributed router with interfaces connected to two private networks and to 
an external network.
  Instances without a floating IP connected to network A can reach other
  instances connected to network B, but instances with a floating IP
  connected to network A can't reach other instances connected to network B.

  Version
  ===
  openstack-neutron-2014.2.2-1.el7ost.noarch
  python-neutron-2014.2.2-1.el7ost.noarch
  openstack-neutron-openvswitch-2014.2.2-1.el7ost.noarch
  openstack-neutron-ml2-2014.2.2-1.el7ost.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1429740/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1428062] [NEW] ML2 plugin: Plugin does not support updating provider attributes

2015-03-04 Thread Itzik Brown
Public bug reported:

When trying to update the segmentation_id:
# neutron net-update --provider:segmentation_id=190 public1

The update fails with the following Neutron error:
update failed (client error): Invalid input for operation: Plugin does not
support updating provider attributes

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1428062

Title:
  ML2 plugin: Plugin does not support updating provider attributes

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When trying to update the segmentation_id:
  # neutron net-update --provider:segmentation_id=190 public1

  The update fails with the following Neutron error:
  update failed (client error): Invalid input for operation: Plugin does not
support updating provider attributes

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1428062/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1427731] [NEW] Port is not deleted after instance removal (DVR)

2015-03-03 Thread Itzik Brown
Public bug reported:

Having a DVR with one port connected to a private network and one connected
to an external network, after removing an instance connected to the private
network the port of the instance is not deleted.

Version
==
openstack-neutron-2014.2.2-1.el7ost.noarch
python-neutron-2014.2.2-1.el7ost.noarch
openstack-neutron-ml2-2014.2.2-1.el7ost.noarch

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1427731

Title:
  Port is not deleted after instance removal (DVR)

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Having a DVR with 1 port connected to a private network and 1 connected to an 
external network.
  After removing an instance connected to the private network the port of the 
instance is not deleted.

  Version
  ==
  openstack-neutron-2014.2.2-1.el7ost.noarch
  python-neutron-2014.2.2-1.el7ost.noarch
  openstack-neutron-ml2-2014.2.2-1.el7ost.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1427731/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1425504] [NEW] DVR interface with device_owner network:router_centralized_snat can be deleted using port-delete

2015-02-25 Thread Itzik Brown
Public bug reported:

It's possible to delete the SNAT port belonging to a DVR by using port-
delete.

How to reproduce
===
After creating a distributed router, list all the ports with device_owner
network:router_centralized_snat:
# neutron port-list --device_owner network:router_centralized_snat

Delete the port:
# neutron port-delete 


Trying to delete a port with device_owner: network:router_interface_distributed
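
A minimal sketch of the kind of server-side guard this report implies, i.e.
rejecting port-delete for router-owned DVR ports; the constant list and the
helper name are illustrative, not Neutron's actual implementation:

# Hypothetical pre-delete check for router-owned DVR ports.
ROUTER_OWNED_DEVICE_OWNERS = (
    'network:router_centralized_snat',
    'network:router_interface_distributed',
)

def ensure_port_deletable(port):
    """Raise if the port is owned by a (distributed) router."""
    if port.get('device_owner') in ROUTER_OWNED_DEVICE_OWNERS:
        raise ValueError(
            'Port %s is owned by a router and cannot be deleted directly; '
            'remove the router interface instead.' % port['id'])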

Version
==
python-neutron-2014.2.2-1.el7ost.noarch
openstack-neutron-2014.2.2-1.el7ost.noarch
openstack-neutron-openvswitch-2014.2.2-1.el7ost.noarch
python-neutronclient-2.3.9-1.el7ost.noarch

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1425504

Title:
  DVR interface with device_owner network:router_centralized_snat can
  be deleted using port-delete

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  It's possible to delete the SNAT port belonging to a DVR by using port-
  delete.

  How to reproduce
  ===
  After creating a distributed router, list all the ports with device_owner
  network:router_centralized_snat:
  # neutron port-list --device_owner network:router_centralized_snat

  Delete the port:
  # neutron port-delete 

  
  Trying to delete a port with device_owner: 
network:router_interface_distributed

  Version
  ==
  python-neutron-2014.2.2-1.el7ost.noarch
  openstack-neutron-2014.2.2-1.el7ost.noarch
  openstack-neutron-openvswitch-2014.2.2-1.el7ost.noarch
  python-neutronclient-2.3.9-1.el7ost.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1425504/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1419452] [NEW] nova-compute fails to start when there is an instance with port with binding:vif_type=binding_failed

2015-02-08 Thread Itzik Brown
Public bug reported:

Restarting nova-compute fails when there is an instance with a port
where binding:vif_type=binding_failed.

How to reproduce
===
1. Launch an instance.
2. Attach another interface to the instance using a second network
# nova interface-attach --net-id  

3. Restart nova-compute

Version
==
python-nova-2014.2.1-14.el7ost.noarch
python-novaclient-2.20.0-1.el7ost.noarch
openstack-nova-common-2014.2.1-14.el7ost.noarch
openstack-nova-compute-2014.2.1-14.el7ost.noarch

nova-compute Log
===
2015-02-08 16:00:27.695 5794 DEBUG nova.virt.libvirt.vif [-] 
vif_type=binding_failed 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone=None,cell_name=None,cleaned=False,config_drive='',created_at=2015-02-08T13:56:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,disable_terminate=False,display_description='vm1',display_name='vm1',ephemeral_gb=0,ephemeral_key_uuid=None,fault=,host='puma48.scl.lab.tlv.redhat.com',hostname='vm1',id=190,image_ref='c6e549fd-4ba2-4723-a7c9-c1fa13cd24f8',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,launch_index=0,launched_at=2015-02-08T13:56:35Z,launched_on='puma48.scl.lab.tlv.redhat.com',locked=False,locked_by=None,memory_mb=2048,metadata=,node='puma48.scl.lab.tlv.redhat.com',numa_topology=,os_type=None,pci_devices=,power_state=1,progress=0,project_id='a10d7b579ee546ada9a9e5b70cfb9a25',ramdisk_id='',r
 
eservation_id='r-bhwfpof7',root_device_name='/dev/vda',root_gb=20,scheduled_at=None,security_groups=,shutdown_terminate=False,system_metadata=,task_state=None,terminated_at=None,updated_at=2015-02-08T13:56:35Z,user_data=None,user_id='4a5d45f0d5cf402b99a6819e28a12466',uuid=a7106220-1d42-4c47-9c91-e2bc8ce0a2d3,vcpus=1,vm_mode=None,vm_state='active')
 vif=VIF({'profile': {}, 'ovs_interfaceid': None, 'network': Network({'bridge': 
None, 'subnets': [Subnet({'ips': [FixedIP({'meta': {}, 'version': 4, 'type': 
u'fixed', 'floating_ips': [], 'address': u'192.168.250.10'})], 'version': 4, 
'meta': {u'dhcp_server': u'192.168.250.3'}, 'dns': [], 'routes': [], 'cidr': 
u'192.168.250.0/24', 'gateway': IP({'meta': {}, 'version': 4, 'type': 
u'gateway', 'address': u'192.168.250.1'})})], 'meta': {u'injected': False, 
u'tenant_id': u'a10d7b579ee546ada9a9e5b70cfb9a25'}, 'id': 
u'70f730d8-aced-48e8-a08e-0395fa446e28', 'label': u'net3'}), 'devname': 
u'tap1e546c5b-f5', 'vnic_type': u'normal', 'qbh_params':
  None, 'meta': {}, 'details': {}, 'address': u'fa:16:3e:d2:bd:fb', 'active': 
False, 'type': u'binding_failed', 'id': 
u'1e546c5b-f5b5-4221-bfb2-7be46a323ea9', 'qbg_params': None}) plug 
/usr/lib/python2.7/site-packages/nova/virt/libvirt/vif.py:531
2015-02-08 16:00:27.698 5794 ERROR nova.openstack.common.threadgroup [-] 
Unexpected vif_type=binding_failed
2015-02-08 16:00:27.698 5794 TRACE nova.openstack.common.threadgroup Traceback 
(most recent call last):
2015-02-08 16:00:27.698 5794 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py", line 
125, in wait
2015-02-08 16:00:27.698 5794 TRACE nova.openstack.common.threadgroup 
x.wait()
2015-02-08 16:00:27.698 5794 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py", line 
47, in wait
2015-02-08 16:00:27.698 5794 TRACE nova.openstack.common.threadgroup return 
self.thread.wait()
2015-02-08 16:00:27.698 5794 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 173, in wait
2015-02-08 16:00:27.698 5794 TRACE nova.openstack.common.threadgroup return 
self._exit_event.wait()
2015-02-08 16:00:27.698 5794 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/eventlet/event.py", line 121, in wait
2015-02-08 16:00:27.698 5794 TRACE nova.openstack.common.threadgroup return 
hubs.get_hub().switch()
2015-02-08 16:00:27.698 5794 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 293, in switch
2015-02-08 16:00:27.698 5794 TRACE nova.openstack.common.threadgroup return 
self.greenlet.switch()
2015-02-08 16:00:27.698 5794 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 212, in main
2015-02-08 16:00:27.698 5794 TRACE nova.openstack.common.threadgroup result 
= function(*args, **kwargs)
2015-02-08 16:00:27.698 5794 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/openstack/common/service.py", line 492, 
in run_service
2015-02-08 16:00:27.698 5794 TRACE nova.openstack.common.threadgroup 
service.start()
2015-02-08 16:00:27.698 5794 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.7/site-packages/nova/service.py", line 164, in start
2015-

[Yahoo-eng-team] [Bug 1417633] [NEW] OVS Agent should fail when enable_distributed_routing = True and l2_population = False

2015-02-03 Thread Itzik Brown
Public bug reported:

l2_population is required for DVR both as a mechanism driver and as an option
in the OVS agent's configuration.
The agent should fail to start when enable_distributed_routing = True and
l2_population = False; otherwise the router won't behave as expected.
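
A minimal sketch of the requested start-up check, assuming the two options are
already parsed into booleans; this is not the actual OVS agent code:

import sys

def validate_dvr_config(enable_distributed_routing, l2_population):
    """Refuse to start the agent with an unusable DVR configuration."""
    if enable_distributed_routing and not l2_population:
        # DVR without l2_population silently misroutes traffic, so abort early.
        sys.exit('enable_distributed_routing = True requires '
                 'l2_population = True; refusing to start the agent.')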

Version
==
RHEL 7.0
openstack-neutron-2014.2.1-6.el7ost.noarch
openstack-neutron-ml2-2014.2.1-6.el7ost.noarch
python-neutron-2014.2.1-6.el7ost.noarch
openstack-neutron-openvswitch-2014.2.1-6.el7ost.noarch

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1417633

Title:
  OVS Agent should fail when enable_distributed_routing = True and
  l2_population = False

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  l2_population is required for DVR both as a mechanism driver and as an option
  in the OVS agent's configuration.
  The agent should fail to start when enable_distributed_routing = True and
  l2_population = False; otherwise the router won't behave as expected.

  Version
  ==
  RHEL 7.0
  openstack-neutron-2014.2.1-6.el7ost.noarch
  openstack-neutron-ml2-2014.2.1-6.el7ost.noarch
  python-neutron-2014.2.1-6.el7ost.noarch
  openstack-neutron-openvswitch-2014.2.1-6.el7ost.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1417633/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1391816] Re: [pci-passthrough] PCI Clear message should be reported when there are no VFs left for allocation

2015-02-01 Thread Itzik Brown
You are right.
It's a misconfiguration.

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1391816

Title:
  [pci-passthrough] PCI Clear message should be reported when there are
  no VFs left for allocation

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When launching an instance with a preconfigured port and there are no
  VFs available for allocation, the error message is not clear.

  #neutron port-create tenant1-net1 --binding:vnic-type direct
  #nova boot --flavor m1.tiny --image  cirros --nic port-id=  vm100

  # nova show vm100

  (output truncated)



|
  | fault| {"message": "PCI device request 
({'requests': 
[InstancePCIRequest(alias_name=None,count=1,is_new=False,request_id=66b02b9b-600b-4c46-9f66-38ceb6cc2742,spec=[{physical_network='physnet2'}])],
 'code': 500}equests)s failed", "code": 500, "created": "2014-11-12T10:10:14Z"} 
|

  
  Expected:
  Clear message in the fault entry when issuing 'nova show' or when launching 
the Instance(Better)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1391816/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1408603] [NEW] OVS Agent creates a tunnel when local_ip is wrong

2015-01-08 Thread Itzik Brown
Public bug reported:


When specifying a wrong local_ip (tunnel type 'vxlan') that doesn't belong
to the host, a tunnel is still created where local_ip is the wrong address and
the remote_ip is the right one.
There should be a sanity check that the IP address in local_ip belongs
to the host.
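
One possible form of such a sanity check, using a plain socket bind test; the
helper name is illustrative and this is not Neutron's actual implementation:

import socket

def local_ip_is_configured_on_host(local_ip):
    """Return True if local_ip can be bound locally, i.e. it belongs to this host."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.bind((local_ip, 0))  # bind() fails if the address is not local
        return True
    except socket.error:
        return False
    finally:
        sock.close()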

Version

RHEL7.0
openstack-neutron-2014.2.1-5.el7ost

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1408603

Title:
  OVS Agent creates a tunnel when local_ip is wrong

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  
  When specifying a wrong local_ip (tunnel type 'vxlan') that doesn't belong
  to the host, a tunnel is still created where local_ip is the wrong address and
  the remote_ip is the right one.
  There should be a sanity check that the IP address in local_ip belongs
  to the host.

  Version
  
  RHEL7.0
  openstack-neutron-2014.2.1-5.el7ost

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1408603/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1407959] [NEW] 'Unable to retrieve the agent ip' when using l2population mechanism driver

2015-01-06 Thread Itzik Brown
Public bug reported:

If there is more than one agent registered per host, even when just one of
them is alive, l2_population doesn't work, i.e. the tunnel isn't brought up.

There is an error in Neutron's log:
Unable to retrieve the agent ip, check the agent configuration.

After deleting the dead agent using neutron agent-delete  and
restarting neutron-openvswitch-agent, the tunnel is brought up.

Version
===
RHEL 7.0
openstack-neutron-2014.2.1-5.el7ost

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1407959

Title:
  'Unable to retrieve the agent ip' when using l2population mechanism
  driver

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  If there is more than one agent registered per host, even when just one of
  them is alive, l2_population doesn't work, i.e. the tunnel isn't brought up.

  There is an error in Neutron's log:
  Unable to retrieve the agent ip, check the agent configuration.

  After deleting the dead agent using neutron agent-delete  and
  restarting neutron-openvswitch-agent, the tunnel is brought up.

  Version
  ===
  RHEL 7.0
  openstack-neutron-2014.2.1-5.el7ost

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1407959/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1406486] [NEW] Suspending an instance fails when using vnic_type=direct

2014-12-30 Thread Itzik Brown
Public bug reported:

When launching an instance with a pre-created port with
binding:vnic_type='direct', suspending the instance
fails with the error 'NoneType' object has no attribute 'encode'.

Nova compute log:
http://paste.openstack.org/show/155141/

Version
==
openstack-nova-common-2014.2.1-3.el7ost.noarch
openstack-nova-compute-2014.2.1-3.el7ost.noarch
python-novaclient-2.20.0-1.el7ost.noarch
python-nova-2014.2.1-3.el7ost.noarch

How to Reproduce
===
# neutron port-create tenant1-net1 --binding:vnic-type direct
# nova boot --flavor m1.small --image rhel7 --nic port-id= vm1
# nova suspend 
# nova show 

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1406486

Title:
  Suspending an instance fails when using vnic_type=direct

Status in OpenStack Compute (Nova):
  New

Bug description:
  When launching an instance with a pre-created port with
  binding:vnic_type='direct', suspending the instance
  fails with the error 'NoneType' object has no attribute 'encode'.

  Nova compute log:
  http://paste.openstack.org/show/155141/

  Version
  ==
  openstack-nova-common-2014.2.1-3.el7ost.noarch
  openstack-nova-compute-2014.2.1-3.el7ost.noarch
  python-novaclient-2.20.0-1.el7ost.noarch
  python-nova-2014.2.1-3.el7ost.noarch

  How to Reproduce
  ===
  # neutron port-create tenant1-net1 --binding:vnic-type direct
  # nova boot --flavor m1.small --image rhel7 --nic port-id= vm1
  # nova suspend 
  # nova show 

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1406486/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1403003] [NEW] Updating an image by uploading a new file doesn't update the image

2014-12-16 Thread Itzik Brown
Public bug reported:

When modifying an image file and then updating the image by using glance
image-update --file, the image itself is not updated.

How to reproduce

Download an image and upload it:

# wget 
http://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/x86_64/Fedora-Cloud-Base-20141203-21.x86_64.qcow2
# glance image-create --name fedora21b --disk-format qcow2  --container-format 
bare --is-public True --file /tmp/Fedora-Cloud-Base-20141203-21.x86_64.qcow2

Create some dummy file in /tmp/dummy and modify the image
#  virt-copy-in -a Fedora-Cloud-Base-20141203-21.x86_64.qcow2 /tmp/dummy /etc

Update the image:
#glance image-update --file Fedora-Cloud-Base-20141203-21.x86_64.qcow2  fedora21

Verify the image is not updated by comparing the checksum
# md5sum /var/lib/glance/images/bd84ac96-c2a8-4268-a19c-a0e69c703baf
# md5sum Fedora-Cloud-Base-20141203-21.x86_64.qcow2

When using --checksum, the checksum in the image properties is updated but
the image itself is not:
#glance image-update --file Fedora-Cloud-Base-20141203-21.x86_64.qcow2 
--checksum 2c98b17b3f27d14e2e7a840fef464cfe fedora21

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1403003

Title:
  Updating an image by uploading a new file doesn't update the image

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  When modifying an image file and then updating the image by using glance
  image-update --file, the image itself is not updated.

  How to reproduce
  
  Download an image and upload it:

  # wget 
http://download.fedoraproject.org/pub/fedora/linux/releases/21/Cloud/Images/x86_64/Fedora-Cloud-Base-20141203-21.x86_64.qcow2
  # glance image-create --name fedora21b --disk-format qcow2  
--container-format bare --is-public True --file 
/tmp/Fedora-Cloud-Base-20141203-21.x86_64.qcow2

  Create some dummy file in /tmp/dummy and modify the image
  #  virt-copy-in -a Fedora-Cloud-Base-20141203-21.x86_64.qcow2 /tmp/dummy /etc

  Update the image:
  #glance image-update --file Fedora-Cloud-Base-20141203-21.x86_64.qcow2  
fedora21

  Verify the image is not updated by comparing the checksum
  # md5sum /var/lib/glance/images/bd84ac96-c2a8-4268-a19c-a0e69c703baf
  # md5sum Fedora-Cloud-Base-20141203-21.x86_64.qcow2

  When using --checksum, the checksum in the image properties is updated but
  the image itself is not:
  #glance image-update --file Fedora-Cloud-Base-20141203-21.x86_64.qcow2 
--checksum 2c98b17b3f27d14e2e7a840fef464cfe fedora21

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1403003/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1402959] [NEW] Support Launching an instance with a port with vnic_type=direct

2014-12-15 Thread Itzik Brown
Public bug reported:

To support launching instances with SR-IOV interfaces using the
dashboard there is a need for:

1) Adding the ability to specify vnic_type in the 'port create' operation
2) Adding the option to create a port as a tenant (right now only an admin can do this)
3) Adding the ability to launch an instance with a pre-configured port

Duplicate bugs:
https://bugs.launchpad.net/horizon/+bug/1399252
https://bugs.launchpad.net/horizon/+bug/1399254

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1402959

Title:
  Support Launching an instance with a port with vnic_type=direct

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  To support launching instances with SR-IOV interfaces using the
  dashboard there is a need for:

  1) Adding the ability to specify vnic_type in the 'port create' operation
  2) Adding the option to create a port as a tenant (right now only an admin
  can do this)
  3) Adding the ability to launch an instance with a pre-configured port

  Duplicate bugs:
  https://bugs.launchpad.net/horizon/+bug/1399252
  https://bugs.launchpad.net/horizon/+bug/1399254

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1402959/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1400784] [NEW] Cold migration fails when using vnic_type=direct

2014-12-09 Thread Itzik Brown
Public bug reported:

When launching an instance with a port with vnic_type=direct and then using
nova migrate, I get an error:
"Device :05:11.5 not found: could not access 
/sys/bus/pci/devices/:05:11.5/config: No such file or directory"

How to Reproduce
===
#neutron port-create tenant1-net1 --binding:vnic-type direct
#nova boot --flavor m1.small --image rhel7 --nic port-id= vm1
#nova migrate 

After a while, run nova show  and the error should be in the
'fault' entry.

Version
===
RHEL7.0
openstack-nova-common-2014.2-2.el7ost.noarch
openstack-nova-compute-2014.2-2.el7ost.noarch
python-nova-2014.2-2.el7ost.noarch

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1400784

Title:
  Cold migration fails when using vnic_type=direct

Status in OpenStack Compute (Nova):
  New

Bug description:
  When launching an instance with a port with vnic_type=direct and then using
  nova migrate, I get an error:
  "Device :05:11.5 not found: could not access 
/sys/bus/pci/devices/:05:11.5/config: No such file or directory"

  How to Reproduce
  ===
  #neutron port-create tenant1-net1 --binding:vnic-type direct
  #nova boot --flavor m1.small --image rhel7 --nic port-id= vm1
  #nova migrate 

  After a while, run nova show  and the error should be in the
  'fault' entry.

  Version
  ===
  RHEL7.0
  openstack-nova-common-2014.2-2.el7ost.noarch
  openstack-nova-compute-2014.2-2.el7ost.noarch
  python-nova-2014.2-2.el7ost.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1400784/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1400037] [NEW] Launching an instance with multiple interfaces attached to same network by using --net-id fails

2014-12-07 Thread Itzik Brown
Public bug reported:

Fails to launch an instance with multiple interfaces attached to the
same network by using --nic net-id= :

# nova boot --flavor m1.small --image rhel7-new \
    --nic net-id=52fc18b6-397a-45d6-b8db-fb32accd00e5 \
    --nic net-id=52fc18b6-397a-45d6-b8db-fb32accd00e5 vm100

ERROR (BadRequest): Duplicate networks (52fc18b6-397a-45d6-b8db-
fb32accd00e5) are not allowed (HTTP 400) (Request-ID: req-
7b9bfaa5-a304-4b47-9acb-ee518ba220f8)

There is no problem when launching the instance using --port-id :
# nova boot --flavor m1.small --image rhel7-new --nic 
port-id=4479d8c5-3c1d-41ab-8994-e577280a9584 --nic 
port-id=ec218051-3819-4af5-8113-96c5d31442cc vm100

# neutron  port-show 4479d8c5-3c1d-41ab-8994-e577280a9584 -F network_id
+------------+--------------------------------------+
| Field      | Value                                |
+------------+--------------------------------------+
| network_id | 52fc18b6-397a-45d6-b8db-fb32accd00e5 |
+------------+--------------------------------------+
# neutron  port-show ec218051-3819-4af5-8113-96c5d31442cc -F network_id
+------------+--------------------------------------+
| Field      | Value                                |
+------------+--------------------------------------+
| network_id | 52fc18b6-397a-45d6-b8db-fb32accd00e5 |
+------------+--------------------------------------+


Version
===
openstack-nova-common-2014.2-2.el7ost.noarch
openstack-neutron-2014.2-11.el7ost.noarch

Expected result
=
Using either --nic port-id or --nic net-id should work.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1400037

Title:
  Launching an instance with multiple interfaces attached to same network
  by using --net-id fails

Status in OpenStack Compute (Nova):
  New

Bug description:
  Fails to launch an instance with multiple interfaces attached to the
  same network by using --nic net-id= :

  # nova boot --flavor m1.small --image rhel7-new \
      --nic net-id=52fc18b6-397a-45d6-b8db-fb32accd00e5 \
      --nic net-id=52fc18b6-397a-45d6-b8db-fb32accd00e5 vm100

  ERROR (BadRequest): Duplicate networks (52fc18b6-397a-45d6-b8db-
  fb32accd00e5) are not allowed (HTTP 400) (Request-ID: req-
  7b9bfaa5-a304-4b47-9acb-ee518ba220f8)

  There is no problem when launching the instance using --port-id :
  # nova boot --flavor m1.small --image rhel7-new --nic 
port-id=4479d8c5-3c1d-41ab-8994-e577280a9584 --nic 
port-id=ec218051-3819-4af5-8113-96c5d31442cc vm100

  # neutron  port-show 4479d8c5-3c1d-41ab-8994-e577280a9584 -F network_id
  +------------+--------------------------------------+
  | Field      | Value                                |
  +------------+--------------------------------------+
  | network_id | 52fc18b6-397a-45d6-b8db-fb32accd00e5 |
  +------------+--------------------------------------+
  # neutron  port-show ec218051-3819-4af5-8113-96c5d31442cc -F network_id
  +------------+--------------------------------------+
  | Field      | Value                                |
  +------------+--------------------------------------+
  | network_id | 52fc18b6-397a-45d6-b8db-fb32accd00e5 |
  +------------+--------------------------------------+

  
  Version
  ===
  openstack-nova-common-2014.2-2.el7ost.noarch
  openstack-neutron-2014.2-11.el7ost.noarch

  Expected result
  =
  Using either --nic port-id or --nic net-id should work.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1400037/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399252] [NEW] Missing port-create in Dashboard

2014-12-04 Thread Itzik Brown
Public bug reported:

Right now there is no option to create a port using the Dashboard.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1399252

Title:
  Missing port-create in Dashboard

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Right now there is no option to create a port using the Dashboard.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1399252/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1399254] [NEW] Missing the option to launch an instance with an existing port

2014-12-04 Thread Itzik Brown
Public bug reported:

When launching an instance using the Dashboard it's only possible to add 
Virtual interfaces by choosing networks.
The option to launch an instance with an existing port is missing.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1399254

Title:
  Missing the option to launch an instance with an existing port

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When launching an instance using the Dashboard it's only possible to add 
Virtual interfaces by choosing networks.
  The option to launch an instance with an existing port is missing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1399254/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1397675] [NEW] Updating admin_state_up for a port with vnic_type has no effect

2014-11-30 Thread Itzik Brown
Public bug reported:

Updating admin_state_up for a port with binding:vnic_type='direct' has
no effect:


Version
===
RHEL7.0
openstack-neutron-2014.2-11.el7ost

How to reproduce
===

1. Make sure you have connectivity to the instance with the port
attached.

2. Run
#neutron port-update --admin_state_up=False 

3. Check connectivity - there is still connectivity to the Instance.

Expected result
==
When updating the port's admin_state_up to False there should be no
connectivity to the instance, and when updating admin_state_up to True there
should be connectivity to the instance.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: pci-passthrough

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1397675

Title:
  Updating admin_state_up for a port with vnic_type has no effect

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Updating admin_state_up for a port with binding:vnic_type='direct' has
  no effect:

  
  Version
  ===
  RHEL7.0
  openstack-neutron-2014.2-11.el7ost

  How to reproduce
  ===

  1. Make sure you have connectivity to the instance with the port
  attached.

  2. Run
  #neutron port-update --admin_state_up=False 

  3. Check connectivity - there is still connectivity to the Instance.

  Expected result
  ==
  When updating the port's admin_state_up to False there should be no
  connectivity to the instance, and when updating admin_state_up to True there
  should be connectivity to the instance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1397675/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1391827] Re: nova-manage service list should not be allowed for a tenant

2014-11-17 Thread Itzik Brown
Agree.

** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1391827

Title:
  nova-manage service list should not be allowed for a tenant

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  nova-manage service list is an administration command and a tenant should not
  be able to run it.
  When running as a tenant user (role _member_), 'nova-manage service list'
  shows the same output as when running as 'admin'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1391827/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1391827] [NEW] nova-manage service list should not be allowed for a tenant

2014-11-12 Thread Itzik Brown
Public bug reported:

nova-manage service list is an administration command and a tenant should not
be able to run it.
When running as a tenant user (role _member_), 'nova-manage service list'
shows the same output as when running as 'admin'.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1391827

Title:
  nova-manage service list should not be allowed for a tenant

Status in OpenStack Compute (Nova):
  New

Bug description:
  nova-manage service list is an administration command and a tenant should not
  be able to run it.
  When running as a tenant user (role _member_), 'nova-manage service list'
  shows the same output as when running as 'admin'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1391827/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1391816] [NEW] [pci-passthrough] PCI Clear message should be reported when there are no VFs left for allocation

2014-11-12 Thread Itzik Brown
Public bug reported:

When launching an instance with a preconfigured port and there are no
VFs available for allocation, the error message is not clear.

#neutron port-create tenant1-net1 --binding:vnic-type direct
#nova boot --flavor m1.tiny --image  cirros --nic port-id=  vm100

# nova show vm100

(output truncated)
  


  |
| fault| {"message": "PCI device request 
({'requests': 
[InstancePCIRequest(alias_name=None,count=1,is_new=False,request_id=66b02b9b-600b-4c46-9f66-38ceb6cc2742,spec=[{physical_network='physnet2'}])],
 'code': 500}equests)s failed", "code": 500, "created": "2014-11-12T10:10:14Z"} 
|


Expected:
Clear message in the fault entry when issuing 'nova show' or when launching the 
Instance(Better)

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1391816

Title:
  [pci-passthrough] PCI Clear message should be reported when there are
  no VFs left for allocation

Status in OpenStack Compute (Nova):
  New

Bug description:
  When launching an instance with a preconfigured port and there are no
  VFs available for allocation, the error message is not clear.

  #neutron port-create tenant1-net1 --binding:vnic-type direct
  #nova boot --flavor m1.tiny --image  cirros --nic port-id=  vm100

  # nova show vm100

  (output truncated)



|
  | fault| {"message": "PCI device request 
({'requests': 
[InstancePCIRequest(alias_name=None,count=1,is_new=False,request_id=66b02b9b-600b-4c46-9f66-38ceb6cc2742,spec=[{physical_network='physnet2'}])],
 'code': 500}equests)s failed", "code": 500, "created": "2014-11-12T10:10:14Z"} 
|

  
  Expected:
  Clear message in the fault entry when issuing 'nova show' or when launching 
the Instance(Better)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1391816/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1390078] [NEW] 'Internal Server Error' When using the wrong lb_method in lb-pool-update command

2014-11-06 Thread Itzik Brown
Public bug reported:

When running the following command:

# neutron  lb-pool-update pool1 --lb_method dummy

The result is:
Internal Server Error (HTTP 500) (Request-ID: 
req-21a73d3c-dbe8-4782-8203-5e187112980c

Expected result:
Error Message about the wrong value used
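
A minimal sketch of the kind of input validation the expected result calls
for, using the LBaaS v1 lb_method values; this is illustrative only, not
Neutron's actual validation code:

VALID_LB_METHODS = ('ROUND_ROBIN', 'LEAST_CONNECTIONS', 'SOURCE_IP')

def validate_lb_method(value):
    """Reject unknown lb_method values with a clear client-side error."""
    if value not in VALID_LB_METHODS:
        raise ValueError('Invalid lb_method %r; must be one of %s'
                         % (value, ', '.join(VALID_LB_METHODS)))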

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1390078

Title:
  'Internal Server Error' When using the wrong lb_method in lb-pool-
  update command

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When running the following command:

  # neutron  lb-pool-update pool1 --lb_method dummy

  The result is:
  Internal Server Error (HTTP 500) (Request-ID: 
req-21a73d3c-dbe8-4782-8203-5e187112980c

  Expected result:
  Error Message about the wrong value used

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1390078/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1387152] [NEW] Wrong Product ID for Intel NIC in supported_pci_vendor_devs

2014-10-29 Thread Itzik Brown
Public bug reported:

In /etc/neutron/plugins/ml2/ml2_conf_sriov.ini under [sriov_nic] section
supported_pci_vendor_devs = 15b3:1004, 8086:10c9

It should be:
supported_pci_vendor_devs = 15b3:1004, 8086:10ca

8086:10c9 is the Vendor ID:Product ID of the PF. It should be the Vendor
ID:Product ID of the VF.
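
A small illustrative way to confirm which vendor:product pair belongs to the
VF before putting it into supported_pci_vendor_devs, by reading sysfs (the PCI
address below is a placeholder):

def pci_vendor_product(pci_addr):
    """Return the 'vendor:product' pair of a PCI function from sysfs."""
    base = '/sys/bus/pci/devices/%s' % pci_addr
    with open(base + '/vendor') as f:
        vendor = f.read().strip().replace('0x', '')
    with open(base + '/device') as f:
        product = f.read().strip().replace('0x', '')
    return '%s:%s' % (vendor, product)

# Example (hypothetical VF address):
# print(pci_vendor_product('0000:05:10.0'))  # an igb VF should report 8086:10ca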

** Affects: neutron
 Importance: Undecided
 Assignee: Irena Berezovsky (irenab)
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1387152

Title:
  Wrong Product ID for Intel NIC in supported_pci_vendor_devs

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  In /etc/neutron/plugins/ml2/ml2_conf_sriov.ini under [sriov_nic] section
  supported_pci_vendor_devs = 15b3:1004, 8086:10c9

  It should be:
  supported_pci_vendor_devs = 15b3:1004, 8086:10ca

  8086:10c9 is the Vendor ID:Product ID of the PF. It should be the Vendor
  ID:Product ID of the VF.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1387152/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1386660] [NEW] ML2 SR-IOV Mechanism driver configuration option agent_required should be False in ml2_conf_sriov.ini

2014-10-28 Thread Itzik Brown
Public bug reported:

The value of agent_required shown under the [ml2_sriov] section of the sample
ml2_conf_sriov.ini should match the option's default value:
# agent_required = False

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1386660

Title:
  ML2 SR-IOV Mechanism driver configuration option agent_required should
  be False in ml2_conf_sriov.ini

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The value of agent_required shown under the [ml2_sriov] section of the sample
  ml2_conf_sriov.ini should match the option's default value:
  # agent_required = False

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1386660/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1386616] [NEW] LBaaS VIP creation via horizon requires a mandatory "IP address" argument although it's only optional in the cli command

2014-10-28 Thread Itzik Brown
Public bug reported:

LBaaS VIP creation via Horizon requires a mandatory IP address in the
text box "Specify a free IP address from the selected subnet", although
it's only optional in the CLI command.

usage: neutron lb-vip-create [-h] [-f {shell,table,value}] [-c COLUMN]
                             [--max-width ] [--prefix PREFIX]
                             [--request-format {json,xml}]
                             [--tenant-id TENANT_ID] [--address ADDRESS]
                             [--admin-state-down]
                             [--connection-limit CONNECTION_LIMIT]
                             [--description DESCRIPTION] --name NAME
                             --protocol-port PROTOCOL_PORT --protocol
                             {TCP,HTTP,HTTPS} --subnet-id SUBNET
                             POOL

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1386616

Title:
   LBaaS VIP creation via horizon requires a mandatory "IP address"
  argument although it's only optional in the cli command

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  LBaaS VIP creation via Horizon requires a mandatory IP address in the
  text box "Specify a free IP address from the selected subnet", although
  it's only optional in the CLI command.

  usage: neutron lb-vip-create [-h] [-f {shell,table,value}] [-c COLUMN]
                               [--max-width ] [--prefix PREFIX]
                               [--request-format {json,xml}]
                               [--tenant-id TENANT_ID] [--address ADDRESS]
                               [--admin-state-down]
                               [--connection-limit CONNECTION_LIMIT]
                               [--description DESCRIPTION] --name NAME
                               --protocol-port PROTOCOL_PORT --protocol
                               {TCP,HTTP,HTTPS} --subnet-id SUBNET
                               POOL

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1386616/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1386543] [NEW] FWaaS - New blocking rules have no effect on existing traffic

2014-10-28 Thread Itzik Brown
Public bug reported:

When building a firewall with a rule to block specific traffic, the
currently flowing traffic is not blocked.

For example:

Running a ping to an instance and then building a firewall with a rule to block
ICMP to this instance has no effect while the ping command is still
running.
Exiting the command and then pinging the instance again shows the
desired result, i.e. the traffic is blocked.

This is also the case for SSH.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1386543

Title:
  FWaaS - New blocking rules have no effect on existing traffic

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When building a firewall with a rule to block specific traffic, the
  currently flowing traffic is not blocked.

  For example:

  Running a ping to an instance and then building a firewall with a rule to
  block ICMP to this instance has no effect while the ping command is still
  running.
  Exiting the command and then pinging the instance again shows the
  desired result, i.e. the traffic is blocked.

  This is also the case for SSH.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1386543/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1385862] [NEW] LBaaS - Missing option to associate a floating IP to a VIP

2014-10-26 Thread Itzik Brown
Public bug reported:

There is an option to associate a floating IP with an instance but not with a VIP.
This is possible through the CLI by associating the VIP's port with a
floating IP.
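
A minimal sketch of the CLI workaround described above, using
python-neutronclient with LBaaS v1; the credentials and UUIDs are
placeholders, not values from this report:

from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

vip_id = 'VIP_UUID'                # hypothetical VIP id
floatingip_id = 'FLOATINGIP_UUID'  # hypothetical floating IP id

# Look up the Neutron port backing the VIP, then point the floating IP at it.
vip_port_id = neutron.show_vip(vip_id)['vip']['port_id']
neutron.update_floatingip(floatingip_id,
                          {'floatingip': {'port_id': vip_port_id}})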

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1385862

Title:
  LBaaS - Missing option to associate a floating IP to a VIP

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  There is an option to associate a floating IP with an instance but not with a VIP.
  This is possible through the CLI by associating the VIP's port with a
  floating IP.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1385862/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370348] Re: Using macvtap vnic_type is not working with vif_type=hw_veb

2014-09-22 Thread Itzik Brown
libvirt version is 1.2.2

Changed to:
def set_vif_host_backend_hw_veb(conf, net_type, devname, vlan,
                                tapname=None):
    """Populate a LibvirtConfigGuestInterface instance
    with host backend details for an device that supports hardware
    virtual ethernet bridge.
    """

    conf.net_type = net_type
    if net_type == 'direct':
        conf.source_mode = 'passthrough'
        conf.source_dev = pci_utils.get_ifname_by_pci_address(devname)
        conf.driver_name = 'vhost'
    else:
        conf.source_dev = devname
        conf.model = None
        conf.vlan = vlan
    if tapname:
        conf.target_dev = tapname

And it works.
Can I push this fix?

There is also a need to add the VLAN setting on the Neutron side (i.e. in the
Neutron agent).

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1370348

Title:
  Using macvtap vnic_type is not working with vif_type=hw_veb

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  Incomplete

Bug description:
  When trying to boot an instance with a port using vnic_type=macvtap
  and vif_type=hw_veb I get this error in Compute log:

  TRACE nova.compute.manager libvirtError: unsupported configuration:
  an interface of type 'direct' is requesting a vlan tag, but that is
  not supported for this type of connection

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1370348/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370348] [NEW] Using macvtap vnic_type is not working with vif_type=hw_veb

2014-09-16 Thread Itzik Brown
Public bug reported:

When trying to boot an instance with a port using vnic_type=macvtap and
vif_type=hw_veb I get this error in Compute log:

TRACE nova.compute.manager libvirtError: unsupported configuration: an
interface of type 'direct' is requesting a vlan tag, but that is not
supported for this type of connection

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1370348

Title:
  Using macvtap vnic_type is not working with vif_type=hw_veb

Status in OpenStack Compute (Nova):
  New

Bug description:
  When trying to boot an instance with a port using vnic_type=macvtap
  and vif_type=hw_veb I get this error in Compute log:

  TRACE nova.compute.manager libvirtError: unsupported configuration:
  an interface of type 'direct' is requesting a vlan tag, but that is
  not supported for this type of connection

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1370348/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1369877] [NEW] AttributeError: 'NoopFirewallDriver' object has no attribute 'update_security_group_rules'

2014-09-15 Thread Itzik Brown
Public bug reported:

When using NoopFirewallDriver there seems to be a missing method,
'update_security_group_rules':

2014-09-15 16:54:38.167 1069 CRITICAL neutron 
[req-b671b223-1d0c-4545-a9e3-f1d84561762e None] AttributeError: 
'NoopFirewallDriver' object has no attribute 'update_security_group_rules'
2014-09-15 16:54:38.167 1069 TRACE neutron Traceback (most recent call last):
2014-09-15 16:54:38.167 1069 TRACE neutron   File 
"/usr/local/bin/neutron-linuxbridge-agent", line 10, in 
2014-09-15 16:54:38.167 1069 TRACE neutron sys.exit(main())
2014-09-15 16:54:38.167 1069 TRACE neutron   File 
"/opt/stack/neutron/neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent.py",
 line 1041, in main
2014-09-15 16:54:38.167 1069 TRACE neutron agent.daemon_loop()
2014-09-15 16:54:38.167 1069 TRACE neutron   File 
"/opt/stack/neutron/neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent.py",
 line 999, in daemon_loop
2014-09-15 16:54:38.167 1069 TRACE neutron device_info.get('updated', set())
2014-09-15 16:54:38.167 1069 TRACE neutron   File 
"/opt/stack/neutron/neutron/agent/securitygroups_rpc.py", line 312, in 
setup_port_filters
2014-09-15 16:54:38.167 1069 TRACE neutron 
self.prepare_devices_filter(new_devices)
2014-09-15 16:54:38.167 1069 TRACE neutron   File 
"/opt/stack/neutron/neutron/agent/securitygroups_rpc.py", line 207, in 
prepare_devices_filter
2014-09-15 16:54:38.167 1069 TRACE neutron security_groups, 
security_group_member_ips)
2014-09-15 16:54:38.167 1069 TRACE neutron   File 
"/opt/stack/neutron/neutron/agent/securitygroups_rpc.py", line 213, in 
_update_security_group_info
2014-09-15 16:54:38.167 1069 TRACE neutron 
self.firewall.update_security_group_rules(sg_id, sg_rules)
2014-09-15 16:54:38.167 1069 TRACE neutron AttributeError: 'NoopFirewallDriver' 
object has no attribute 'update_security_group_rules'
2014-09-15 16:54:38.167 1069 TRACE neutron
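
One plausible shape of a fix, sketched below: give the no-op driver the
missing hook so the security-group RPC code can call it unconditionally. This
mirrors the traceback above and is not the actual committed patch:

class NoopFirewallDriver(object):
    """Firewall driver that intentionally does nothing (sketch)."""

    def update_security_group_rules(self, sg_id, sg_rules):
        # No-op: security groups are not enforced by this driver.
        pass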

** Affects: neutron
 Importance: Undecided
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1369877

Title:
  AttributeError: 'NoopFirewallDriver' object has no attribute
  'update_security_group_rules'

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  When using NoopFirewallDriver there seems to be a missing method,
  'update_security_group_rules':

  2014-09-15 16:54:38.167 1069 CRITICAL neutron 
[req-b671b223-1d0c-4545-a9e3-f1d84561762e None] AttributeError: 
'NoopFirewallDriver' object has no attribute 'update_security_group_rules'
  2014-09-15 16:54:38.167 1069 TRACE neutron Traceback (most recent call last):
  2014-09-15 16:54:38.167 1069 TRACE neutron   File 
"/usr/local/bin/neutron-linuxbridge-agent", line 10, in 
  2014-09-15 16:54:38.167 1069 TRACE neutron sys.exit(main())
  2014-09-15 16:54:38.167 1069 TRACE neutron   File 
"/opt/stack/neutron/neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent.py",
 line 1041, in main
  2014-09-15 16:54:38.167 1069 TRACE neutron agent.daemon_loop()
  2014-09-15 16:54:38.167 1069 TRACE neutron   File 
"/opt/stack/neutron/neutron/plugins/linuxbridge/agent/linuxbridge_neutron_agent.py",
 line 999, in daemon_loop
  2014-09-15 16:54:38.167 1069 TRACE neutron device_info.get('updated', 
set())
  2014-09-15 16:54:38.167 1069 TRACE neutron   File 
"/opt/stack/neutron/neutron/agent/securitygroups_rpc.py", line 312, in 
setup_port_filters
  2014-09-15 16:54:38.167 1069 TRACE neutron 
self.prepare_devices_filter(new_devices)
  2014-09-15 16:54:38.167 1069 TRACE neutron   File 
"/opt/stack/neutron/neutron/agent/securitygroups_rpc.py", line 207, in 
prepare_devices_filter
  2014-09-15 16:54:38.167 1069 TRACE neutron security_groups, 
security_group_member_ips)
  2014-09-15 16:54:38.167 1069 TRACE neutron   File 
"/opt/stack/neutron/neutron/agent/securitygroups_rpc.py", line 213, in 
_update_security_group_info
  2014-09-15 16:54:38.167 1069 TRACE neutron 
self.firewall.update_security_group_rules(sg_id, sg_rules)
  2014-09-15 16:54:38.167 1069 TRACE neutron AttributeError: 
'NoopFirewallDriver' object has no attribute 'update_security_group_rules'
  2014-09-15 16:54:38.167 1069 TRACE neutron

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1369877/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1364411] [NEW] Missing Mellanox VIF Driver for SR-IOV

2014-09-02 Thread Itzik Brown
Public bug reported:

After the vif_driver config option was deprecated, there is a need to include
a VIF driver for SR-IOV to enable VMs to use SR-IOV to connect to an
InfiniBand fabric.
Until now there was an out-of-tree VIF driver.

There is an effort to include SR-IOV networking in Juno, but it addresses
Ethernet and not InfiniBand.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1364411

Title:
  Missing Mellanox VIF Driver for SR-IOV

Status in OpenStack Compute (Nova):
  New

Bug description:
  After the vif_driver config option was deprecated, there is a need to include
  a VIF driver for SR-IOV to enable VMs to use SR-IOV to connect to an
  InfiniBand fabric.
  Until now there was an out-of-tree VIF driver.

  There is an effort to include SR-IOV networking in Juno, but it addresses
  Ethernet and not InfiniBand.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1364411/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1304872] Re: use vif_details to get physical_network for mellanox vif driver to support ML2 plugin

2014-04-09 Thread Itzik Brown
** Changed in: nova
   Status: New => Confirmed

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1304872

Title:
  use vif_details to get physical_network for mellanox vif driver to
  support ML2 plugin

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  The Mellanox networking solution requires knowledge of the physical network
  during VIF plugging.
  The vif_details dictionary on port:binding is filled with physical_network by
  the Neutron plugin (ML2 MechanismDriver). The VIF driver should use
  vif_details to get the physical_network info.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1304872/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1247537] Re: Missing pyzmq for Mellanox Neutron Plugin

2013-12-04 Thread Itzik Brown
Resolved by:
https://review.openstack.org/#/c/53609/


** Changed in: neutron
   Status: Incomplete => Invalid

** Changed in: neutron
   Status: Invalid => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1247537

Title:
  Missing pyzmq for Mellanox Neutron Plugin

Status in OpenStack Neutron (virtual network service):
  Fix Committed

Bug description:
  Mellanox Neutron plugin requires pyzmq.
  There is a need to add it to test-requirements.txt.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1247537/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp