Hi Irena and Murali,

Thanks a lot for your reply!

Here is the output from the pci_devices table of the nova DB:

select * from pci_devices;
+---------------------+------------+------------+---------+----+-----------------+--------------+------------+-----------+----------+------------------+-----------------+-----------+-----------------------------------+---------------+------------+
| created_at          | updated_at | deleted_at | deleted | id | compute_node_id | address      | product_id | vendor_id | dev_type | dev_id           | label           | status    | extra_info                        | instance_uuid | request_id |
+---------------------+------------+------------+---------+----+-----------------+--------------+------------+-----------+----------+------------------+-----------------+-----------+-----------------------------------+---------------+------------+
| 2014-12-15 12:10:52 | NULL       | NULL       |       0 |  1 |               1 | 0000:03:10.0 | 10ed       | 8086      | type-VF  | pci_0000_03_10_0 | label_8086_10ed | available | {"phys_function": "0000:03:00.0"} | NULL          | NULL       |
| 2014-12-15 12:10:52 | NULL       | NULL       |       0 |  2 |               1 | 0000:03:10.2 | 10ed       | 8086      | type-VF  | pci_0000_03_10_2 | label_8086_10ed | available | {"phys_function": "0000:03:00.0"} | NULL          | NULL       |
| 2014-12-15 12:10:52 | NULL       | NULL       |       0 |  3 |               1 | 0000:03:10.4 | 10ed       | 8086      | type-VF  | pci_0000_03_10_4 | label_8086_10ed | available | {"phys_function": "0000:03:00.0"} | NULL          | NULL       |
| 2014-12-15 12:10:52 | NULL       | NULL       |       0 |  4 |               1 | 0000:03:10.6 | 10ed       | 8086      | type-VF  | pci_0000_03_10_6 | label_8086_10ed | available | {"phys_function": "0000:03:00.0"} | NULL          | NULL       |
| 2014-12-15 12:10:53 | NULL       | NULL       |       0 |  5 |               1 | 0000:03:10.1 | 10ed       | 8086      | type-VF  | pci_0000_03_10_1 | label_8086_10ed | available | {"phys_function": "0000:03:00.1"} | NULL          | NULL       |
| 2014-12-15 12:10:53 | NULL       | NULL       |       0 |  6 |               1 | 0000:03:10.3 | 10ed       | 8086      | type-VF  | pci_0000_03_10_3 | label_8086_10ed | available | {"phys_function": "0000:03:00.1"} | NULL          | NULL       |
| 2014-12-15 12:10:53 | NULL       | NULL       |       0 |  7 |               1 | 0000:03:10.5 | 10ed       | 8086      | type-VF  | pci_0000_03_10_5 | label_8086_10ed | available | {"phys_function": "0000:03:00.1"} | NULL          | NULL       |
| 2014-12-15 12:10:53 | NULL       | NULL       |       0 |  8 |               1 | 0000:03:10.7 | 10ed       | 8086      | type-VF  | pci_0000_03_10_7 | label_8086_10ed | available | {"phys_function": "0000:03:00.1"} | NULL          | NULL       |
+---------------------+------------+------------+---------+----+-----------------+--------------+------------+-----------+----------+------------------+-----------------+-----------+-----------------------------------+---------------+------------+
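To summarize those rows, here is a quick illustrative sketch (plain Python, with the data copied verbatim from the pci_devices rows above) showing that all eight VFs are available and split evenly across the two physical functions:

```python
import json
from collections import Counter

# (vf_address, extra_info) pairs copied from the pci_devices rows above
rows = [
    ("0000:03:10.0", '{"phys_function": "0000:03:00.0"}'),
    ("0000:03:10.2", '{"phys_function": "0000:03:00.0"}'),
    ("0000:03:10.4", '{"phys_function": "0000:03:00.0"}'),
    ("0000:03:10.6", '{"phys_function": "0000:03:00.0"}'),
    ("0000:03:10.1", '{"phys_function": "0000:03:00.1"}'),
    ("0000:03:10.3", '{"phys_function": "0000:03:00.1"}'),
    ("0000:03:10.5", '{"phys_function": "0000:03:00.1"}'),
    ("0000:03:10.7", '{"phys_function": "0000:03:00.1"}'),
]

# count VFs per parent physical function (from the extra_info column)
vfs_per_pf = Counter(json.loads(extra)["phys_function"] for _, extra in rows)
print(dict(vfs_per_pf))  # -> {'0000:03:00.0': 4, '0000:03:00.1': 4}
```

So four VFs hang off each PF, which is consistent with the pci_stats count of 8 below.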

The output of "select hypervisor_hostname,pci_stats from compute_nodes;" is:
+---------------------+-------------------------------------------------------------------------------------------+
| hypervisor_hostname | pci_stats                                                                                 |
+---------------------+-------------------------------------------------------------------------------------------+
| blade08             | [{"count": 8, "vendor_id": "8086", "physical_network": "ext-net", "product_id": "10ed"}] |
+---------------------+-------------------------------------------------------------------------------------------+

Moreover, I have set agent_required = True in
/etc/neutron/plugins/ml2/ml2_conf_sriov.ini, but still no SR-IOV agent
is running.
# Defines configuration options for SRIOV NIC Switch MechanismDriver
# and Agent

[ml2_sriov]
# (ListOpt) Comma-separated list of
# supported Vendor PCI Devices, in format vendor_id:product_id
#
#supported_pci_vendor_devs = 8086:10ca, 8086:10ed
supported_pci_vendor_devs = 8086:10ed
# Example: supported_pci_vendor_devs = 15b3:1004
#
# (BoolOpt) Requires running SRIOV neutron agent for port binding
agent_required = True

[sriov_nic]
# (ListOpt) Comma-separated list of <physical_network>:<network_device>
# tuples mapping physical network names to the agent's node-specific
# physical network device interfaces of SR-IOV physical function to be used
# for VLAN networks. All physical networks listed in network_vlan_ranges on
# the server should have mappings to appropriate interfaces on each agent.
#
physical_device_mappings = ext-net:br-ex
# Example: physical_device_mappings = physnet1:eth1
#
# (ListOpt) Comma-separated list of <network_device>:<vfs_to_exclude>
# tuples, mapping network_device to the agent's node-specific list of
# virtual functions that should not be used for virtual networking.
# vfs_to_exclude is a semicolon-separated list of virtual
# functions to exclude from network_device. The network_device in the
# mapping should appear in the physical_device_mappings list.
# exclude_devices =
# Example: exclude_devices = eth1:0000:07:00.2; 0000:07:00.3
========================================================================================
pci_passthrough_whitelist from /etc/nova/nova.conf:
pci_passthrough_whitelist = {"address":"*:03:10.*","physical_network":"ext-net"}
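As a sanity check that this whitelist line covers the VFs from the pci_devices table and carries the same physical_network label as pci_stats, here is a rough sketch (fnmatch is only an approximation of nova's real PCI address matching, which parses the domain:bus:slot.function fields; it is used here purely for illustration):

```python
import json
from fnmatch import fnmatch

# values copied verbatim from nova.conf and the compute_nodes pci_stats column
whitelist = json.loads('{"address":"*:03:10.*","physical_network":"ext-net"}')
pci_stats = json.loads(
    '[{"count": 8, "vendor_id": "8086", '
    '"physical_network": "ext-net", "product_id": "10ed"}]'
)

# the eight VF addresses reported by lspci and the pci_devices table
vf_addresses = ["0000:03:10.%d" % fn for fn in range(8)]

# every VF address should match the whitelist glob...
assert all(fnmatch(addr, whitelist["address"]) for addr in vf_addresses)
# ...and the physical_network labels must agree, otherwise the scheduler's
# PCI filter will never find a matching device pool
assert whitelist["physical_network"] == pci_stats[0]["physical_network"]
print("labels match:", whitelist["physical_network"])  # -> labels match: ext-net
```

In my case both come out as ext-net, so the label matching itself looks consistent.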
====================================================
/etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2]
# (ListOpt) List of network type driver entrypoints to be loaded from
# the neutron.ml2.type_drivers namespace.
#
#type_drivers = local,flat,vlan,gre,vxlan
#Example: type_drivers = flat,vlan,gre,vxlan
#type_drivers = flat,gre, vlan
type_drivers = flat,vlan

# (ListOpt) Ordered list of network_types to allocate as tenant
# networks. The default value 'local' is useful for single-box testing
# but provides no connectivity between hosts.
#
# tenant_network_types = local
# Example: tenant_network_types = vlan,gre,vxlan
#tenant_network_types = gre, vlan
tenant_network_types = vlan

# (ListOpt) Ordered list of networking mechanism driver entrypoints
# to be loaded from the neutron.ml2.mechanism_drivers namespace.
mechanism_drivers = openvswitch, sriovnicswitch
# Example: mechanism_drivers = openvswitch,mlnx
# Example: mechanism_drivers = arista
# Example: mechanism_drivers = cisco,logger
# Example: mechanism_drivers = openvswitch,brocade
# Example: mechanism_drivers = linuxbridge,brocade

# (ListOpt) Ordered list of extension driver entrypoints
# to be loaded from the neutron.ml2.extension_drivers namespace.
# extension_drivers =
# Example: extension_drivers = anewextensiondriver

[ml2_type_flat]
# (ListOpt) List of physical_network names with which flat networks
# can be created. Use * to allow flat networks with arbitrary
# physical_network names.
#
flat_networks = ext-net
# Example:flat_networks = physnet1,physnet2
# Example:flat_networks = *

[ml2_type_vlan]
# (ListOpt) List of <physical_network>[:<vlan_min>:<vlan_max>] tuples
# specifying physical_network names usable for VLAN provider and
# tenant networks, as well as ranges of VLAN tags on each
# physical_network available for allocation as tenant networks.
network_vlan_ranges = ext-net:2:100
# Example: network_vlan_ranges = physnet1:1000:2999,physnet2

[ml2_type_gre]
# (ListOpt) Comma-separated list of <tun_min>:<tun_max> tuples enumerating
# ranges of GRE tunnel IDs that are available for tenant network allocation.
#tunnel_id_ranges = 1:1000

[ml2_type_vxlan]
# (ListOpt) Comma-separated list of <vni_min>:<vni_max> tuples enumerating
# ranges of VXLAN VNI IDs that are available for tenant network allocation.
#
# vni_ranges =

# (StrOpt) Multicast group for the VXLAN interface. When configured, will
# enable sending all broadcast traffic to this multicast group. When left
# unconfigured, will disable multicast VXLAN mode.
#
# vxlan_group =
# Example: vxlan_group = 239.1.1.1

[securitygroup]
# Controls if neutron security group is enabled or not.
# It should be false when you use nova security group.
enable_security_group = True

# Use ipset to speed-up the iptables security groups. Enabling ipset support
# requires that ipset is installed on L2 agent node.
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[ovs]
local_ip = controller
enable_tunneling = True
bridge_mappings = external:br-ex

[agent]
tunnel_types = vlan

[ml2_sriov]
agent_required = True

Please tell me what is wrong in there, and also what exactly "physnet1"
should be. Thanks again for all your help and suggestions.

Regards,

On Tue, Dec 16, 2014 at 10:42 AM, Irena Berezovsky <ire...@mellanox.com>
wrote:
>
>  Hi David,
>
> Your error is not related to the agent.
>
> I would suggest checking:
>
> 1.        nova.conf at your compute node for pci whitelist configuration
>
> 2.       Neutron server configuration for correct physical_network label
> matching the label in pci whitelist
>
> 3.       Nova DB tables containing PCI devices entries:
>
> "#echo 'use nova;select hypervisor_hostname,pci_stats from compute_nodes;'
> | mysql -u root"
>
>  You should not run the SR-IOV agent in your setup. The SR-IOV agent is
> optional and currently does not add value if you use an Intel NIC.
>
>
>
>
>
> Regards,
>
> Irena
>
> *From:* david jhon [mailto:djhon9...@gmail.com]
> *Sent:* Tuesday, December 16, 2014 5:54 AM
> *To:* Murali B
> *Cc:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] SRIOV-error
>
>
>
> Just to be more clear, the command "lspci | grep -i Ethernet" gives the
> following output:
>
> 01:00.0 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port
> Backplane Connection (rev 01)
> 01:00.1 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port
> Backplane Connection (rev 01)
> 03:00.0 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port
> Backplane Connection (rev 01)
> 03:00.1 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port
> Backplane Connection (rev 01)
> 03:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller
> Virtual Function (rev 01)
> 03:10.1 Ethernet controller: Intel Corporation 82599 Ethernet Controller
> Virtual Function (rev 01)
> 03:10.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller
> Virtual Function (rev 01)
> 03:10.3 Ethernet controller: Intel Corporation 82599 Ethernet Controller
> Virtual Function (rev 01)
> 03:10.4 Ethernet controller: Intel Corporation 82599 Ethernet Controller
> Virtual Function (rev 01)
> 03:10.5 Ethernet controller: Intel Corporation 82599 Ethernet Controller
> Virtual Function (rev 01)
> 03:10.6 Ethernet controller: Intel Corporation 82599 Ethernet Controller
> Virtual Function (rev 01)
> 03:10.7 Ethernet controller: Intel Corporation 82599 Ethernet Controller
> Virtual Function (rev 01)
> 04:00.0 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port
> Backplane Connection (rev 01)
> 04:00.1 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port
> Backplane Connection (rev 01)
> 04:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller
> Virtual Function (rev 01)
> 04:10.1 Ethernet controller: Intel Corporation 82599 Ethernet Controller
> Virtual Function (rev 01)
> 04:10.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller
> Virtual Function (rev 01)
> 04:10.3 Ethernet controller: Intel Corporation 82599 Ethernet Controller
> Virtual Function (rev 01)
> 04:10.4 Ethernet controller: Intel Corporation 82599 Ethernet Controller
> Virtual Function (rev 01)
> 04:10.5 Ethernet controller: Intel Corporation 82599 Ethernet Controller
> Virtual Function (rev 01)
> 04:10.6 Ethernet controller: Intel Corporation 82599 Ethernet Controller
> Virtual Function (rev 01)
> 04:10.7 Ethernet controller: Intel Corporation 82599 Ethernet Controller
> Virtual Function (rev 01)
>
> How can I make the SR-IOV agent run and fix this bug?
>
>
>
>
>
> On Tue, Dec 16, 2014 at 8:36 AM, david jhon <djhon9...@gmail.com> wrote:
>
> Hi Murali,
>
> Thanks for your response. I did the same; it apparently resolved the
> errors, but 1) neutron agent-list shows no agent for SR-IOV, and 2) the
> neutron port is created successfully, but creating a VM fails in
> scheduling as follows:
>
> result from neutron agent-list:
>
> +--------------------------------------+--------------------+---------+-------+----------------+---------------------------+
> | id                                   | agent_type         | host    | alive | admin_state_up | binary                    |
> +--------------------------------------+--------------------+---------+-------+----------------+---------------------------+
> | 2acc7044-e552-4601-b00b-00ba591b453f | Open vSwitch agent | blade08 | xxx   | True           | neutron-openvswitch-agent |
> | 595d07c6-120e-42ea-a950-6c77a6455f10 | Metadata agent     | blade08 | :-)   | True           | neutron-metadata-agent    |
> | a1f253a8-e02e-4498-8609-4e265285534b | DHCP agent         | blade08 | :-)   | True           | neutron-dhcp-agent        |
> | d46b29d8-4b5f-4838-bf25-b7925cb3e3a7 | L3 agent           | blade08 | :-)   | True           | neutron-l3-agent          |
> +--------------------------------------+--------------------+---------+-------+----------------+---------------------------+
>
> 2014-12-15 19:30:44.546 40249 ERROR oslo.messaging.rpc.dispatcher
> [req-c7741cff-a7d8-422f-b605-6a1d976aeb09 ] Exception during message
> handling: PCI $
> 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher
> Traceback (most recent call last):
> 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher   File
> "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line
> 13$
> 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher
> incoming.message))
> 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher   File
> "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line
> 17$
> 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher
> return self._do_dispatch(endpoint, method, ctxt, args)
> 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher   File
> "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line
> 12$
> 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher
> result = getattr(endpoint, method)(ctxt, **new_args)
> 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher   File
> "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/server.py", line 139,
> i$
> 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher
> return func(*args, **kwargs)
> 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher   File
> "/usr/lib/python2.7/dist-packages/nova/scheduler/manager.py", line 175, in
> s$
> 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher
> filter_properties)
> 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher   File
> "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", line
> $
> 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher
> filter_properties)
> 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher   File
> "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", line
> $
> 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher
> chosen_host.obj.consume_from_instance(instance_properties)
> 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher   File
> "/usr/lib/python2.7/dist-packages/nova/scheduler/host_manager.py", line
> 246,$
> 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher
> self.pci_stats.apply_requests(pci_requests.requests)
> 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher   File
> "/usr/lib/python2.7/dist-packages/nova/pci/pci_stats.py", line 209, in
> apply$
> 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher
> raise exception.PciDeviceRequestFailed(requests=requests)
> 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher
> PciDeviceRequestFailed: PCI device request ({'requests':
> [InstancePCIRequest(alias_$
> 2014-12-15 19:30:44.546 40249 TRACE oslo.messaging.rpc.dispatcher.
>
> Moreover, no /var/log/sriov-agent.log file exists. Please help me to fix
> this issue. Thanks everyone!
>
>
>
> On Mon, Dec 15, 2014 at 5:18 PM, Murali B <mbi...@gmail.com> wrote:
>
> Hi David,
>
>
>
> Please add as per the Irena suggestion
>
>
>
> FYI: refer the below configuration
>
>
>
> http://pastebin.com/DGmW7ZEg
>
>
>
>
>
> Thanks
>
> -Murali
>
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
