> Hi,
> 
> On Tue, Jun 19, 2018 at 12:27 AM Ilya Maximets <i.maxim...@samsung.com> wrote:
> 
> > Hi,
> > According to your log, your NIC has limited size of tx queues:
> >
> >   2018-06-19T04:34:46.106Z|00089|dpdk|ERR|PMD: Unsupported size of TX queue (max size: 1024)
> >
> > This means that you have to configure 'n_txq_desc' <= 1024 in order to
> > configure your NIC properly. OVS uses 2048 by default, and this causes
> > the device configuration failure.
> >
> > Try this:
> >
> >     ovs-vsctl set interface eth1 options:n_txq_desc=1024
> >
> > It is also likely that you will have to configure the same value for
> > 'n_rxq_desc'.
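> >
> > For example, a sketch assuming the RX side has the same 1024 limit
> > (check the maximum your NIC actually reports first):
> >
> >     ovs-vsctl set interface eth1 options:n_rxq_desc=1024 options:n_txq_desc=1024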
> >
> >
> Thank you. It was indeed a queue-descriptor size issue. Setting the value
> to 1024 fixed it.
> 
> I am now trying the 'pdump/pcap' packet-capture features in DPDK with
> OVS-DPDK, and currently they don't seem to work. I followed this guide:
> 
> https://software.intel.com/en-us/articles/dpdk-pdump-in-open-vswitch-with-dpdk
> 
> OVS is linked against DPDK 17.11.2, which has the pdump library built in.
> OVS has created the pdump server socket file:
> 
> ls -l /usr/local/var/run/openvswitch/
> total 8
> ...
> srwxr-x--- 1 root root 0 Jun 19 19:26 pdump_server_socket
> srwxr-x--- 1 root root 0 Jun 19 19:26 vhost-user-1
> srwxr-x--- 1 root root 0 Jun 19 19:26 vhost-user-2
> 
> ./x86_64-native-linuxapp-gcc/build/app/pdump/dpdk-pdump -- \
>     --pdump port=1,queue=*,rx-dev=/tmp/pkts.pcap \
>     --server-socket-path=/usr/local/var/run/openvswitch
> EAL: Detected 8 lcore(s)
> PANIC in rte_eal_config_attach():
> Cannot open '/var/run/.rte_config' for rte_mem_config
> 6: [./x86_64-native-linuxapp-gcc/build/app/pdump/dpdk-pdump(_start+0x29) [0x4472e9]]
> 5: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0) [0x7ff0ae7ba830]]
> 4: [./x86_64-native-linuxapp-gcc/build/app/pdump/dpdk-pdump(main+0x155) [0x44aa76]]
> 3: [./x86_64-native-linuxapp-gcc/build/app/pdump/dpdk-pdump(rte_eal_init+0xc7d) [0x49ef0d]]
> 2: [./x86_64-native-linuxapp-gcc/build/app/pdump/dpdk-pdump(__rte_panic+0xc3) [0x43ebb3]]
> 1: [./x86_64-native-linuxapp-gcc/build/app/pdump/dpdk-pdump(rte_dump_stack+0x2b) [0x4a573b]]
> Aborted
> 
> Anything I am missing?

Hi,

I can't reproduce your pdump issue; the same command works fine for me.
Does /var/run/.rte_config exist, and does the user executing the pdump app
have sufficient permissions to access that file?
http://dpdk.readthedocs.io/en/v17.11/prog_guide/multi_proc_support.html#running-multiple-independent-dpdk-applications
may also help.
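
One more thing worth checking, though this is only a guess based on the
other_config output further down in the thread: OVS was started with
dpdk-extra="--file-prefix=host", so the primary process writes its runtime
config to /var/run/.host_config rather than the default /var/run/.rte_config.
If so, passing the same prefix to the pdump app as an EAL option (before the
'--' separator) may let it attach:

    # match the primary process's --file-prefix so the secondary can find
    # the shared runtime config
    ./x86_64-native-linuxapp-gcc/build/app/pdump/dpdk-pdump --file-prefix=host -- \
        --pdump port=1,queue=*,rx-dev=/tmp/pkts.pcap \
        --server-socket-path=/usr/local/var/run/openvswitch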

Thanks,
Ciara

> 
> Thanks.
> 
> 
> > Note that OVS manages TX queues itself and will try to allocate a
> > separate TX queue for each PMD thread, plus one for non-PMD threads. So
> > it's effectively impossible to force it to configure only one TX queue
> > when the HW supports multiqueue.
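> >
> > As an illustration (a sketch of the knob involved, not a way around the
> > behaviour above): the number of PMD threads, and hence the number of TX
> > queues OVS asks for, follows the PMD CPU mask. Pinning PMDs to a single
> > core, for example, yields 1 PMD queue + 1 non-PMD queue:
> >
> >     ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x1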
> >
> > > Hi,
> > >
> > > I am using the above configuration on my testbed, and when I try to add
> > > a port that is bound to igb_uio, I see the following errors from the Tx
> > > queue configuration. I just want to use a single Tx and a single Rx
> > > queue for my testing. I looked at Documentation/intro/install/dpdk.rst;
> > > I see only "DPDK Physical Port Rx Queues" and nothing for Tx queues.
> > > Kindly let me know how I can use single tx/rx queues, and if I have to
> > > use multiple Tx queues, what configuration changes I need to make.
> > >
> > > ============ovs logs======================
> > > 2018-06-19T04:33:23.720Z|00075|bridge|INFO|ovs-vswitchd (Open vSwitch) 2.9.90
> > > 2018-06-19T04:33:32.735Z|00076|memory|INFO|127688 kB peak resident set size after 10.1 seconds
> > > 2018-06-19T04:33:32.735Z|00077|memory|INFO|handlers:5 ports:1 revalidators:3 rules:5
> > > 2018-06-19T04:33:40.857Z|00078|rconn|INFO|br0<->unix#0: connected
> > > 2018-06-19T04:33:40.858Z|00079|rconn|INFO|br0<->unix#1: connected
> > > 2018-06-19T04:33:40.859Z|00080|rconn|INFO|br0<->unix#2: connected
> > > 2018-06-19T04:34:46.094Z|00081|dpdk|INFO|EAL: PCI device 0000:00:06.0 on NUMA socket 0
> > > 2018-06-19T04:34:46.094Z|00082|dpdk|INFO|EAL:   probe driver: 1d0f:ec20 net_ena
> > > 2018-06-19T04:34:46.095Z|00083|dpdk|INFO|PMD: eth_ena_dev_init(): Initializing 0:0:6.0
> > > 2018-06-19T04:34:46.095Z|00084|netdev_dpdk|INFO|Device '0000:00:06.0' attached to DPDK
> > > 2018-06-19T04:34:46.099Z|00085|dpif_netdev|INFO|PMD thread on numa_id: 0, core id:  0 created.
> > > 2018-06-19T04:34:46.099Z|00086|dpif_netdev|INFO|There are 1 pmd threads on numa node 0
> > > 2018-06-19T04:34:46.105Z|00087|netdev_dpdk|WARN|Rx checksum offload is not supported on port 0
> > > 2018-06-19T04:34:46.105Z|00088|dpdk|INFO|PMD: Set MTU: 1500
> > > 2018-06-19T04:34:46.106Z|00089|dpdk|ERR|PMD: Unsupported size of TX queue (max size: 1024)
> > > 2018-06-19T04:34:46.106Z|00090|netdev_dpdk|INFO|Interface eth1 unable to setup txq(0): Invalid argument
> > > 2018-06-19T04:34:46.106Z|00091|netdev_dpdk|ERR|Interface eth1(rxq:1 txq:2 lsc interrupt mode:false) configure error: Invalid argument
> > > 2018-06-19T04:34:46.106Z|00092|dpif_netdev|ERR|Failed to set interface eth1 new configuration
> > > 2018-06-19T04:34:46.106Z|00093|bridge|WARN|could not add network device eth1 to ofproto (No such device)
> > >
> > >
> > > ethtool -l eth1
> > > Channel parameters for eth1:
> > > Pre-set maximums:
> > > RX:        128
> > > TX:        128
> > > Other:        0
> > > Combined:    0
> > > Current hardware settings:
> > > RX:        8
> > > TX:        8
> > > Other:        0
> > > Combined:    0
> > >
> > > ovs-vsctl get Open_vSwitch . other_config
> > > {dpdk-extra="--file-prefix=host", dpdk-hugepage-dir="/dev/hugepages_1G", dpdk-init="true"}
> > >
> > > ovs-vswitchd --version
> > > ovs-vswitchd (Open vSwitch) 2.9.90
> > > DPDK 17.11.2
> > >
> > > ovs-vsctl get Open_vSwitch . dpdk_version
> > > "DPDK 17.11.2"
> > >
> > > ovs-vsctl get Open_vSwitch . dpdk_initialized
> > > true
> > >
> > > Thanks.
> >
> >
_______________________________________________
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev
