[dpdk-dev] Hotplug

2015-10-07 Thread Srikanth Akula
Hi Tetsuya,

Thank you for your inputs.

I have thought about this API, but it looks like it takes the interface name
as an argument (which is a unique name derived from the rte_pci_dev instance),
whereas I am looking to check whether the device is attached based on its PCI
address.

But I am going to test this approach as well.
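
For what it is worth, a rough, untested sketch of how the two ideas could be
combined: build the name from an rte_pci_addr in the format Tetsuya describes
below and pass it to rte_eth_dev_allocated(). The helper name, the buffer
size, and the name format itself are assumptions to be checked against the
DPDK version in use, not something confirmed in this thread.

/* Hypothetical helper, for illustration only: returns non-zero if an ethdev
 * already exists for the given PCI address. Assumes ports probed from PCI
 * are named "bus:devid.function", as mentioned below. */
#include <stdio.h>
#include <rte_pci.h>
#include <rte_ethdev.h>

static int
pci_addr_is_attached(const struct rte_pci_addr *addr)
{
	char name[32];

	snprintf(name, sizeof(name), "%d:%d.%d",
		 addr->bus, addr->devid, addr->function);

	return rte_eth_dev_allocated(name) != NULL;
}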

Regards,
Srikanth

On Wed, Oct 7, 2015 at 4:45 PM, Tetsuya Mukawa  wrote:

> On 2015/10/07 22:16, Srikanth Akula wrote:
> > Thank you for the inputs.
> >
> > I was able to solve the problem of device notification from my control
> > plane.
> >
> > I would like to know if there is any way to check whether a PCI device is
> > already attached before we try to attach it (if the device is already
> > attached, the PCI probe will result in an error). I want to verify
> > beforehand whether the device is already attached or not.
>
> Hi Srikanth,
>
> I guess the API below may help you.
> (Unfortunately, I am out of the office now, so I haven't checked it, but I
> guess it works.)
>
>  - struct rte_eth_dev *rte_eth_dev_allocated(const char *name)
>
> If a non-NULL value is returned, the device has already been attached.
> If you want to check a physical NIC, the 'name' parameter above will look
> like the following:
>
> snprintf(name, size, "%d:%d.%d",
>          pci_dev->addr.bus, pci_dev->addr.devid,
>          pci_dev->addr.function);
>
> Thanks,
> Tetsuya
>
> >
> > I came up with a small API which can be used to check whether a PCI device
> > is already bound to any driver.
> >
> > +int
> > +rte_eal_pci_is_attached(const char *devargs)
> > +{
> > +	struct rte_pci_device *dev = NULL;
> > +	struct rte_pci_addr addr;
> > +
> > +	memset(&addr, 0, sizeof(struct rte_pci_addr));
> > +
> > +	if (eal_parse_pci_DomBDF(devargs, &addr) == 0) {
> > +		TAILQ_FOREACH(dev, &pci_device_list, next) {
> > +			if (!rte_eal_compare_pci_addr(&dev->addr, &addr)) {
> > +				if (dev->driver) {
> > +					/*pci_dump_one_device(stdout,dev);*/
> > +					RTE_LOG(WARNING, EAL, "Requested device " PCI_PRI_FMT
> > +						" cannot be used\n", dev->addr.domain, dev->addr.bus,
> > +						dev->addr.devid, dev->addr.function);
> > +					return -1;
> > +				}
> > +			}
> > +		}
> > +	}
> > +	return 0;
> > +}
> > +
> >
> > Could you please let me know whether it would be good to have such an API.
> >
> > Regards,
> > _Srikanth_
> >
> >
> > On Mon, Sep 28, 2015 at 9:44 PM, Stephen Hemminger <
> > stephen at networkplumber.org> wrote:
> >
> >> On Mon, 28 Sep 2015 21:12:50 -0700
> >> Srikanth Akula  wrote:
> >>
> >>> Hello,
> >>>
> >>> I am trying to write an application based on the DPDK port hotplug
> >>> feature. My requirement is to get an event when a new PCI device gets
> >>> added to the system on the fly.
> >>>
> >>> Do we have any built-in mechanism in DPDK (UIO/e1000/VFIO drivers) that I
> >>> can use to get notifications when a new device gets added? I know of the
> >>> alternatives such as inotify, etc.
> >>>
> >>> But I am more interested in getting equivalent support in the DPDK
> >>> drivers.
> >>>
> >>> Please let me know.
> >>>
> >>> Srikanth
> >> Implementing hotplug requires integration with the OS more than any
> >> additional DPDK support. What the Brocade vRouter does is leverage the
> >> existing Linux udev infrastructure to send a message to the router
> >> application, which then initializes and sets up the new hardware. Most of
> >> the DPDK changes are already upstream and involve being able to
> >> dynamically add ports on the fly.
> >>
> >>
>
>


[dpdk-dev] Hotplug

2015-10-07 Thread Srikanth Akula
Thank you for the inputs.

I was able to solve the problem of device notification from my control
plane.

I would like to know if there is any way to check whether a PCI device is
already attached before we try to attach it (if the device is already
attached, the PCI probe will result in an error). I want to verify
beforehand whether the device is already attached or not.

I came up with a small API which can be used to check whether a PCI device is
already bound to any driver.

+int
+rte_eal_pci_is_attached(const char *devargs)
+{
+	struct rte_pci_device *dev = NULL;
+	struct rte_pci_addr addr;
+
+	memset(&addr, 0, sizeof(struct rte_pci_addr));
+
+	if (eal_parse_pci_DomBDF(devargs, &addr) == 0) {
+		TAILQ_FOREACH(dev, &pci_device_list, next) {
+			if (!rte_eal_compare_pci_addr(&dev->addr, &addr)) {
+				if (dev->driver) {
+					/*pci_dump_one_device(stdout,dev);*/
+					RTE_LOG(WARNING, EAL, "Requested device " PCI_PRI_FMT
+						" cannot be used\n", dev->addr.domain, dev->addr.bus,
+						dev->addr.devid, dev->addr.function);
+					return -1;
+				}
+			}
+		}
+	}
+	return 0;
+}
+
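
For context, a rough sketch (untested, not from the original mail) of how an
application might use such a check together with the hotplug attach API of
this DPDK generation; the devargs string and the log type are just examples.

/* Illustrative usage only. Assumes the proposed rte_eal_pci_is_attached()
 * above were exported, and uses rte_eth_dev_attach() for the actual attach. */
#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_log.h>

int rte_eal_pci_is_attached(const char *devargs);  /* proposed API above */

static int
attach_if_needed(const char *pci_devargs)  /* e.g. "0000:02:00.0" */
{
	uint8_t port_id;

	if (rte_eal_pci_is_attached(pci_devargs) != 0) {
		RTE_LOG(INFO, USER1, "%s is already attached, skipping\n",
			pci_devargs);
		return 0;
	}

	return rte_eth_dev_attach(pci_devargs, &port_id);
}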

Could you please let me know whether it would be good to have such an API.

Regards,
_Srikanth_


On Mon, Sep 28, 2015 at 9:44 PM, Stephen Hemminger <
stephen at networkplumber.org> wrote:

> On Mon, 28 Sep 2015 21:12:50 -0700
> Srikanth Akula  wrote:
>
> > Hello,
> >
> > I am trying to write an application based on the DPDK port hotplug
> > feature. My requirement is to get an event when a new PCI device gets
> > added to the system on the fly.
> >
> > Do we have any built-in mechanism in DPDK (UIO/e1000/VFIO drivers) that I
> > can use to get notifications when a new device gets added? I know of the
> > alternatives such as inotify, etc.
> >
> > But I am more interested in getting equivalent support in the DPDK
> > drivers.
> >
> > Please let me know.
> >
> > Srikanth
>
> Implementing hotplug requires integration with the OS more than any
> additional DPDK support. What the Brocade vRouter does is leverage the
> existing Linux udev infrastructure to send a message to the router
> application, which then initializes and sets up the new hardware. Most of
> the DPDK changes are already upstream and involve being able to dynamically
> add ports on the fly.
>
>
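
As a rough illustration of the udev-based flow Stephen describes (not from the
thread, untested): a control thread polls a libudev monitor for PCI "add"
events and hands the new address to DPDK's hotplug attach. Error handling is
omitted, and both the use of rte_eth_dev_attach() and the assumption that the
uevent sysname is the PCI address should be verified for the target setup.

/* Minimal sketch, assuming libudev is available (link with -ludev). */
#include <libudev.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <rte_ethdev.h>

static void
wait_for_pci_hotplug(void)
{
	struct udev *udev = udev_new();
	struct udev_monitor *mon = udev_monitor_new_from_netlink(udev, "udev");
	struct pollfd pfd;

	udev_monitor_filter_add_match_subsystem_devtype(mon, "pci", NULL);
	udev_monitor_enable_receiving(mon);
	pfd.fd = udev_monitor_get_fd(mon);
	pfd.events = POLLIN;

	for (;;) {
		struct udev_device *dev;

		if (poll(&pfd, 1, -1) <= 0)
			continue;

		dev = udev_monitor_receive_device(mon);
		if (dev == NULL)
			continue;

		if (strcmp(udev_device_get_action(dev), "add") == 0) {
			uint8_t port_id;
			/* sysname of a PCI uevent is its address, e.g. "0000:02:00.0" */
			const char *pci_addr = udev_device_get_sysname(dev);

			if (rte_eth_dev_attach(pci_addr, &port_id) == 0)
				printf("attached %s as port %u\n", pci_addr, port_id);
		}
		udev_device_unref(dev);
	}
}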


[dpdk-dev] [PATCH] doc: updated release notes for r2.1

2015-09-28 Thread Srikanth Akula
Hello,

I am trying to write an application based on the DPDK port hotplug feature.
My requirement is to get an event when a new PCI device gets added to the
system on the fly.

Do we have any built-in mechanism in DPDK (UIO/e1000/VFIO drivers) that I
can use to get notifications when a new device gets added? I know of the
alternatives such as inotify, etc.

But I am more interested in getting equivalent support in the DPDK drivers.

Please let me know.

Srikanth



On Thu, Aug 13, 2015 at 6:02 AM, Iremonger, Bernard <
bernard.iremonger at intel.com> wrote:

> Hi John,
>
> 
>
> > +
> > +* **Added additional hotplug support.**
> > +
> > +  Port hotplug support was added to the following PMDs:
> > +
> > +  * e1000/igb.
> > +  * ixgbe.
> > +  * i40e.
> > +  * fm10k.
> > +  * Ring.
> > +  * Bonding.
> > +  * Virtio.
>
> ring, bonding and virtio should probably be all lowercase.
>
> > +
> > +  Port hotplug support was added to BSD.
> > +
> > +
> 
>
> Regards,
>
> Bernard.
>
>


[dpdk-dev] Fwd: OVS with DPDK ..Error packets

2015-07-29 Thread Srikanth Akula
(+DPDK dev team )


Hello,

I am trying to test OVS-DPDK performance and found that a lot of packets are
being treated as error packets.

ovs-vsctl get Interface dpdk0 statistics
{collisions=0, rx_bytes=38915076374, rx_crc_err=0, rx_dropped=0,
rx_errors=3840287219, rx_frame_err=0, rx_over_err=0, rx_packets=292972799,
tx_bytes=38935883904, tx_dropped=0, tx_errors=0, tx_packets=293068162}

I am running a DPDK application inside my VM.

It looks like there is a buffer issue (64-byte packets at 10 Gbps).

Could somebody let me know if I have missed any configuration in DPDK/OVS?

-Srikanth


[dpdk-dev] FIB aware L3 forwarding

2015-05-26 Thread Srikanth Akula
Hi Dev team,

I am interested to know whether DPDK supports multiple FIBs and can forward
L3 packets based on the FIB ID we are interested in.

-Srikanth
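
No reply is archived for this question. As a hedged illustration only, one
common way applications model multiple FIBs on top of DPDK is to keep one LPM
table per FIB/VRF and select it by FIB ID before the lookup; the table count,
sizes, and the uint8_t next-hop type of this DPDK era are assumptions, not a
statement about built-in DPDK support.

/* Sketch only: one rte_lpm table per FIB, selected by a FIB/VRF ID carried
 * with the packet. MAX_FIBS and the rule count are placeholders. */
#include <stdio.h>
#include <stdint.h>
#include <rte_lpm.h>

#define MAX_FIBS 16

static struct rte_lpm *fib_tables[MAX_FIBS];

static int
fib_init(int socket_id)
{
	char name[32];
	unsigned int i;

	for (i = 0; i < MAX_FIBS; i++) {
		snprintf(name, sizeof(name), "fib_%u", i);
		fib_tables[i] = rte_lpm_create(name, socket_id, 1024, 0);
		if (fib_tables[i] == NULL)
			return -1;
	}
	return 0;
}

static int
fib_lookup(uint32_t fib_id, uint32_t dst_ip, uint8_t *next_hop)
{
	if (fib_id >= MAX_FIBS)
		return -1;
	return rte_lpm_lookup(fib_tables[fib_id], dst_ip, next_hop);
}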


[dpdk-dev] Max throughput Using QOS Scheduler

2014-11-06 Thread Srikanth Akula
Hi Cristian,


Thank you very much for your points; they should really help me fix a few
issues we might have.

Thanks again!

Regards,
srikanth


On Thu, Nov 6, 2014 at 12:37 PM, Dumitrescu, Cristian <
cristian.dumitrescu at intel.com> wrote:

>  Hi Srikanth,
>
>
>
> >> Is there any difference in scheduler behavior between the above two
> >> scenarios while enqueueing and dequeueing?
>
> All the pipe queues share the bandwidth allocated to their pipe. The
> distribution of available pipe bandwidth between the pipe queues is
> governed by features like traffic class strict priority, bandwidth sharing
> between pipe traffic classes, weights of the queues within the same traffic
> class, etc. In the case you mention, you are just using one queue for each
> traffic class.
>
>
>
> Let's take an example:
>
> - Configuration: pipe rate = 10 Mbps, pipe traffic class 0..3 rates =
>   [20% of pipe rate = 2 Mbps, 30% of pipe rate = 3 Mbps, 40% of pipe rate =
>   4 Mbps, 100% of pipe rate = 10 Mbps]. The convention is that traffic
>   class 0 is the highest priority.
>
> - Injected traffic per traffic class for this pipe: [3, 0, 0, 0] Mbps =>
>   Output traffic per traffic class for this pipe: [2, 0, 0, 0] Mbps
>
> - Injected traffic per traffic class for this pipe: [0, 0, 0, 15] Mbps =>
>   Output traffic per traffic class for this pipe: [0, 0, 0, 10] Mbps
>
> - Injected traffic per traffic class for this pipe: [1, 10, 2, 15] Mbps =>
>   Output traffic per traffic class for this pipe: [1, 3, 2, 4] Mbps
>
>
> Makes sense?
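
To make the example above concrete, a rough sketch (not from the thread) of
how those pipe rates might be expressed as rte_sched pipe parameters. Rates
are in bytes per second, so 10 Mbps = 1,250,000 B/s; the token bucket size,
tc_period and WRR weights are placeholder values.

/* Illustration only: pipe profile for the 10 Mbps / [20%, 30%, 40%, 100%]
 * example above. Everything except the rates is a placeholder. */
#include <rte_sched.h>

static struct rte_sched_pipe_params pipe_profile = {
	.tb_rate   = 1250000,                            /* 10 Mbps in bytes/sec */
	.tb_size   = 1000000,                            /* placeholder bucket size */
	.tc_rate   = {250000, 375000, 500000, 1250000},  /* 2, 3, 4, 10 Mbps */
	.tc_period = 40,                                 /* placeholder, in ms */
	.wrr_weights = {1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1},
};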
>
>
>
> >> Queue size is 64, and the number of packets enqueued and dequeued is 64
> >> as well.
>
> I strongly recommend never using a dequeue burst size that is equal to the
> enqueue burst size, as performance will be bad.
>
>
>
> In the qos_sched sample app, we use [enqueue burst size, dequeue burst
> size] set to [64, 32]; other reasonable values could be [64, 48], [32, 16],
> etc. An enqueue burst bigger than the dequeue burst will cause the big
> packet reservoir that is the traffic manager/port scheduler to fill up to a
> reasonable level that allows dequeue to function optimally, and then the
> system regulates itself.
>
>
>
> The reason is that since we interleave enqueue and dequeue calls, if on
> every iteration you push e.g. 64 packets in and then look to get 64 packets
> out, you'll only have 64 packets in the queues; you'll then work hard to
> find them, and you will get out exactly those 64 packets that you pushed in.
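
A minimal sketch (not from the thread) of the interleaved enqueue/dequeue loop
with the [64, 32] burst sizes recommended above; app_rx_burst() and
app_tx_burst() are hypothetical placeholders for however the application gets
classified packets in and sends them out.

#include <rte_mbuf.h>
#include <rte_sched.h>

#define ENQ_BURST 64
#define DEQ_BURST 32

/* hypothetical application helpers, assumed to exist elsewhere */
extern int app_rx_burst(struct rte_mbuf **pkts, int n);
extern void app_tx_burst(struct rte_mbuf **pkts, int n);

static void
sched_loop(struct rte_sched_port *port)
{
	struct rte_mbuf *enq_pkts[ENQ_BURST];
	struct rte_mbuf *deq_pkts[DEQ_BURST];

	for (;;) {
		/* push up to 64 packets per iteration... */
		int n_rx = app_rx_burst(enq_pkts, ENQ_BURST);
		if (n_rx > 0)
			rte_sched_port_enqueue(port, enq_pkts, n_rx);

		/* ...but drain a smaller burst, so the scheduler stays filled */
		int n_tx = rte_sched_port_dequeue(port, deq_pkts, DEQ_BURST);
		if (n_tx > 0)
			app_tx_burst(deq_pkts, n_tx);
	}
}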
>
>
>
> >> And what improvements would I gain if I move to DPDK 1.7 w.r.t. QoS?
>
> The QoS code has been pretty stable since release 1.4; not many
> improvements have been added (maybe it's the right time to revisit this
> feature and push it to the next level?), but there are improvements in
> other DPDK libraries that are dependencies for QoS (e.g. packet Rx/Tx).
>
>
>
> Hope this helps.
>
>
>
> Regards,
>
> Cristian
>
>
>
>
>
>
>
> *From:* Srikanth Akula [mailto:srikanth044 at gmail.com]
> *Sent:* Thursday, October 30, 2014 4:10 PM
> *To:* dev at dpdk.org; Dumitrescu, Cristian
> *Subject:* Max throughput Using QOS Scheduler
>
>
>
> Hello All,
>
> I am currently trying to implement the QoS scheduler using DPDK 1.6. I have
> configured 1 subport, 4096 pipes for the subport, and 4 TCs and 4 queues.
>
>
>
> Currently I am trying to send packets destined to a single queue of the
> available 16 queues of one of the pipes.
>
>
>
> Could somebody explain what throughput we can achieve using this scheme?
> The reason for asking is that I observe different behavior each time I send
> traffic destined to different destination queues.
>
> For example:
>
>
>
> 1. <<Only one stream>> Stream destined to Q0 of TC0.
>
>
>
>
>
> 2. <<4 streams>> 1st stream destined to Q3 of TC3,
>    2nd stream destined to Q2 of TC2,
>    3rd stream destined to Q1 of TC1,
>    4th stream destined to Q0 of TC0.
>
>
>
> Is there any difference in scheduler behavior between the above two
> scenarios while enqueueing and dequeueing?
>
>
>
> Queue size is 64, and the number of packets enqueued and dequeued is 64 as
> well.
>
> And what improvements would I gain if I move to DPDK 1.7 w.r.t. QoS?
>
>
>
>
>
> Could you please clarify my queries?
>
>
>
>
>
> Thanks & Regards,
> Srikanth
>
>
>
>
>
>


[dpdk-dev] Max throughput Using QOS Scheduler

2014-11-04 Thread Srikanth Akula
Hi all,

Can anybody answer my queries?

Thanks & Regards,
Srikanth

On Thu, Oct 30, 2014 at 9:09 AM, Srikanth Akula 
wrote:

> Hello All,
>
> I am currently trying to implement the QoS scheduler using DPDK 1.6. I have
> configured 1 subport, 4096 pipes for the subport, and 4 TCs and 4 queues.
>
> Currently I am trying to send packets destined to a single queue of the
> available 16 queues of one of the pipes.
>
> Could somebody explain what throughput we can achieve using this scheme?
> The reason for asking is that I observe different behavior each time I send
> traffic destined to different destination queues.
>
> For example:
>
> 1. <<Only one stream>> Stream destined to Q0 of TC0.
>
> 2. <<4 streams>> 1st stream destined to Q3 of TC3,
>    2nd stream destined to Q2 of TC2,
>    3rd stream destined to Q1 of TC1,
>    4th stream destined to Q0 of TC0.
>
> Is there any difference in scheduler behavior between the above two
> scenarios while enqueueing and dequeueing?
>
> Queue size is 64, and the number of packets enqueued and dequeued is 64 as
> well.
> And what improvements would I gain if I move to DPDK 1.7 w.r.t. QoS?
>
>
> Could you please clarify my queries?
>
>
> Thanks & Regards,
> Srikanth
>
>
>


[dpdk-dev] Max throughput Using QOS Scheduler

2014-10-30 Thread Srikanth Akula
Hello All,

I am currently trying to implement the QoS scheduler using DPDK 1.6. I have
configured 1 subport, 4096 pipes for the subport, and 4 TCs and 4 queues.

Currently I am trying to send packets destined to a single queue of the
available 16 queues of one of the pipes.

Could somebody explain what throughput we can achieve using this scheme?
The reason for asking is that I observe different behavior each time I send
traffic destined to different destination queues.

For example:

1. <<Only one stream>> Stream destined to Q0 of TC0.

2. <<4 streams>> 1st stream destined to Q3 of TC3,
   2nd stream destined to Q2 of TC2,
   3rd stream destined to Q1 of TC1,
   4th stream destined to Q0 of TC0.

Is there any difference in scheduler behavior between the above two scenarios
while enqueueing and dequeueing?

Queue size is 64, and the number of packets enqueued and dequeued is 64 as
well.
And what improvements would I gain if I move to DPDK 1.7 w.r.t. QoS?


Could you please clarify my queries?


Thanks & Regards,
Srikanth
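
For reference, a rough sketch (not from the thread) of what the port-level
layout described above (1 subport, 4096 pipes per subport, 16 queues of size
64) might look like with the rte_sched API; the rate, MTU and other values
are placeholders rather than anything confirmed here.

/* Illustration only: rte_sched port parameters for the described layout.
 * pipe_profiles would be filled in with real profiles before use. */
#include <rte_sched.h>

static struct rte_sched_port_params port_params = {
	.name = "port_sched_0",
	.socket = 0,
	.rate = 1250000000,           /* placeholder: 10 Gbps in bytes/sec */
	.mtu = 1522,
	.frame_overhead = RTE_SCHED_FRAME_OVERHEAD_DEFAULT,
	.n_subports_per_port = 1,
	.n_pipes_per_subport = 4096,
	.qsize = {64, 64, 64, 64},    /* one queue size per traffic class */
	.pipe_profiles = NULL,
	.n_pipe_profiles = 0,
};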


[dpdk-dev] Running Qemu manually for vswitch

2014-06-04 Thread Srikanth Akula
Hi,

I am currently working on installing and running the DPDK vSwitch for our
custom guest operating system.

I have followed the steps mentioned in the docs section of the dpdk-ovs git
page.

1. I am trying to run QEMU by building an XML file and creating a domain
using virsh, but it fails with the following error:

error : virCommandWait:2188 : internal error Child process (LC_ALL=C
PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin
/usr/local/bin/qemu-system-x86_64 -help) status unexpected: exit status 1

2. The QEMU that is customized for DPDK overrides the existing help strings,
which I feel is causing the problems when I use virsh to build/run the guest
OS.

Can anybody suggest the configuration/steps I should use for the following
topology?

VM1 <---> VM2 <---> VM3
          |
--------------------------
       Host machine
--------------------------
     |             |
   ixia          ixia


Thanks & Regards,
Srikanth