Hi Connor,
On 12/20/21, 9:03 PM, "Min Hu (Connor)" wrote:
> Hi, Sanford,
> > There is *NO* benefit for the consumer thread (interrupt thread
> > executing tx_machine()) to have caches on per-slave LACPDU pools.
> > The interrupt thread is a control thread, i.e., a non-EAL thread.
> > Its lcore
Hello Connor,
Please see responses inline.
On 12/17/21, 10:44 PM, "Min Hu (Connor)" wrote:
> > When the number of used tx-descs (0..255) + the number of mbufs in the
> > cache (0..47) reaches 257, allocation fails.
> >
> > If I understand the LACP tx-burst code correctly, it would be
> > wor
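To illustrate the failure mode discussed in this thread: up to 256 mbufs can be
pinned by in-flight tx descriptors while up to 47 more sit in a per-lcore cache,
which together exceed a 256-mbuf pool. A minimal sketch of creating a per-slave
LACPDU pool with caching disabled (pool size, name, and socket id are
illustrative, taken from the numbers quoted above):

#include <stdio.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/*
 * Worst case from the thread: 256 mbufs held by used tx descriptors plus
 * up to 47 mbufs parked in a per-lcore cache exceeds a 256-mbuf pool, so
 * rte_pktmbuf_alloc() can fail.  cache_size = 0 removes the cached mbufs
 * from the equation.
 */
static struct rte_mempool *
create_lacpdu_pool(uint16_t slave_port_id, int socket_id)
{
        char name[RTE_MEMPOOL_NAMESIZE];

        snprintf(name, sizeof(name), "LACPDU_p%u", slave_port_id);
        return rte_pktmbuf_pool_create(name,
                        256,    /* LACPDU mbufs */
                        0,      /* no per-lcore cache */
                        0,      /* no private area */
                        RTE_MBUF_DEFAULT_BUF_SIZE,
                        socket_id);
}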
Hello Connor,
Thank you for the questions and comments. I will repeat the questions, followed
by my answers.
Q: Could you be more detailed, why is mbuf pool caching not needed?
A: The short answer: under certain conditions, we can run out of
buffers from that small LACPDU mempool. We actually
You are correct. I just looked at it in Patchwork. Sorry about that. (I should
learn never to trust Outlook for Mac.)
On 6/11/19, 10:27 AM, "Stephen Hemminger" wrote:
On Tue, 11 Jun 2019 13:37:33 +0000
"Sanford, Robert" wrote:
> Hi Stephen,
>
> The code seems
Hi Stephen,
The code seems fine. My only comment is that there is no blank line before
the new code, in both the .c and .h files.
--
Regards,
Robert
On 6/10/19, 6:45 PM, "Stephen Hemminger" wrote:
It is useful to know when the next timer will expire when
using rte_epoll_wait (or sleep when idle
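A sketch of how an application might use that information to sleep when idle.
It assumes a helper of the form int64_t rte_timer_next_ticks(void) that returns
-1 when no timer is pending, which is what this patch appears to add; the epoll
header and exact names vary by DPDK release, so treat this as illustrative only.

#include <rte_cycles.h>
#include <rte_epoll.h>
#include <rte_timer.h>

/* Convert "ticks until the next timer expires" into an epoll timeout so
 * an idle lcore can sleep instead of spinning on rte_timer_manage(). */
static void
wait_for_work(int epfd, struct rte_epoll_event *ev)
{
        int timeout_ms = -1;                    /* block until an event */
        int64_t ticks = rte_timer_next_ticks(); /* assumed helper */

        if (ticks >= 0)
                timeout_ms = (int)(ticks * 1000 / rte_get_timer_hz());

        rte_epoll_wait(epfd, ev, 1, timeout_ms);
        rte_timer_manage();                     /* run whatever is now due */
}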
Hi Erik,
I have a few questions and comments on this patch series.
1. Don't you think we need new tests (in test/test/) to verify the
secondary-process APIs?
2. I suggest we define default_data_id as const, and explicitly set it to 0.
3. The outer for-loop in rte_timer_alt_manage() touches beyon
Yes, this change makes sense. I ran timer tests and they passed.
Acked-by: Robert Sanford
Thanks,
Robert
On 9/29/16, 10:27 AM, "Karmarkar Suyash" wrote:
Hello,
Can you please review the changes and suggest next steps? Thanks
Regards
Suyash Karmarkar
-----Original Message-----
From: Karma
Sorry, just saw this. I will take a look and get back shortly.
--
Regards,
Robert
On 10/4/16, 3:31 PM, "Karmarkar Suyash" wrote:
Hello Robert/Thomas,
Can you please review the changes in V2 of the Patch and suggest next steps?
Thanks
Regards
Suyash Karmarkar
-----Original Message-----
Fro
Hi Olivier,
On 8/23/16, 11:09 AM, "Olivier MATZ" wrote:
Hi Robert,
On 08/01/2016 10:42 PM, Robert Sanford wrote:
> The following log message may appear after a slave is idle (or nearly
> idle) for a few minutes: "PMD: Failed to allocate LACP packet from
> pool".
>
>
Hi Olivier,
On 8/23/16, 11:09 AM, "Olivier MATZ" wrote:
Hi Robert,
On 08/01/2016 10:42 PM, Robert Sanford wrote:
> Rename macros that calculate a mempool cache flush threshold, and
> move them from rte_mempool.c to rte_mempool.h, so that the bonding
> driver can accurately c
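For context, a sketch of the flush-threshold arithmetic being exposed (the 1.5
multiplier and macro shape follow the rte_mempool convention discussed here;
the pool-sizing helper is purely illustrative): a mempool cache may grow to
cache_size * 1.5 before it is flushed, so a pool must be sized for that worst
case.

/* Flush-threshold macros of the shape this patch wants in rte_mempool.h. */
#define CACHE_FLUSHTHRESH_MULTIPLIER 1.5
#define CALC_CACHE_FLUSHTHRESH(c) \
        ((typeof(c))((c) * CACHE_FLUSHTHRESH_MULTIPLIER))

/* Illustrative use: size a per-slave LACPDU pool so that tx descriptors
 * in flight plus worst-case cached mbufs cannot exhaust it. */
static inline unsigned int
lacpdu_pool_size(unsigned int nb_tx_desc, unsigned int cache_size)
{
        return nb_tx_desc + CALC_CACHE_FLUSHTHRESH(cache_size);
}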
On 7/23/16 4:49 AM, "Thomas Monjalon" wrote:
>2016-07-23 0:14 GMT+02:00 Sanford, Robert :
>> Acked-by: Robert Sanford
>>
>> I tested the three timer patches with app/test timer_autotest and
>> timer_racecond_autotest, and additional private tests.
>
>
On 7/17/16 2:08 PM, "Hiroyuki Mikita" wrote:
>When timer_cb resets another running timer on the same lcore,
>the list of expired timers is chained to the pending-list.
>This commit prevents a running timer from being reset
>by a timer_cb other than its own.
>
>Signed-off-by: Hiroyuki Mikita
>---
> lib/
On 7/17/16 1:35 PM, "Hiroyuki Mikita" wrote:
>When timer_set_running_state() fails in rte_timer_manage(),
>the failed timer is put back on the pending list.
>In this case, another core is trying to reset or stop the timer,
>so it does not need to be on the pending list.
>
>Signed-off-by: Hiroyuki Mikita
>---
On 7/17/16 10:35 AM, "Hiroyuki Mikita" wrote:
>This commit fixes incorrect pending-list manipulation
>when getting the list of expired timers in rte_timer_manage().
>
>When timer_get_prev_entries() sets pending_head on prev,
>the pending-list is broken.
>The next pointer of pending_head always becomes NULL
Hi Hiroyuki,
I am reviewing your 3 timer patches.
Can you please explain in more detail your use-case that results in a
problem?
For example, is it when timer A's callback tries to reset
(rte_timer_reset) timer B?
If yes, is timer B in PENDING state and likely to expire soon?
--
Thanks,
Robert
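To make the question concrete, a minimal sketch of the scenario being asked
about (names are illustrative; both timers are assumed to have been set up with
rte_timer_subsystem_init() and rte_timer_init()):

#include <rte_cycles.h>
#include <rte_lcore.h>
#include <rte_timer.h>

static struct rte_timer timer_a, timer_b;

static void
timer_b_cb(struct rte_timer *tim, void *arg)
{
        /* work for timer B */
}

/* Timer A's callback resets timer B, which may already be PENDING and
 * about to expire on the same lcore -- the case these patches touch. */
static void
timer_a_cb(struct rte_timer *tim, void *arg)
{
        rte_timer_reset(&timer_b, rte_get_timer_hz() / 10, SINGLE,
                        rte_lcore_id(), timer_b_cb, NULL);
}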
Hi Cristian,
Yes, I mostly agree with your suggestions:
1. We should fix the two obvious bugs (1a and 1b) right away. Jasvinder's
patches look fine.
2. We should take no immediate action on the issue I raised about PMDs
(vector IXGBE) not enqueuing more than 32 packets. We can discuss and
debate;
Hi Jay,
I won't try to interpret your kernel stack trace. But, I'll tell you about
a KNI-related problem that we once experienced, and the symptom was a
kernel hang.
The problem was that we were passing mbufs allocated out of one mempool,
to a KNI context that we had set up with a different mempo
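A sketch of the lesson from that incident (names are illustrative): the mbufs
handed to KNI must come from the same mempool that was passed to
rte_kni_alloc() when the context was created.

#include <rte_kni.h>
#include <rte_mbuf.h>

/* Allocate from, and transmit to KNI with, ONE mempool.  Mixing pools
 * (alloc from pool A, KNI created with pool B) was the root cause of
 * the kernel hang described above. */
static int
kni_send_one(struct rte_kni *kni, struct rte_mempool *kni_pool)
{
        struct rte_mbuf *m = rte_pktmbuf_alloc(kni_pool);

        if (m == NULL)
                return -1;
        /* ... fill in packet data ... */
        if (rte_kni_tx_burst(kni, &m, 1) == 0) {
                rte_pktmbuf_free(m);    /* not accepted; still ours */
                return -1;
        }
        return 0;
}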
We don't need to change this line, because we never access more than
RTE_PORT_IN_BURST_SIZE_MAX (64) elements in this array:
- struct rte_mbuf *mbuf[RTE_PORT_IN_BURST_SIZE_MAX];
+ struct rte_mbuf *mbuf[2 * RTE_PORT_IN_BURST_SIZE_MAX];
--
Robert
>Add code to send two 60-packet bursts
Hi Cristian,
In hindsight, I was overly aggressive in proposing the same change
(approach #2, as you call it below) for rte_port_ring and rte_port_sched.
Changing local variable bsz_mask to uint64_t should be sufficient.
Please see additional comments inline below.
On 3/31/16 11:41 AM, "Dumitre
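For reference, a sketch of the overflow behind that one-line fix (the
surrounding rte_port code is elided; only the shift matters): with a 32-bit
bsz_mask, the shift is undefined once tx_burst_sz exceeds 32, so the local
must be 64-bit.

#include <stdint.h>

/* Before (32-bit): 1 << (tx_burst_sz - 1) is undefined for sizes > 32. */
/* After (64-bit): valid for burst sizes up to 64.                      */
static inline uint64_t
burst_size_mask(uint32_t tx_burst_sz)
{
        uint64_t bsz_mask = 1LLU << (tx_burst_sz - 1);

        return bsz_mask;
}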
Hi Cristian,
Please see my comments inline.
>
>
>> -----Original Message-----
>> From: Robert Sanford [mailto:rsanford2 at gmail.com]
>> Sent: Monday, March 28, 2016 9:52 PM
>> To: dev at dpdk.org; Dumitrescu, Cristian
>> Subject: [PATCH 4/4] port: fix ethdev writer burst too big
>>
>> For f_tx
>
>
>> -----Original Message-----
>> From: Robert Sanford [mailto:rsanford2 at gmail.com]
>> Sent: Monday, March 28, 2016 9:52 PM
>> To: dev at dpdk.org; Dumitrescu, Cristian
>> Subject: [PATCH 2/4] port: fix ring writer buffer overflow
>>
>> Ring writer tx_bulk functions may write past the end
>>> [Robert:]
>>> 1. The 82599 device supports up to 128 queues. Why do we see trouble
>>> with as few as 5 queues? What could limit the system (and one port
>>> controlled by 5+ cores) from receiving at line-rate without loss?
>>>
>>> 2. As far as we can tell, the RX path only touches the device
I'm hoping that someone (perhaps at Intel) can help us understand
an IXGBE RX packet loss issue we're able to reproduce with testpmd.
We run testpmd with various numbers of cores. We offer line-rate
traffic (~14.88 Mpps) to one ethernet port, and forward all received
packets via the second port.
Hi Thomas,
> Please, could you re-send this serie after having added the description
>of
> each patch in the commit messages?
Yes, I will move the paragraphs that begin with "Patch n" from patch 0 to
their respective patches.
> It seems you fix 2 bugs in the first patch. It may be clearer to spl
I just noticed a few minor typos in comments:
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 917dd59..6352c32 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -680,14 +680,14 @@ extern "C" {
/**
* Check if the (outer) L3 header is IPv4. To
Hi Stephen,
Any thoughts/plans about updating rte_eth_dev_info members rx_offload_capa and
tx_offload_capa in vmxnet3_dev_info_get()?
The reason I ask: We would like to use TX/RX burst callout hooks, but only for
eth-devs that don't support desired features (e.g., VLAN insert/strip, TCP
checks
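A sketch of the intended usage (flag and API names per the DPDK release in use;
the callback body is illustrative): query the device's capabilities and install
a software fallback callback only when the offload is missing.

#include <rte_ethdev.h>

static uint16_t
soft_vlan_insert_cb(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[],
                    uint16_t nb_pkts, void *user_param)
{
        /* ... insert VLAN tags in software ... */
        return nb_pkts;
}

static void
maybe_add_tx_callback(uint16_t port_id, uint16_t queue)
{
        struct rte_eth_dev_info info;

        rte_eth_dev_info_get(port_id, &info);
        if ((info.tx_offload_capa & DEV_TX_OFFLOAD_VLAN_INSERT) == 0)
                rte_eth_add_tx_callback(port_id, queue,
                                        soft_vlan_insert_cb, NULL);
}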
Silly questions: Why use rte_pktmbuf_clone()? Assuming that one is not
going to modify the mbuf at all, why not just increment the reference
count with rte_mbuf_refcnt_update()?
--
Thanks,
Robert
>Keith speaks truth. If I were going to do what you're describing, I would
>do the following:
>
>1.
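A sketch of the alternative suggested above, assuming (as stated) that the mbuf
data is never modified:

#include <rte_mbuf.h>

/* Bump the reference count so two consumers share one mbuf; each
 * rte_pktmbuf_free() then just decrements the count until it hits 0. */
static void
share_mbuf(struct rte_mbuf *m)
{
        rte_mbuf_refcnt_update(m, 1);   /* now owned by two consumers */

        /* consumer 1 ... */
        rte_pktmbuf_free(m);            /* refcnt 2 -> 1, mbuf survives */

        /* consumer 2 ... */
        rte_pktmbuf_free(m);            /* refcnt 1 -> 0, really freed */
}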
Hi Zhigang,
Can you add this one-line X86 change (below)? When included by C++ code,
'key' needs an explicit type cast.
You may want to put the 'const' keyword in your non-X86 change, too.
--
Thanks,
Robert
diff --git a/lib/librte_hash/rte_jhash.h b/lib/librte_hash/rte_jhash.h
index e230449.
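The requested one-line change amounts to an explicit cast. A self-contained
sketch of why (the function below is illustrative, not the rte_jhash
internals): C accepts the implicit void*-to-uint32_t* conversion, C++ does not.

#include <stdint.h>

static inline uint32_t
first_word(const void *key)
{
        const uint32_t *k = (const uint32_t *)key; /* cast required by C++ */

        return k[0];
}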
When one adds multiple post-RX-burst callbacks to a queue, their execution
order is the opposite of the order in which they are added. For example, we add
callback A( ), and then we add callback B( ). When we call rte_eth_rx_burst,
after invoking the device's rx_pkt_burst function, it will invok
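A sketch of the observation (signatures per later DPDK releases; older
releases use narrower port-id types):

#include <rte_ethdev.h>

static uint16_t
cb_a(uint16_t port, uint16_t queue, struct rte_mbuf *pkts[],
     uint16_t nb_pkts, uint16_t max_pkts, void *arg)
{
        /* runs second, per the ordering described above */
        return nb_pkts;
}

static uint16_t
cb_b(uint16_t port, uint16_t queue, struct rte_mbuf *pkts[],
     uint16_t nb_pkts, uint16_t max_pkts, void *arg)
{
        /* runs first, even though it was added last */
        return nb_pkts;
}

static void
install_callbacks(uint16_t port, uint16_t queue)
{
        rte_eth_add_rx_callback(port, queue, cb_a, NULL);  /* added first */
        rte_eth_add_rx_callback(port, queue, cb_b, NULL);  /* added last */
}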
Hello Dirk,
We recently resolved a similar problem with rte_cpuflags.h.
I'm attaching a git-diff of how we worked around it.
We may submit a patch for this, eventually.
--
Regards,
Robert
>Hello,
>g++ complains about the following problems when rte_.h header files
>are included:
> x86_
Hi Venkat,
Perhaps your DPDK application needs to slow-poll KNI devices via
rte_kni_handle_request( ).
http://dpdk.org/doc/api/rte__kni_8h.html
--
Regards,
Robert
>Hi,
>
>I am testing DPDK 2.0 release.
>I am not able to bring the KNI interface up.
>It always gives the following error.
>
>SIOCS
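A sketch of the suggestion (illustrative): kernel-side control requests, such
as the interface-up ioctl behind the quoted SIOCS error, are only serviced when
the application polls for them, so the main loop should call
rte_kni_handle_request() periodically.

#include <rte_kni.h>

static void
kni_housekeeping(struct rte_kni *kni)
{
        rte_kni_handle_request(kni);    /* cheap when nothing is pending */
}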
Never mind ... I retract the previous suggestion. After further reading on
RNGs, I believe we should abandon the use of lrand48() and implement our
own RNG based on the so-called KISS family of RNGs originally proposed by
the late George Marsaglia. In his excellent paper, "Good Practice in
(Pseudo)
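For reference, one common formulation of Marsaglia's 32-bit KISS generator (a
sketch, not a drop-in patch; the seed constants are the example values from
Marsaglia's posting, and per-lcore state would be needed in practice):

#include <stdint.h>

/* KISS: an LCG, a 3-shift xorshift, and two 16-bit multiply-with-carry
 * generators, combined.  Claimed period is about 2^123. */
struct kiss_state {
        uint32_t z, w, jsr, jcong;
};

static inline uint32_t
kiss_next(struct kiss_state *s)
{
        s->z = 36969 * (s->z & 65535) + (s->z >> 16);   /* MWC, high half */
        s->w = 18000 * (s->w & 65535) + (s->w >> 16);   /* MWC, low half */
        s->jcong = 69069 * s->jcong + 1234567;          /* LCG */
        s->jsr ^= s->jsr << 17;                         /* xorshift */
        s->jsr ^= s->jsr >> 13;
        s->jsr ^= s->jsr << 5;

        return (((s->z << 16) + s->w) ^ s->jcong) + s->jsr;
}

/* Example seeds from Marsaglia's posting. */
static const struct kiss_state kiss_default = {
        .z = 362436069, .w = 521288629, .jsr = 123456789, .jcong = 380116160
};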
Hi Stephen,
Have you considered supporting LSC interrupts in vmxnet3?
--
Thanks,
Robert
>From: Stephen Hemminger
>
>The Intel version of VMXNET3 driver does not handle link state properly.
>The VMXNET3 API returns 1 if connected and 0 if disconnected.
>Also need to return correct value to indi
Signed-off-by: Robert Sanford
---
MAINTAINERS |1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/MAINTAINERS b/MAINTAINERS
index 2eb7761..a2b53b3 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -371,6 +371,7 @@ F: examples/vm_power_manager/
F: doc/guides/sample_app_ug/vm_power_
Hello Thomas,
I want to discuss a small enhancement to KNI that we developed. We wanted
to send/receive mbufs between one KNI device and multiple cores, but we
wanted to keep the changes simple. So, here were our requirements:
1. Don't use heavy synchronization when reading/writing the FIFOs in
s
We ran into a similar problem when migrating to 1.7.
Here are the subtle flags, in dpdk/mk/rte.app.mk, that we needed:
LDLIBS += --whole-archive
...
LDLIBS += --no-whole-archive
This apparently tells the linker to pull in whole archive(s), even if it
thinks that we don't n
>
>> This is what we came up with. It works for us. In our kernel headers'
>> linux/pci.h, pci_num_vf is enclosed within "#ifdef
>>CONFIG_PCI_IOV/#endif";
>> pci_intx_mask_supported and pci_check_and_mask_intx are enclosed within
>> "#ifdef HAVE_PCI_SET_MWI/#endif".
>>
>> What do you think?
>
>Ma
Hi Thomas,
Not that I would *like* to fix this, but I *need* to fix it. We are using
CentOS 6.5, which I believe is based on RHEL. We have kernel
2.6.32-431.3.1.el6.x86_64.
I realize that we need to add/change ifdefs around pci_num_vf,
pci_intx_mask_supported, and pci_check_and_mask_intx in igb_u
Hi Declan,
I have a problem with the TX-burst logic of this patch. I believe that for
packets that we *don't* enqueue to the device, we should *NOT* free them.
The API expects that the caller will free them or try again to send them.
Here is one way to accomplish selective freeing: Move mbuf poin
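For reference, a sketch of the caller-side contract referenced above (names are
illustrative): whatever rte_eth_tx_burst() does not accept remains the caller's
to retry or free, so the PMD must not free it.

#include <rte_ethdev.h>
#include <rte_mbuf.h>

static void
send_or_drop(uint16_t port, uint16_t queue,
             struct rte_mbuf **pkts, uint16_t nb_pkts)
{
        uint16_t sent = rte_eth_tx_burst(port, queue, pkts, nb_pkts);

        while (sent < nb_pkts)          /* unsent tail still belongs to us */
                rte_pktmbuf_free(pkts[sent++]);
}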
Reviewed-by: Robert Sanford
>Renaming struct bond_dev_private *dev_private to internals to match
>convention used in other pmds
>
>Signed-off-by: Declan Doherty
>---
> lib/librte_pmd_bond/rte_eth_bond_pmd.c | 10 +-
> 1 files changed, 5 insertions(+), 5 deletions(-)
>
>diff --git a/li
Reviewed-by: Robert Sanford
>Splitting rx burst function into separate functions to avoid the need for
>a switch statement and also to match the structure of the tx burst
>functions.
>
>Signed-off-by: Declan Doherty
>---
> lib/librte_pmd_bond/rte_eth_bond_pmd.c | 62
>++---
Reviewed-by: Robert Sanford
>Adding support for lsc interrupt from bonded device to link
>bonding library with supporting unit tests in the test application.
>
>Signed-off-by: Declan Doherty
>---
> app/test/test_link_bonding.c | 216
>+++-
> lib/librte_pmd_
Hi Thomas,
>Some patches like this one are not yet reviewed because efforts were
>focused
>on release 1.6.0r2. This enhancement must be integrated in 1.7.0.
>I know that patchwork service is desired and I hope it will be available
>soon.
I realized that you guys had been very busy with 1.6.0r2.
I haven't heard anything regarding this patch.
In general, how does your team keep track of outstanding patch requests or
RFCs?
Thanks,
Robert
>Here is our proposed patch:
>
>Lib rte_malloc stores free blocks in singly-linked lists.
>This results in O(n), i.e., poor performance when freeing memo
Here is our proposed patch:
Lib rte_malloc stores free blocks in singly-linked lists.
This results in O(n), i.e., poor performance when freeing memory
to a heap that contains thousands of free blocks.
A simple solution is to store free blocks on doubly-linked lists.
- Space-wise, we add one new p
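A generic sketch of the O(1) removal that doubly-linked free lists enable
(names are illustrative, not the actual rte_malloc element layout): with a prev
pointer in each free block, unlinking an arbitrary block no longer requires
walking the list.

#include <stddef.h>

struct free_block {
        struct free_block *prev;
        struct free_block *next;
        size_t size;
};

/* O(1): unlink a block given only its own pointer -- no list walk. */
static void
free_list_remove(struct free_block **head, struct free_block *blk)
{
        if (blk->prev != NULL)
                blk->prev->next = blk->next;
        else
                *head = blk->next;
        if (blk->next != NULL)
                blk->next->prev = blk->prev;
        blk->prev = blk->next = NULL;
}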