Hi Patrick,
Thanks a lot for the quick response.
Thank you for adding me to the discussion meetings.


Thank you,
Bharati.
________________________________
From: Patrick Robb <pr...@iol.unh.edu>
Sent: Friday, November 22, 2024 10:29:18 PM
To: Bharati Bhole - Geminus <c_bhara...@xsightlabs.com>
Cc: d...@dpdk.org <d...@dpdk.org>; Nicholas Pratte <npra...@iol.unh.edu>; Dean 
Marx <dm...@iol.unh.edu>; Paul Szczepanek <paul.szczepa...@arm.com>; Luca 
Vizzarro <luca.vizza...@arm.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) 
<tho...@monjalon.net>; dev <dev@dpdk.org>
Subject: Re: Doubts in JumboFrames and stats_checks tests in DTS.

Hi Bharati,

Welcome to the DTS mailing list. I will try to provide some answers based on my 
experience running DTS at the DPDK Community Lab at UNH. I will also flag that 
this "legacy" version of DTS is deprecated and getting minimal maintenance. The 
majority of the current efforts for DTS are directed towards the rewrite which 
exists within the /dts dir of the DPDK repo: https://git.dpdk.org/dpdk/tree/dts

With that being said, of course the legacy repo is still useful and I encourage 
you to use it, so I will provide some comments inline below:

On Fri, Nov 22, 2024 at 9:43 AM Bharati Bhole - Geminus <c_bhara...@xsightlabs.com> wrote:
Hi,

I am Bharati Bhole. I am a new member of the DTS mailing list.
I have recently started working on DTS for my company and I am facing some issues/failures while running it.
Please help me understand the test cases and their expected behaviours.

I am trying to understand the DTS behaviour for the following TCs:

1. JumboFrames:

  1. When the test sets max_pkt_len for testpmd and calculates the expected acceptable packet size, does it consider NICs supporting 2 VLANs? (In the MTU update test I have seen that NICs with 2 VLANs are taken into account when calculating the acceptable packet size, but in JumboFrames I don't see it.)

No, the 2-VLAN case is not properly accounted for in the JumboFrames testsuite. This is actually highly topical, as it is an ongoing point of discussion in rewriting jumboframes and mtu_update for the new DTS framework (the two testcases are being combined into one testsuite). I will paste the function from mtu_update in legacy DTS which you may be referring to:

------------------------------

    def send_packet_of_size_to_port(self, port_id: int, pktsize: int):

        # The packet total size include ethernet header, ip header, and payload.
        # ethernet header length is 18 bytes, ip standard header length is 20 bytes.
        # pktlen = pktsize - ETHER_HEADER_LEN
        if self.kdriver in ["igb", "igc", "ixgbe"]:
            max_pktlen = pktsize + ETHER_HEADER_LEN + VLAN
            padding = max_pktlen - IP_HEADER_LEN - ETHER_HEADER_LEN - VLAN
        else:
            max_pktlen = pktsize + ETHER_HEADER_LEN + VLAN * 2
            padding = max_pktlen - IP_HEADER_LEN - ETHER_HEADER_LEN
        out = self.send_scapy_packet(
            port_id,
            f'Ether(dst=dutmac, src="52:00:00:00:00:00")/IP()/Raw(load="\x50"*{padding})',

------------------------------
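
To make the arithmetic concrete, here is roughly what those two branches compute with the legacy constant values plugged in (ETHER_HEADER_LEN = 18 per the comment in the excerpt, IP_HEADER_LEN = 20, and VLAN assumed to be the usual 4-byte tag). This is just an illustrative sketch, not testsuite code:

------------------------------

# Illustrative arithmetic only: mirrors the two branches of
# send_packet_of_size_to_port() above, with the legacy constants filled in.
ETHER_HEADER_LEN = 18  # 14-byte Ethernet header + 4-byte CRC
IP_HEADER_LEN = 20
VLAN = 4               # assuming one 802.1Q tag is 4 bytes

def legacy_padding(pktsize, single_vlan_driver):
    if single_vlan_driver:  # igb / igc / ixgbe branch
        max_pktlen = pktsize + ETHER_HEADER_LEN + VLAN
        padding = max_pktlen - IP_HEADER_LEN - ETHER_HEADER_LEN - VLAN
    else:                   # every other driver
        max_pktlen = pktsize + ETHER_HEADER_LEN + VLAN * 2
        padding = max_pktlen - IP_HEADER_LEN - ETHER_HEADER_LEN
    return max_pktlen, padding

print(legacy_padding(1500, True))   # (1522, 1480) -> one VLAN tag budgeted
print(legacy_padding(1500, False))  # (1526, 1488) -> two VLAN tags budgeted

------------------------------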

One difference between legacy DTS and the "new" DTS is that legacy DTS maintained a master list of devices/drivers, and there were endless conditions like this one where a device list would be checked and some behavior modified based on that list. Because this strategy leads to bugs, is unresponsive to changes in driver code, is hard to maintain, and for other reasons, we no longer follow it in new DTS. Now, if we want to toggle different behavior (like determining max_pkt_len for a given MTU on a given device), that has to be accomplished by querying testpmd for device info (there are various testpmd runtime commands for this). And in situations where testpmd doesn't expose the information we need to check device behavior in a particular testsuite, testpmd needs to be updated to allow for it.
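
As a rough illustration of what I mean, the idea is to read capabilities from the running testpmd (for example by parsing the output of "show port info <port_id>") instead of consulting a hardcoded driver list. The helper name below is purely hypothetical, not the actual new-DTS API:

------------------------------

import re
from typing import Optional, Tuple

# Hypothetical sketch: send_testpmd_command stands in for whatever the framework
# uses to drive the interactive testpmd shell; it is not a real new-DTS function.
def get_port_driver_and_mtu(send_testpmd_command, port_id: int) -> Tuple[str, Optional[int]]:
    output = send_testpmd_command(f"show port info {port_id}")
    driver = re.search(r"Driver name:\s*(\S+)", output)
    mtu = re.search(r"MTU:\s*(\d+)", output)
    return (
        driver.group(1) if driver else "unknown",
        int(mtu.group(1)) if mtu else None,
    )

# A testsuite can then branch on what the device itself reports, rather than
# on a maintained list like ["igb", "igc", "ixgbe"].

------------------------------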

I am CC'ing Nick who is the person writing the new jumboframes + MTU testsuite, 
which (work in progress) is on patchwork here: 
https://patchwork.dpdk.org/project/dpdk/patch/20240726141307.14410-3-npra...@iol.unh.edu/

Nick, maybe you can include the mailing list threads Thomas linked you, and explain your current understanding of how to handle this issue? This won't really help Bharati in the short term, but at least it will clarify how the issue will be handled in the new DTS framework, which presumably Bharati will upgrade to at some point.


  2. In function jumboframes_send_packet():
--<snip>--
        if received:
            if self.nic.startswith("fastlinq"):
                self.verify(
                    self.pmdout.check_tx_bytes(tx_pkts, rx_pkts)
                    and (self.pmdout.check_tx_bytes(tx_bytes, pktsize))
                    and (rx_bytes == pktsize),
                    "packet pass assert error",
                )
            else:
                self.verify(
                    self.pmdout.check_tx_bytes(tx_pkts, rx_pkts)
                    and (self.pmdout.check_tx_bytes(tx_bytes + 4, pktsize))
                    and ((rx_bytes + 4) == pktsize),
                    "packet pass assert error",
                )
        else:
            self.verify(rx_err == 1 or tx_pkts == 0, "packet drop assert error")
        return out
--<snip>--

      Can someone please tell me why these tx_bytes and rx_bytes calculations are different for QLogic (fastlinq) NICs versus other NICs?

I don't know the reason why fastlinq has this behavior in DPDK, so I'm CCing 
the dev mailing list - maybe someone there will have the historical knowledge 
to answer.

Otherwise, in terms of DTS, this is again an example of a workflow which we do 
not allow in new DTS.
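
For what it's worth, reading the snippet literally (and assuming check_tx_bytes() boils down to an equality check here, which I have not verified), the two branches only differ in whether the byte counters are expected to include the 4-byte CRC:

------------------------------

# Illustrative only: what the two verify() branches above expect numerically.
pktsize = 1518  # example frame size

# fastlinq (QLogic) branch: counters expected to match the frame size exactly.
fastlinq_expected_rx_bytes = pktsize        # 1518
# All other NICs: counters expected to be 4 bytes short of the frame size,
# i.e. the 4-byte CRC is apparently not included in the byte counters.
other_expected_rx_bytes = pktsize - 4       # 1514

assert fastlinq_expected_rx_bytes - other_expected_rx_bytes == 4

------------------------------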



2. TestSuite_stats_checks.py:
       The test test_stats_checks sends 2 packets: ETH/IP/RAW(30) and ETH/IP/RAW(1500).

      In function send_packet_of_size_to_tx_port(), lines 174 to 185:
      --<snip>--

        if received:
            self.verify(tx_pkts_difference >= 1, "No packet was sent")
            self.verify(
                tx_pkts_difference == rx_pkts_difference,
                "different numbers of packets sent and received",
            )
            self.verify(
                tx_bytes_difference == rx_bytes_difference,
                "different number of bytes sent and received",
            )
            self.verify(tx_err_difference == 1, "unexpected tx error")
            self.verify(rx_err_difference == 0, "unexpected rx error")

      --<snip>--

      This test expects the packet with payload size 30 to pass on both RX and TX, which works fine, but for the packet with payload size 1500 it seems to expect RX to pass and TX to fail?
      I did not get this part. The default MTU size is 1500. When scapy sends the packet as ETH+IP+1500, the packet size is 18+20+1500 = 1538 bytes. And even if the NIC supports 2 VLANs, the maximum it can accept is MTU+ETH+CRC+2*VLAN = 1526 bytes.
      So according to my understanding the packet should be dropped, the rx_error counter should increase, and there should not be any increment in the good/error packet counters for the TX port.

This is not a testsuite that we run at our lab, but I have read through the testplan and the test file. I think your math makes sense, and I would expect rx_err_difference to be 1 in this scenario. When we rework this testsuite, we will obviously need to start testpmd with various NICs, send packets with RAW(1500), and see whether port stats show rx_err as 1 or 0. I am curious to see whether this is universal behavior in DPDK, or just behavior unique to the Intel 700 series (legacy DTS was often written towards the behavior of this device). A goal in rewriting our tests is ensuring that DPDK APIs (which we reach through testpmd) truly exhibit the same behavior across different NICs.
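
If you want to check this by hand in the meantime, something along these lines should show whether rx_err moves. This is a rough sketch assuming scapy on the TG and an interactive testpmd session on the DUT; the interface name and MAC below are placeholders, not DTS code:

------------------------------

from scapy.all import Ether, IP, Raw, sendp

TG_IFACE = "ens1f0"              # placeholder: your TG interface
DUT_MAC = "aa:bb:cc:dd:ee:ff"    # placeholder: MAC of the DUT port under test

# 14 (Ether) + 20 (IP) + 1500 (payload) = 1534 bytes before CRC, which is above
# the 1518-byte frame that a default 1500-byte MTU implies.
pkt = Ether(dst=DUT_MAC) / IP() / Raw(load=b"\x50" * 1500)
sendp(pkt, iface=TG_IFACE, count=1)

# Then, in the testpmd session on the DUT, compare counters before and after:
#   testpmd> show port stats 0
# and check whether RX-errors incremented by 1 or the packet was dropped with
# no counter change at all.

------------------------------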

Sorry about the half answer. Maybe someone else on the dev mailing list can explain how this RAW(1500) packet could be received on the rx port of any DPDK device.

I can say that we do have this stats_checks testsuite marked as a candidate for rewrite in new DTS in the current development cycle (DPDK 25.03). Maybe we can loop you into those conversations, since you have an interest in the subject? And, no pressure on this, but I will just add you to the invite list for the DPDK DTS meetings (they run once every 2 weeks) in case you want to join and discuss.


Can someone please tell me what the gap/missing part in my understanding is?

Thanks,
Bharati Bhole.


Thanks for getting involved - I'm glad to see more companies making use of DTS.
