Hello, Sam Reiter.

Thank you for your reply. I ran the benchmark test again: 2x2 @ 122.88 MS/s is OK (maybe the CPU "performance" governor was disabled in the initial test). I then ran the command with 3x3 @ 122.88 MS/s. The output is in the attachment.

    /usr/local/lib/uhd/examples/benchmark_rate \
        --args "type=n3xx,mgmt_addr=192.168.2.230,addr=192.168.10.2,second_addr=192.168.20.2,master_clock_rate=122.88e6,use_dpdk=1" \
        --duration 60 \
        --channels "0,1,2" \
        --rx_rate 122.88e6 \
        --rx_subdev "A:0 A:1 B:0" \
        --tx_rate 122.88e6 \
        --tx_subdev "A:0 A:1 B:0"

 

Best,

Panny Wang

 

From: Sam Reiter <sam.rei...@ettus.com>
Sent: November 8, 2019, 4:41
To: 王盼 <ruoyi...@126.com>
Cc: usrp-users@lists.ettus.com
Subject: Re: [USRP-users] questions about uhd-dpdk with n310

 

Panny Wang,

 

The cpufreq-info output looks good, but the ifconfig at the bottom is a bit confusing given what you've sent over up to this point. Can you send the exact ./benchmark_rate command that you're using (with all args included) to produce the output you sent over initially? The MPMD info in the last couple of messages doesn't seem consistent with this ifconfig output:

 

enp7s0f0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.73  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::9604:9cff:fed2:b1a3  prefixlen 64  scopeid 0x20<link>
        ether 94:04:9c:d2:b1:a3  txqueuelen 1000  (Ethernet)
        RX packets 114457  bytes 8586410 (8.5 MB)
        RX errors 0  dropped 3  overruns 0  frame 0
        TX packets 179513  bytes 37029298 (37.0 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device memory 0x95e80000-95efffff  

enp7s0f1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 8000
        inet 192.168.2.254  netmask 255.255.255.0  broadcast 192.168.2.255
        inet6 fe80::9604:9cff:fed2:b1a4  prefixlen 64  scopeid 0x20<link>
        ether 94:04:9c:d2:b1:a4  txqueuelen 1000  (Ethernet)
        RX packets 3404  bytes 296849 (296.8 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2196  bytes 243446 (243.4 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device memory 0x95e00000-95e7ffff  

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 63270  bytes 4016936 (4.0 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 63270  bytes 4016936 (4.0 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

 

Sam Reiter 

 

On Mon, Nov 4, 2019 at 7:32 PM 王盼 <ruoyi...@126.com> wrote:

Hey Sam Reiter,

The output of "cpufreq-info && ifconfig" is in the attachment. I have also put more information about my system there.

Both 10GbE links are bound to DPDK, so ifconfig cannot show them. My CPU clock is 2.7 GHz; maybe it is not powerful enough.

It would be great if you could help me confirm my configuration.



王盼


ruoyi...@126.com


On 11/5/2019 04:13, Sam Reiter <sam.rei...@ettus.com> wrote:

Hey Panny Wang,

 

You're correct, you should specify a second address with addr/second_addr, 
rather than addr0/addr1 - my bad. [1]
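Concretely, the dual-link device-args string can be assembled like this (a sketch only, using the management and streaming addresses quoted elsewhere in this thread; adjust the IPs to your setup):

```shell
# Build the UHD device-args string for dual 10GbE on an N310 with DPDK.
# Addresses below are the ones used elsewhere in this thread.
ARGS="type=n3xx,mgmt_addr=192.168.2.230"
ARGS="$ARGS,addr=192.168.10.2,second_addr=192.168.20.2"
ARGS="$ARGS,master_clock_rate=122.88e6,use_dpdk=1"
echo "$ARGS"
# Then pass it along: benchmark_rate --args "$ARGS" ...
```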

 

Assuming you're using both 10GbE links correctly, my next step would be to 
investigate the processor you're using. Something with a higher clock speed is 
generally recommended for higher streaming rates. 

 

Would you be able to send over the output of:

 

cpufreq-info && ifconfig

 

Best,

 

Sam Reiter 

 

[1] https://kb.ettus.com/Using_Dual_10_Gigabit_Ethernet_on_the_USRP_X300/X310

 

On Sun, Nov 3, 2019 at 8:53 PM 王盼 <ruoyi...@126.com> wrote:

Hello Sam Reiter,

When leveraging dual 10GbE links, I specify "addr=192.168.20.2,second_addr=192.168.10.2"; in my last email I didn't give the example. The result is not much different from using a single 10GbE link.

I think it should be "addr=<xxx.xxx.xxx.xxx>,second_addr=<xxx.xxx.xxx.xxx>", not "addr0=<xxx.xxx.xxx.xxx>,addr1=<xxx.xxx.xxx.xxx>". When I use "addr0=<xxx.xxx.xxx.xxx>,addr1=<xxx.xxx.xxx.xxx>", I get these errors:

[INFO] [MPMD] Initializing 3 device(s) in parallel with args: 
mgmt_addr0=192.168.2.230,type0=n3xx,product0=n310,serial0=316645B,claimed0=False,mgmt_addr1=192.168.2.230,type1=n3xx,product1=n310,serial1=316645B,claimed1=False,mgmt_addr2=192.168.2.230,type2=n3xx,product2=n310,serial2=316645B,claimed2=False,type=n3xx,mgmt_addr=192.168.2.230,addr1=192.168.10.2,addr2=192.168.20.2,master_clock_rate=122.88e6,use_dpdk=1

[ERROR] [RPC] Someone tried to claim this device again (From: 192.168.2.254)

[WARNING] [MPM.RPCServer] Someone tried to claim this device again (From: 
192.168.2.254)

Error: RuntimeError: Error during RPC call to `claim'. Error message: Someone 
tried to claim this device again (From: 192.168.2.254)

root@seu73:/home/seu# 

On 11/2/2019 02:30, Sam Reiter <sam.rei...@ettus.com> wrote:

Panny Wang,

 

I notice that you're only specifying a single streaming address in your call to 
benchmark rate, implying that you're only leveraging a single 10GbE link. You 
can specify "addr0=<xxx.xxx.xxx.xxx>,addr1=<xxx.xxx.xxx.xxx>" in your device 
args. 

 

Best,

Sam Reiter
SDR Applications Engineer
Ettus Research

 

On Wed, Oct 30, 2019 at 3:20 AM 王盼 via USRP-users <usrp-users@lists.ettus.com> wrote:

Hello Nate,

I want to use DPDK in UHD with an N310, following https://files.ettus.com/manual/page_dpdk.html, but the result is not satisfactory. I gathered from the user-list emails that you have done some research on this ("With an i7-4790k / Intel x520-DA2 and N310, to stream at full duplex over two channels at 125 MS/s, the lowest I can run my CPU clock freq at without flow control errors is 3.8 GHz using benchmark_rate and the native networking stack. Using DPDK I can run 2x2 @ 125 MS/s with my CPU freq locked at 1.5 GHz with no flow control errors.").

Maybe you can do me a favor and share some thoughts on my questions.

(1) I use benchmark_rate to test the streaming performance, but I only get 122.88 MS/s for 1 channel, or 61.44 MS/s for 2x2. When I run 2x2 @ 122.88 MS/s, a lot of samples are dropped.

Unfortunately, my goal is 4x4 @ 122.88 MS/s. I don't know whether that is possible with my present host machine, or what configuration the host machine would need.
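For what it's worth, a back-of-envelope check (my own arithmetic, assuming the sc16 wire format at 4 bytes per complex sample) shows why a 4x4 stream at this rate needs both 10GbE links in each direction:

```shell
# 4 channels x 122.88 MS/s x 4 bytes/sample (sc16) x 8 bits/byte, in Gb/s
awk 'BEGIN { printf "%.2f Gb/s per direction\n", 4 * 122.88e6 * 4 * 8 / 1e9 }'
# prints: 15.73 Gb/s per direction -- more than a single 10GbE link can carry
```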

Ubuntu Server 18.04, UHD 3.14.1.1, DPDK 17.11.6, dual 10GbE links (XG image loaded)

Host machine: 4 nodes, 8 cores per node, 32 cores total; CPU: Intel(R) Xeon(R) CPU E5-4650 0 @ 2.70GHz

More information about my host machine is in the attachment (hyper-threading disabled, cpufrequtils GOVERNOR="performance").
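As a quick cross-check that the "performance" governor actually took effect on every core (this assumes the sysfs cpufreq interface is present; the cpufreq-set line is from cpufrequtils and needs root):

```shell
# List the active scaling governor on each core via sysfs.
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
  [ -r "$g" ] && printf '%s: %s\n' "$g" "$(cat "$g")"
done
# To pin every core to "performance" with cpufrequtils (root required):
#   for c in $(seq 0 $(($(nproc) - 1))); do cpufreq-set -c "$c" -g performance; done
```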

   --args "type=n3xx,mgmt_addr=192.168.1.104,addr=192.168.20.2,master_clock_rate=122.88e6,use_dpdk=1" \
   --duration 60 \
   --channels "0,1" \
   --rx_rate 122.88e6 \
   --rx_subdev "A:0 A:1" \
   --tx_rate 122.88e6 \
   --tx_subdev "A:0 A:1"

   Benchmark rate summary:
     Num received samples:     2744145668
     Num dropped samples:      6030320380
     Num overruns detected:    921
     Num transmitted samples:  14684137560
     Num sequence errors (Tx): 0
     Num sequence errors (Rx): 0
     Num underruns detected:   67231
     Num late commands:        0
     Num timeouts (Tx):        0
     Num timeouts (Rx):        0


 


(2) In the txrx_loopback_to_file test, when I use the default --settling for 4x4 channels, I get an error: "UUUU Error: Receiver error ERROR_CODE_LATE_COMMAND".

If I change it to --settling 1, it works.

I would like to know what influence increasing --settling has on my streaming or sample data.

(--settling arg (=0.20000000000000001): settling time (seconds) before receiving)

 

Much appreciated.

 

Regards,

Panny Wang

 

 

_______________________________________________
USRP-users mailing list
USRP-users@lists.ettus.com
http://lists.ettus.com/mailman/listinfo/usrp-users_lists.ettus.com

Attachment: cpufreqinfo.log
Description: Binary data

Attachment: benchmark.log
Description: Binary data
