Re: [ath5k-devel] Regression - adhoc mode awful throughput
Hello all,

I had no time to test this, as my link situation is very complicated, but I found a trace while reading another regression report: https://bugzilla.kernel.org/show_bug.cgi?id=31922 This seems to be it. Any help?

On Sat, Apr 30, 2011 at 2:45 PM, Denis Periša de...@si-wifi.org wrote:

(Forgot to add CC to devel.) By the way, I have a very critical link, so no testing can be done on it. I can try in the late hours, but I run the nodes on slow machines with a USB stick instead of an HDD, so compiling a kernel sometimes takes close to 2 hours. I went to .38-rc1 to apply the ath9k patches for AP mode, and there was a problem with ath5k, so I couldn't use both cards in the same machine. I waited a long time for a stable .38 or a 39-rc, but there has been no progress, and I can't find anyone else complaining. It is easy to reproduce, though: I have a few links with different chipsets, so it's an ath5k issue for sure. Can someone try it? One link, ath5k on both ends. I have an rt61pci--ath5k link which seems to work fine on 2.6.38.4, but ath5k--ath5k is a no-go. I guess it's something with rate control, but I'm not much of a coder or debugger for that matter. Thanks for any help.

On Sat, Apr 30, 2011 at 4:59 AM, Denis Periša de...@si-wifi.org wrote:

No, I tried the next stable version... why should it be stable in the first place? Has anyone tried it? Thanks

On Sat, Apr 30, 2011 at 3:09 AM, Adrian Chadd adr...@freebsd.org wrote:

The obvious question - have you bisected the kernel versions to find which one introduced this regression?

Adrian

On 30 April 2011 03:13, Denis Periša de...@si-wifi.org wrote:

Hello to all, I have had this problem since the 2.6.38 kernel. I use a link in ad-hoc mode between two nodes. The link is on 5 GHz (the channel doesn't seem to matter; let's say 120). On the 2.6.37 kernel I get link speeds of up to 25 Mbit/s; with 2.6.38 (and the latest wireless-testing.git) I get around 1.2 Mbit/s. When I force the 54M rate, it goes up to a maximum of 7 Mbit! This is a disaster... I've been waiting a long time for someone to fix it in wireless-testing, but nothing so far. Am I the first to report this? Thank you!

___
ath5k-devel mailing list
ath5k-devel@lists.ath5k.org
https://lists.ath5k.org/mailman/listinfo/ath5k-devel
Re: [ath5k-devel] Regression - adhoc mode awful throughput
Hi, Denis

> Takayuki, have you tested it on 5ghz band?

Below shows my test result with channel 48, which is 5.24 GHz. I also got pretty good throughput on 5 GHz with 2.6.39-rc5-wl. BTW, did you use a fixed frequency in your test? In my case the channel is fixed to 48, as shown below.

root@RMR1:/# ifconfig wlan0
wlan0     Link encap:Ethernet  HWaddr 00:0E:8E:13:A9:F6
          inet addr:10.0.1.72  Bcast:10.0.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:18513 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:27980008 (26.6 MiB)  TX bytes:3672 (3.5 KiB)

root@RMR1:/# iwconfig wlan0
wlan0     IEEE 802.11abg  ESSID:test48
          Mode:Ad-Hoc  Frequency:5.24 GHz  Cell: 56:3E:BD:4D:5D:91
          Tx-Power=20 dBm
          RTS thr:off   Fragment thr:off
          Encryption key:off
          Power Management:off

root@RMR1:/# iperf -s -u -i 3.0    (command at the peer node: iperf -c 10.0.1.72 -u -b 20M -t 100)
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size: 108 KByte (default)
[  3] local 10.0.1.72 port 5001 connected with 192.168.1.242 port 54899
[ ID] Interval       Transfer     Bandwidth       Jitter   Lost/Total Datagrams
[  3]  0.0- 3.0 sec  7.16 MBytes  20.0 Mbits/sec  0.170 ms    0/ 5105 (0%)
[  3]  3.0- 6.0 sec  7.15 MBytes  20.0 Mbits/sec  0.250 ms    0/ 5103 (0%)
[  3]  6.0- 9.0 sec  7.14 MBytes  20.0 Mbits/sec  0.499 ms    0/ 5093 (0%)
[  3]  9.0-12.0 sec  7.17 MBytes  20.0 Mbits/sec  0.244 ms    0/ 5112 (0%)
[  3] 12.0-15.0 sec  7.15 MBytes  20.0 Mbits/sec  0.107 ms    0/ 5103 (0%)
[  3] 15.0-18.0 sec  7.15 MBytes  20.0 Mbits/sec  0.253 ms    0/ 5101 (0%)
[  3] 18.0-21.0 sec  7.15 MBytes  20.0 Mbits/sec  0.239 ms    0/ 5103 (0%)
[  3] 21.0-24.0 sec  7.15 MBytes  20.0 Mbits/sec  0.218 ms    0/ 5103 (0%)
[  3] 24.0-27.0 sec  7.15 MBytes  20.0 Mbits/sec  0.172 ms    0/ 5102 (0%)
[  3] 27.0-30.0 sec  7.14 MBytes  20.0 Mbits/sec  0.483 ms    0/ 5095 (0%)
[  3] 30.0-33.0 sec  7.16 MBytes  20.0 Mbits/sec  0.307 ms    0/ 5110 (0%)
[  3] 33.0-36.0 sec  7.15 MBytes  20.0 Mbits/sec  0.175 ms    0/ 5103 (0%)
[  3] 36.0-39.0 sec  7.15 MBytes  20.0 Mbits/sec  0.285 ms    0/ 5102 (0%)
[  3] 39.0-42.0 sec  7.15 MBytes  20.0 Mbits/sec  0.210 ms    0/ 5103 (0%)
[  3] 42.0-45.0 sec  7.15 MBytes  20.0 Mbits/sec  0.250 ms    0/ 5102 (0%)
[  3] 45.0-48.0 sec  7.15 MBytes  20.0 Mbits/sec  0.218 ms    0/ 5102 (0%)
[  3] 48.0-51.0 sec  7.15 MBytes  20.0 Mbits/sec  0.368 ms    0/ 5097 (0%)
[  3] 51.0-54.0 sec  7.03 MBytes  19.7 Mbits/sec  0.182 ms   91/ 5107 (1.8%)
[  3] 54.0-57.0 sec  7.16 MBytes  20.0 Mbits/sec  0.250 ms    0/ 5104 (0%)
[  3] 57.0-60.0 sec  7.15 MBytes  20.0 Mbits/sec  0.308 ms    0/ 5099 (0%)
[  3] 60.0-63.0 sec  7.16 MBytes  20.0 Mbits/sec  0.250 ms    0/ 5105 (0%)
[  3] 63.0-66.0 sec  7.15 MBytes  20.0 Mbits/sec  0.277 ms    0/ 5102 (0%)
[  3] 66.0-69.0 sec  7.15 MBytes  20.0 Mbits/sec  0.247 ms    0/
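The fixed-frequency IBSS setup shown above can be brought up with iw; the following is a hedged sketch, not the poster's exact commands. The interface name and SSID are taken from the iwconfig output, channel 48 corresponds to 5240 MHz, and the rest is an assumption about a typical setup.

```shell
# Sketch: join a fixed-frequency IBSS matching the test setup above.
# wlan0 and test48 come from the iwconfig output; adjust for your hardware.
ip link set wlan0 down
iw dev wlan0 set type ibss                     # switch the interface to ad-hoc mode
ip link set wlan0 up
iw dev wlan0 ibss join test48 5240 fixed-freq  # channel 48 = 5240 MHz
# The basic-rates knob mentioned later in the thread is also set at join time, e.g.:
#   iw dev wlan0 ibss join test48 5240 fixed-freq basic-rates 6,12,24
```

These are device-configuration commands and need real Atheros hardware (and root) to run.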
Re: [ath5k-devel] Regression - adhoc mode awful throughput
Hi,

Today I pulled the most recent wireless-testing (2.6.39-rc5-wl) and tried to verify IBSS throughput between two AR5414 cards. (Profile: both boards are net4826.) I got up to 20 Mbps of throughput between them, which looks normal for this environment, so not bad at all. Back in February I also got reasonable throughput with 2.6.38 on a similar test bed. The basic-rate parameter value (iw ibss command) can have some impact on throughput, but 1 Mbps sounds far too low to be explained by that parameter alone.

But, JFYI, let me report a different problem I faced while doing these tests. The receiver side frequently (always, I can say) crashed when the traffic was pretty high (20 Mbps in my case). Sometimes the Ethernet driver seems to crash when it receives too many packets. I have not spent much time on the crash so far, but I feel it is unrelated to the throughput issue.

regards
Takayuki Kaiso

--
root@RMR1:/# iperf -s -u -i 3.0
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size: 108 KByte (default)
[  3] local 10.0.1.71 port 5001 connected with 192.168.3.242 port 42610
[ ID] Interval       Transfer     Bandwidth       Jitter   Lost/Total Datagrams
[  3]  0.0- 3.0 sec  7.08 MBytes  19.8 Mbits/sec  0.701 ms   44/ 5096 (0.86%)
[  3]  3.0- 6.0 sec  7.12 MBytes  19.9 Mbits/sec  0.800 ms    4/ 5084 (0.079%)
[  3]  6.0- 9.0 sec  7.17 MBytes  20.0 Mbits/sec  0.339 ms   10/ 5122 (0.2%)
[  3]  9.0-12.0 sec  6.96 MBytes  19.5 Mbits/sec  0.556 ms   73/ 5041 (1.4%)
[  3] 12.0-15.0 sec  7.13 MBytes  19.9 Mbits/sec  0.830 ms   59/ 5145 (1.1%)
[  3] 15.0-18.0 sec  7.19 MBytes  20.1 Mbits/sec  0.292 ms    2/ 5128 (0.039%)
[  3] 18.0-21.0 sec  7.14 MBytes  20.0 Mbits/sec  0.285 ms   10/ 5102 (0.2%)
[  3] 21.0-24.0 sec  7.07 MBytes  19.8 Mbits/sec  0.834 ms   42/ 5085 (0.83%)
[  3] 24.0-27.0 sec  7.16 MBytes  20.0 Mbits/sec  0.344 ms   12/ 5120 (0.23%)
BUG: unable to handle kernel NULL pointer dereference at 0002
IP: [<c895fc3b>] ieee80211_rx_handlers+0x6ab/0x1a90 [mac80211]
*pde =
Oops: [#1] PREEMPT
last sysfs file: /sys/devices/pci:00/:00:0f.0/ieee80211/phy1/index
Modules linked in: scx200_wdt xt_NOTRACK iptable_raw xt_state nf_defrag_ipv4 nf_conntrack pppoe pppox ipt_REJECT xt_TCPMSS ipt_LOG xt_multiport xt_mac xt_limit iptable_mangle iptable_filte]
Pid: 1303, comm: iperf Not tainted 2.6.39-rc5-wl #1
EIP: 0060:[<c895fc3b>] EFLAGS: 00010282 CPU: 0
EIP is at ieee80211_rx_handlers+0x6ab/0x1a90 [mac80211]
EAX: EBX: c79a8000 ECX: 0001 EDX: c7a3a634
ESI: c7809ee4 EDI: c7b9c022 EBP: c7809e54 ESP: c7809dd4
DS: 007b ES: 007b FS: GS: SS: 0068
Process iperf (pid: 1303, ti=c7808000 task=c7a21090 task.ti=c7bfa000)
Stack: 0001 0002 c128f863 0292 c7809e34 c7809e40 c79a8000 c7809e0c 0008 c7a30088 0088 00025220 0003 c7809e20 0691 0030 c7809e20 c102cdc3 c7ae8200 c7a3a360 c7b78318
Call Trace:
 [<c128f863>] ? do_IRQ+0x43/0x8d
 [<c102cdc3>] ? irq_exit+0x43/0x60
 [<c1206914>] ? skb_queue_tail+0x54/0x60
 [<c8961266>] ieee80211_prepare_and_rx_handle+0x246/0x8a0 [mac80211]
 [<c1208802>] ? __alloc_skb+0x32/0x120
 [<c8961c24>] ieee80211_rx+0x2e4/0x970 [mac80211]
 [<c1208831>] ? __alloc_skb+0x61/0x120
 [<c89e5ce8>] ath5k_tasklet_rx+0x308/0x850 [ath5k]
 [<c1003f02>] ? handle_irq+0x12/0x80
 [<c1003f02>] ? handle_irq+0x12/0x80
Re: [ath5k-devel] Regression - adhoc mode awful throughput
Takayuki, have you tested it on the 5 GHz band? Thanks
[ath5k-devel] Regression - adhoc mode awful throughput
Hello to all,

I have had this problem since the 2.6.38 kernel. I use a link in ad-hoc mode between two nodes. The link is on 5 GHz (the channel doesn't seem to matter; let's say 120). On the 2.6.37 kernel I get link speeds of up to 25 Mbit/s; with 2.6.38 (and the latest wireless-testing.git) I get around 1.2 Mbit/s. When I force the 54M rate, it goes up to a maximum of 7 Mbit! This is a disaster... I've been waiting a long time for someone to fix it in wireless-testing, but nothing so far. Am I the first to report this? Thank you!
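Forcing a fixed rate as described can be done with iw's bitrate mask. This is a hedged sketch of one way to do it, not necessarily the poster's actual commands; the interface name wlan0 is an assumption.

```shell
# Sketch: pin the 5 GHz legacy bitrate to 54 Mbit/s, as in the
# "force 54M rate" experiment above; wlan0 is an assumption.
iw dev wlan0 set bitrates legacy-5 54

# Restore automatic rate selection by passing an empty mask:
iw dev wlan0 set bitrates
```

These are device-configuration commands and require a mac80211 wireless interface to run.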
Re: [ath5k-devel] Regression - adhoc mode awful throughput
The obvious question - have you bisected the kernel versions to find which one introduced this regression?

Adrian