Hello list,
     I have discovered some strange traffic-shaping behaviour in a setup I
built with Shorewall-4.0.2.
I know this may not be a Shorewall problem, but maybe somebody on the list
can help me or explain this situation.
     I have the following interfaces in the 'tcdevices' file:

#INTERFACE      IN-BANDWIDTH    OUT-BANDWIDTH
#
$EXT_IF         500kbit         248kbit
$INT1_IF        500mbit         500mbit
$INT2_IF        500mbit         500mbit
$DMZ_IF         500mbit         500mbit
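
For context, Shorewall turns each 'tcdevices' row into roughly the following
tc commands (a sketch only; the device name, HTB handle, and burst value are
illustrative assumptions, not taken from an actual Shorewall dump):

```shell
# Sketch of the setup for the $EXT_IF row above (eth0 assumed).
# Root HTB qdisc capped at the OUT-BANDWIDTH:
tc qdisc add dev eth0 root handle 1: htb default 130
tc class add dev eth0 parent 1: classid 1:1 htb rate 248kbit ceil 248kbit
# Ingress policer enforcing the IN-BANDWIDTH:
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
    police rate 500kbit burst 10k drop flowid :1
```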

     and the following rules in the 'tcrules' file for the interface under test ($INT1_IF):

31:F    $EXT_IF         $INT1_IF:$ADM_IP        all
32:F    $EXT_IF         $INT1_IF:$PRV_IP        all
33:F    $EXT_IF         $INT1_IF:$MY_NET        all
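
Each 'tcrules' entry of the form 'MARK:F SOURCE DEST' becomes a packet-marking
rule in the mangle FORWARD chain, roughly like this (a simplified sketch with
the shell variables left symbolic; the exact chains Shorewall builds differ):

```shell
# Approximate mangle rules behind the tcrules entries above:
iptables -t mangle -A FORWARD -i $EXT_IF -o $INT1_IF -d $ADM_IP -j MARK --set-mark 31
iptables -t mangle -A FORWARD -i $EXT_IF -o $INT1_IF -d $PRV_IP -j MARK --set-mark 32
iptables -t mangle -A FORWARD -i $EXT_IF -o $INT1_IF -d $MY_NET -j MARK --set-mark 33
```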


     and the following traffic classes in 'tcclasses':

$INT1_IF        31      70kbit  250kbit     2
$INT1_IF        32      50kbit  250kbit     3
$INT1_IF        33      50kbit  250kbit     4
$INT1_IF        30      10mbit  10mbit      5               default
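
These 'tcclasses' rows translate into HTB leaf classes plus SFQ leaf qdiscs
and fw filters keyed on the marks, roughly (a sketch; eth1 is an assumed name
for $INT1_IF, and the 2:1NN handles follow the pattern visible in the
'shorewall show tc' output later in this message):

```shell
# Sketch of the generated class for mark 31 on $INT1_IF:
tc class add dev eth1 parent 2:1 classid 2:131 htb rate 70kbit ceil 250kbit prio 2
tc qdisc add dev eth1 parent 2:131 handle 131: sfq perturb 10
tc filter add dev eth1 parent 2: protocol ip handle 31 fw classid 2:131
# ...and likewise for marks 32, 33 and the default class 30.
```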

     When I test the bandwidth for the default class, I get this
result:

lpc:~ # wget -v http://192.168.5.3:80/file.xyz
--16:33:59--  http://192.168.5.3/file.xyz
            => `file.xyz.18'
Connecting to 192.168.5.3:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 102,400,000 (98M) [chemical/x-xyz]

  4% [====>       ...            ] 4,720,189      1.04M/s    ETA 01:27

     But when I increase the default class's RATE and CEIL, I get a very
strange result:

$INT1_IF        31      70kbit  250kbit     2
$INT1_IF        32      50kbit  250kbit     3
$INT1_IF        33      50kbit  250kbit     4
$INT1_IF        30      100mbit 100mbit     5               default
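
To rule out side effects of restarting Shorewall between tests, the default
class can also be resized on the fly with tc (device name and class id here
match the 'shorewall show tc' dump in this message):

```shell
# Resize the default class without a Shorewall restart:
tc class change dev eth0 parent 2:1 classid 2:130 htb rate 100mbit ceil 100mbit prio 5
```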


lpc:~ # wget -v http://192.168.5.3:80/file.xyz
--16:34:17--  http://192.168.5.3/file.xyz
            => `file.xyz.19'
Connecting to 192.168.5.3:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 102,400,000 (98M) [chemical/x-xyz]

  5% [=====>      ...            ] 5,693,245    473.91K/s    ETA 02:27


     I expected the bandwidth to increase roughly tenfold, but instead it
dropped by about half.
     From the 'tc' output I can see that the traffic does flow through the
default class, with the correct 'rate' and 'ceil' parameters (class htb 2:130
parent 2:1 leaf 130):


   # shorewall show tc
Device eth0:
qdisc htb 2: r2q 10 default 130 direct_packets_stat 0 ver 3.17
  Sent 6110261 bytes 4077 pkt (dropped 0, overlimits 0 requeues 0)
  rate 0bit 0pps backlog 0b 0p requeues 0
qdisc ingress ffff: ----------------
  Sent 242226 bytes 4210 pkt (dropped 0, overlimits 0 requeues 0)
  rate 0bit 0pps backlog 0b 0p requeues 0
qdisc sfq 131: parent 2:131 limit 128p quantum 1514b flows 128/1024 
perturb 10sec
  Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
  rate 0bit 0pps backlog 0b 0p requeues 0
qdisc sfq 132: parent 2:132 limit 128p quantum 1514b flows 128/1024 
perturb 10sec
  Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
  rate 0bit 0pps backlog 0b 0p requeues 0
qdisc sfq 133: parent 2:133 limit 128p quantum 1514b flows 128/1024 
perturb 10sec
  Sent 4009 bytes 39 pkt (dropped 0, overlimits 0 requeues 0)
  rate 0bit 0pps backlog 0b 0p requeues 0
qdisc sfq 130: parent 2:130 limit 128p quantum 1514b flows 128/1024 
perturb 10sec
  Sent 6106252 bytes 4038 pkt (dropped 0, overlimits 0 requeues 0)
  rate 0bit 0pps backlog 0b 0p requeues 0
class htb 2:132 parent 2:1 leaf 132: prio 3 quantum 1500 rate 50000bit 
ceil 500000bit burst 1662b/8 mpu 0b overhead 0b cburst 2225b/8 mpu 0b 
overhead 0b level 0
  Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
  rate 0bit 0pps backlog 0b 0p requeues 0
  lended: 0 borrowed: 0 giants: 0
  tokens: 265920 ctokens: 35600

class htb 2:1 root rate 500000Kbit ceil 500000Kbit burst 626562b/8 mpu 0b 
overhead 0b cburst 626562b/8 mpu 0b overhead 0b level 7
  Sent 6110261 bytes 4077 pkt (dropped 0, overlimits 0 requeues 0)
  rate 763776bit 63pps backlog 0b 0p requeues 0
  lended: 0 borrowed: 0 giants: 0
  tokens: 10025 ctokens: 10025

class htb 2:133 parent 2:1 leaf 133: prio 4 quantum 1500 rate 50000bit 
ceil 500000bit burst 1662b/8 mpu 0b overhead 0b cburst 2225b/8 mpu 0b 
overhead 0b level 0
  Sent 4009 bytes 39 pkt (dropped 0, overlimits 0 requeues 0)
  rate 496bit 0pps backlog 0b 0p requeues 0
  lended: 39 borrowed: 0 giants: 0
  tokens: 242880 ctokens: 33296

class htb 2:130 parent 2:1 leaf 130: prio 5 quantum 5000 rate 100000Kbit 
ceil 100000Kbit burst 126600b/8 mpu 0b overhead 0b cburst 126600b/8 mpu 0b 
overhead 0b level 0
  Sent 6106252 bytes 4038 pkt (dropped 0, overlimits 0 requeues 0)
  rate 763280bit 63pps backlog 0b 0p requeues 0
  lended: 4038 borrowed: 0 giants: 0
  tokens: 10125 ctokens: 10125

class htb 2:131 parent 2:1 leaf 131: prio 2 quantum 1500 rate 70000bit 
ceil 500000bit burst 1687b/8 mpu 0b overhead 0b cburst 2225b/8 mpu 0b 
overhead 0b level 0
  Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
  rate 0bit 0pps backlog 0b 0p requeues 0
  lended: 0 borrowed: 0 giants: 0
  tokens: 192800 ctokens: 35600
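
For what it's worth, converting the wget figures into line rates shows the
first run sits close to its 10mbit ceil while the second run is nowhere near
100mbit (a quick check; wget's K/M suffixes are treated as binary units here):

```python
def to_bits_per_sec(value, unit):
    """Convert a wget transfer rate (binary K/M units) to bits per second."""
    factor = {"K": 1024, "M": 1024 ** 2}[unit]
    return value * factor * 8

rate_first = to_bits_per_sec(1.04, "M")     # first run, default class ceil 10mbit
rate_second = to_bits_per_sec(473.91, "K")  # second run, default class ceil 100mbit

print(round(rate_first / 1e6, 2))   # ~8.72 Mbit/s, near the 10mbit ceil
print(round(rate_second / 1e6, 2))  # ~3.88 Mbit/s, far below the 100mbit ceil
```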


     When I stop Shorewall I get full speed on my network (a different IP 
here, since I use DNAT in Shorewall):

lpc:~ # wget -v http://172.16.254.10:80/file.xyz
--16:50:57--  http://172.16.254.10/file.xyz
            => `file.xyz.20'
Connecting to 172.16.254.10:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 102,400,000 (98M) [chemical/x-xyz]

25% [===========================>    ...       ] 25,843,613    11.20M/s


     Thank you for any help and advice.
     Shubnik Aleksandr



_______________________________________________
Shorewall-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/shorewall-users
