Re: [LARTC] can all internet traffic be directed thru 1 computer on a Router?

2006-02-14 Thread Nataniel Klug
Ian,

Let me try to understand.

You have a local network where many computers have access to the internet.
They all go through one modem/router. So now you want to put a gateway
server between your LAN and the outside world so you can manage the
traffic?

Of course it can be done. If there is anything else you can give us, we can
make an analysis.
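
For example, a minimal sketch of the usual approach (all addresses here are
hypothetical: LAN 192.168.1.0/24, the monitoring computer at 192.168.1.2,
the modem/router at 192.168.1.1). Point the other machines at the
monitoring computer as their default gateway, then on that computer:

# enable forwarding so the LAN traffic can pass through this box
echo 1 > /proc/sys/net/ipv4/ip_forward

# masquerade the LAN so reply traffic from the modem/router also
# comes back through this box instead of going straight to the clients
iptables -t nat -A POSTROUTING -o eth0 -s 192.168.1.0/24 -j MASQUERADE

Ethereal running on that computer will then see every packet the other
machines exchange with the internet.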

Regards,

Nataniel Klug
Manager, Cyber Nett
Brazil

- Original Message - 
From: Ian Stuart Turnbull [EMAIL PROTECTED]
To: lartc@mailman.ds9a.nl
Sent: Monday, February 13, 2006 5:32 PM
Subject: [LARTC] can all internet traffic be directed thru 1 computer on
a Router?


 Hello all,
 Is it possible [indeed, is this the right place to ask] to add iptables
 rules to force all internet traffic to go through a particular computer on
 a LAN?
 I have a 4-port router/modem that contains a BusyBox v0.61 Linux system. I
 am able to add entries to the iptables, though I don't really know what
 they do yet. I want to be able to use Ethereal on this one computer to
 check what web pages my children are visiting - being fairly strict, I
 don't want them visiting some of the more perverse sites.
 A friend told me this is possible.
 Can anyone help, please?

 _
 The new MSN Search Toolbar now includes Desktop search!
 http://toolbar.msn.co.uk/

 ___
 LARTC mailing list
 LARTC@mailman.ds9a.nl
 http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc

___
LARTC mailing list
LARTC@mailman.ds9a.nl
http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc


Re: [LARTC] filter performance/optimization questions

2006-02-14 Thread Imre Gergely


Jakub Wartak wrote:
 On Wednesday, 8 February 2006, at 18:29, Imre Gergely wrote:
 hi

 I'm using htb + u32 filters, and I was wondering if there is something one
 can optimize at this stage. I have a lot of filters (~50,000 per interface,
 and there are two interfaces) and around 4,500 classes per interface. The
 traffic going through this machine is around 210-230 Mbit/s at 50 kpps.
 As you can imagine, the load is pretty high. In fact (as it's a dual Xeon
 at 2.4 GHz), one CPU is always at 100% when the traffic increases.

 I did some tests with esfq (that brought the number of classes down to
 around 150), but the filters remained and the load was still 100%, and I
 get some packet loss because of that. Not much, around 1-2%, but it's
 enough :)

 Is there something I could do to bring the load down, short of replacing
 the whole system? I didn't find anything performance-related on the net,
 or in any documentation.

 thanks.
 
 Show us your dmesg, cat /proc/interrupts (or use itop to determine which
 card/interface is hogging the CPU), lsmod, and the .config from your
 kernel compilation. Also show us ip -s link.

[EMAIL PROTECTED] root]# cat /proc/interrupts
   CPU0   CPU1
  0:   55921457  383025821  IO-APIC-edge  timer
  1:  342259  IO-APIC-edge  i8042
  2:  0  0  XT-PIC  cascade
  8:  0  0IO-APIC-edge  rtc
 14:  1 13IO-APIC-edge  ide0
 24: 23261179891473249   IO-APIC-level  ioc0, eth1
 25: 305396 1034030719   IO-APIC-level  ioc1, eth2
 28:  625322546645   IO-APIC-level  eth0
NMI: 111277 253384
LOC:  438830354  438830358
ERR:  0
MIS:  0

(eth1 is the download interface; eth2 is the upload interface, on which
there is currently no htb)

dmesg attached.

[EMAIL PROTECTED] root]# lsmod
Module  Size  Used by
bcm5700   132208  0
e100   34304  0
mii 5440  1 e100

The .config and ip -s link output are attached.

 What ethernet cards do you have? Is NAPI enabled on them?

02:09.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5704 Gigabit
Ethernet (rev 03)
02:09.1 Ethernet controller: Broadcom Corporation NetXtreme BCM5704 Gigabit
Ethernet (rev 03)

 You could also disable connection tracking if that's not done already. 

iptables is used only on INPUT, for the firewall.
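
(Note: if connection tracking is compiled in, it tracks forwarded packets
too, even with rules only on INPUT. A sketch of exempting traffic from
tracking, assuming a kernel and iptables new enough to have the raw table,
and assuming the INPUT rules don't rely on conntrack state:

# skip connection tracking entirely for packets entering this box
iptables -t raw -A PREROUTING -j NOTRACK
)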

 And finally, are you using any libpcap-based application?

only occasionally, for a couple of seconds.

note: the initial system from the start of the thread was replaced with this one.
Bootdata ok (command line is root=/dev/md0 nousb)
Linux version 2.6.9-2.ast-smp ([EMAIL PROTECTED]) (gcc version 3.3.3 20040412 
(Red Hat Linux 3.3.3-7)) #1 SMP Sat Dec 18 13:31:32 EET 2004
BIOS-provided physical RAM map:
 BIOS-e820: 0000000000000000 - 000000000009fc00 (usable)
 BIOS-e820: 000000000009fc00 - 00000000000a0000 (reserved)
 BIOS-e820: 00000000000e0000 - 0000000000100000 (reserved)
 BIOS-e820: 0000000000100000 - 0000000040000000 (usable)
 BIOS-e820: 00000000ff7c0000 - 0000000100000000 (reserved)
No mptable found.
On node 0 totalpages: 262144
  DMA zone: 4096 pages, LIFO batch:1
  Normal zone: 258048 pages, LIFO batch:16
  HighMem zone: 0 pages, LIFO batch:1
ACPI: Unable to locate RSDP
Intel MultiProcessor Specification v1.4
Virtual Wire compatibility mode.
OEM ID: TYAN  Product ID: S2880  APIC at: 0xFEE00000
Processor #0 15:5 APIC version 16
Processor #1 15:5 APIC version 16
I/O APIC #2 Version 17 at 0xFEC00000.
I/O APIC #3 Version 17 at 0xFEBFE000.
I/O APIC #4 Version 17 at 0xFEBFF000.
Processors: 2
Built 1 zonelists
Kernel command line: root=/dev/md0 nousb console=tty0
Initializing CPU#0
PID hash table entries: 4096 (order: 12, 131072 bytes)
time.c: Using 1.193182 MHz PIT timer.
time.c: Detected 1793.890 MHz processor.
Console: colour VGA+ 80x25
Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes)
Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes)
Memory: 1026520k/1048576k available (1809k kernel code, 21300k reserved, 664k 
data, 176k init)
Calibrating delay loop... 3522.56 BogoMIPS (lpj=1761280)
Mount-cache hash table entries: 256 (order: 0, 4096 bytes)
CPU: L1 I Cache: 64K (64 bytes/line), D cache 64K (64 bytes/line)
CPU: L2 Cache: 1024K (64 bytes/line)
Using local APIC NMI watchdog using perfctr0
CPU: L1 I Cache: 64K (64 bytes/line), D cache 64K (64 bytes/line)
CPU: L2 Cache: 1024K (64 bytes/line)
CPU0: AMD Opteron(tm) Processor 244 stepping 08
per-CPU timeslice cutoff: 1024.01 usecs.
task migration cache decay timeout: 2 msecs.
Booting processor 1/1 rip 6000 rsp 10037f25f58
Initializing CPU#1
Calibrating delay loop... 3579.90 BogoMIPS (lpj=1789952)
CPU: L1 I Cache: 64K (64 bytes/line), D cache 64K (64 bytes/line)
CPU: L2 Cache: 1024K (64 bytes/line)
AMD Opteron(tm) Processor 244 stepping 08
Total of 2 processors activated (7102.46 BogoMIPS).
Using IO-APIC 2
Using IO-APIC 3
Using IO-APIC 4
Using local APIC timer interrupts.
Detected 12.457 MHz APIC timer.

[LARTC] Re: filter performance/optimization questions (Imre Gergely)

2006-02-14 Thread Paweł Staszewski

Can you also post:

mpstat -P ALL 1 20

iostat -x 1 10

and
opreport --symbols

??
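
Also, with ~50,000 u32 filters walked linearly per packet, the filter
lookup itself is probably most of the load. The hashing filters described
in the LARTC HOWTO cut that to a few lookups per packet. A rough sketch
only, not your actual config (the device, the 1: root handle, the
10.0.0.0/8 range and the flowids are all hypothetical here):

# a u32 root plus a 256-bucket hash table (handle 2:)
tc filter add dev eth1 parent 1:0 prio 5 protocol ip u32
tc filter add dev eth1 parent 1:0 prio 5 handle 2: protocol ip u32 divisor 256

# hash on the last octet of the destination address (offset 16 in the IP header)
tc filter add dev eth1 parent 1:0 prio 5 protocol ip u32 ht 800:: \
    match ip dst 10.0.0.0/8 hashkey mask 0x000000ff at 16 link 2:

# per-host filters then sit in single buckets, e.g. 10.0.0.123 in bucket 0x7b
tc filter add dev eth1 parent 1:0 prio 5 protocol ip u32 ht 2:7b: \
    match ip dst 10.0.0.123 flowid 1:10

With 256 buckets, each packet is matched against roughly 50,000/256, about
200 rules, instead of all 50,000.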

___
LARTC mailing list
LARTC@mailman.ds9a.nl
http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc


Re: [LARTC] Guarantee ICMP response time?

2006-02-14 Thread Stanislav Nedelchev
Hi Robin,
I didn't want to fake the ICMP echo_reply. I forgot to mention that in this
test I'm pinging my gateway, to be sure the ping response isn't higher for
some other reason.
I find that the ping response sometimes gets higher by about 10 ms, and
sometimes it doubles or more, but most of the time it is roughly constant.

Here is some data, if you find it interesting:

with shaper enabled
64 octets from 213.91.166.1: icmp_seq=22 ttl=254 time=30.9 ms
64 octets from 213.91.166.1: icmp_seq=23 ttl=254 time=40.9 ms
64 octets from 213.91.166.1: icmp_seq=24 ttl=254 time=14.3 ms
64 octets from 213.91.166.1: icmp_seq=25 ttl=254 time=14.4 ms
64 octets from 213.91.166.1: icmp_seq=26 ttl=254 time=34.2 ms
64 octets from 213.91.166.1: icmp_seq=27 ttl=254 time=14.2 ms
64 octets from 213.91.166.1: icmp_seq=28 ttl=254 time=14.2 ms
64 octets from 213.91.166.1: icmp_seq=29 ttl=254 time=14.2 ms
64 octets from 213.91.166.1: icmp_seq=30 ttl=254 time=31.1 ms
64 octets from 213.91.166.1: icmp_seq=31 ttl=254 time=14.3 ms
64 octets from 213.91.166.1: icmp_seq=32 ttl=254 time=14.2 ms
64 octets from 213.91.166.1: icmp_seq=33 ttl=254 time=130.9 ms
without shaper enabled
64 octets from 213.91.166.1: icmp_seq=10 ttl=254 time=517.2 ms
64 octets from 213.91.166.1: icmp_seq=11 ttl=254 time=545.4 ms
64 octets from 213.91.166.1: icmp_seq=12 ttl=254 time=573.8 ms
64 octets from 213.91.166.1: icmp_seq=13 ttl=254 time=628.6 ms
64 octets from 213.91.166.1: icmp_seq=14 ttl=254 time=635.3 ms
64 octets from 213.91.166.1: icmp_seq=15 ttl=254 time=666.0 ms
64 octets from 213.91.166.1: icmp_seq=16 ttl=254 time=694.3 ms
64 octets from 213.91.166.1: icmp_seq=17 ttl=254 time=718.1 ms
64 octets from 213.91.166.1: icmp_seq=18 ttl=254 time=746.2 ms
64 octets from 213.91.166.1: icmp_seq=19 ttl=254 time=749.8 ms
64 octets from 213.91.166.1: icmp_seq=20 ttl=254 time=778.1 ms


Hammond, Robin-David%KB3IEN wrote:
 Well, if you want the line to look less congested to a casual observer,
 you can fake the ICMP echo_reply (best to know which hosts are in fact
 on-line first). Faking the reply does not preclude actually sending the
 echo request, but allowing a duplicate (real) reply might look weird...


 On Tue, 14 Feb 2006, Stanislav Nedelchev wrote:

 Date: Tue, 14 Feb 2006 22:35:40 +0200
 From: Stanislav Nedelchev [EMAIL PROTECTED]
 To: lartc@mailman.ds9a.nl
 Subject: [LARTC] Guarantee ICMP response time?

 Hello to all people there.
 Can I guarantee ICMP response time no matter how loaded the internet
 line is?
 I have a typical NATed environment, like:

 External IP |linux router| LAN - 192.168.0.0/24

 I have an example setup with IMQ, but is it possible to do it also if I
 attach htb to eth0 and eth1, for example?

 If I start the shaper, ping is better than without the shaper, but it's
 not guaranteed; I mean the response time is not constant.

 Maybe I'm missing something.
 Is it possible with HTB or with something else like CBQ?
 Here is my example setup:




 echo Loading Traffic Shaper IMQ0 Upload
 tc qdisc  del dev imq0 root
 tc qdisc  add dev imq0 root handle 2: htb default 333 r2q 1

 tc class  add dev imq0 parent 2: classid 2:2 htb rate 192kbit

 #ICMP
 tc class  add dev imq0 parent 2:2 classid 2:30 htb rate 32kbit prio 0
 tc filter add dev imq0 parent  2:0 protocol ip handle 5 fw classid 2:30
 tc qdisc  add dev imq0 parent 2:30 handle 30: sfq perturb 1



 tc class  add dev imq0 parent 2:2 classid 2:24 htb rate 96kbit ceil
 160kbit prio 1
 tc filter add dev imq0 parent  2:0 protocol ip handle 1 fw classid 2:24

 tc qdisc  add dev imq0 parent 2:24 handle 24: sfq perturb 10

 tc class  add dev imq0 parent 2:2 classid 2:26 htb rate 32kbit ceil
 128kbit prio 3
 tc filter add dev imq0 parent 2:0 protocol ip handle 2 fw classid 2:26
 #tc qdisc  add dev imq0 parent 2:26 handle 26: sfq perturb 10

 tc class  add dev imq0 parent 2:2 classid 2:28 htb rate 16kbit ceil
 64kbit prio 5
 tc filter add dev imq0 parent  2:0 protocol ip handle 3 fw classid 2:28
 #tc qdisc  add dev imq0 parent 2:28 handle 28: sfq perturb 10

 tc  class  add dev imq0 parent  2:2 classid 2:333 htb rate 16kbit ceil
 128kbit prio 7
 tc  qdisc  add dev imq0 parent  2:333 handle 333: sfq perturb 10

 echo Done
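
 # Note: the "handle N fw" filters above match packets carrying an iptables
 # mark (fwmark), and traffic only reaches imq0 if it is redirected there;
 # neither rule is shown in this script. A sketch of what they might look
 # like, assuming the IMQ patch (the interface name, and any marks besides
 # the ICMP mark 5 used above, are hypothetical):
 #
 # iptables -t mangle -A PREROUTING -p icmp -j MARK --set-mark 5
 # iptables -t mangle -A POSTROUTING -o eth0 -j IMQ --todev 0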

 #-

 #-



 echo Loading Traffic Shaper imq1 Upload
 tc qdisc  del dev imq1 root
 tc qdisc  add dev imq1 root handle 2: htb default 333 r2q 1

 tc class  add dev imq1 parent 2: classid 2:2 htb rate 192kbit

 #ICMP
 tc class  add dev imq1 parent 2:2 classid 2:30 htb rate 32kbit prio 0
 tc filter add dev imq1 parent  2:0 protocol ip handle 5 fw classid 2:30
 tc qdisc  add dev imq1 parent 2:30 handle 30: sfq perturb 1



 tc class  add dev imq1 parent 2:2 classid 2:24 htb rate 96kbit ceil
 160kbit prio 1
 tc filter add dev imq1 parent  2:0 protocol ip handle 1 fw classid 2:24



 tc qdisc  add dev imq1 parent 2:24 handle 24: sfq perturb 10

 tc class  add