Re[8]: high load system do not take all CPU time

2012-02-25 Thread Коньков Евгений
Hello, Robert.

You wrote on 26 December 2011 at 23:54:59:


RB [drivelectomy -- 200+ lines]

RB You've been told the following, *repeatedly*:

RB Your hardware is not capable of keeping up with the level of network traffic
RB it is being subjected to.

RB Realtek cards and the 're' device driver are a *BAD*CHOICE* for systems with
RB heavy network traffic.  They're merely 'medium lousy' on a lightly-loaded
RB system, but you don't notice the problems under light loads.

RB You have two choices:
RB   1) live with the crappy performance
RB   2) get a better quality network card.

A better card does not change the situation: http://youtu.be/f90nMtNdKB8

You can download the full video here:
http://www.filehosting.org/file/details/316076/1.rar

igb3@pci0:1:0:3: class=0x02 card=0x00018086 chip=0x15218086 rev=0x01 hdr=0x00
    vendor   = 'Intel Corporation'
    class    = network
    subclass = ethernet

   12 root     -32    -     0K   624K WAIT    1   1:38  0.00% intr{swi4: cloc
    0 root     -68    0     0K   384K -       3   1:08  0.00% kernel{dummynet
    0 root     -16    0     0K   384K sched   2   0:44  0.00% kernel{swapper}
   12 root     -44    -     0K   624K WAIT    3   0:31  0.00% intr{swi1: neti
   12 root     -44    -     0K   624K WAIT    3   0:11  0.00% intr{swi1: neti
   12 root     -68    -     0K   624K WAIT    2   0:07  0.00% intr{irq263: ig
   12 root     -44    -     0K   624K WAIT    2   0:06  0.00% intr{swi1: neti
   13 root     -16    -     0K    64K sleep   1   0:06  0.00% ng_queue{ng_que
   13 root     -16    -     0K    64K sleep   1   0:06  0.00% ng_queue{ng_que
   13 root     -16    -     0K    64K sleep   1   0:06  0.00% ng_queue{ng_que
   13 root     -16    -     0K    64K sleep   1   0:06  0.00% ng_queue{ng_que
   12 root     -68    -     0K   624K WAIT    3   0:05  0.00% intr{irq276: re
   12 root     -44    -     0K   624K WAIT    3   0:04  0.00% intr{swi1: neti
   14 root     -16    -     0K    16K -       0   0:04  0.00% yarrow
   12 root     -68    -     0K   624K WAIT    1   0:01  0.00% intr{irq262: ig
   12 root     -68    -     0K   624K WAIT    1   0:01  0.00% intr{irq257: ig
   12 root     -68    -     0K   624K WAIT    0   0:01  0.00% intr{irq261: ig
   12 root     -68    -     0K   624K WAIT    2   0:01  0.00% intr{irq258: ig
   12 root     -68    -     0K   624K WAIT    3   0:01  0.00% intr{irq274: ig
   12 root     -68    -     0K   624K WAIT    3   0:01  0.00% intr{irq264: ig




-- 
Best regards,
 Коньков  mailto:kes-...@yandex.ru



Re[8]: high load system do not take all CPU time

2012-01-05 Thread Коньков Евгений

I've got it!!

When one of the netisr threads takes 100% of a CPU, the other netisr
threads do not take up the free CPU time.

http://piccy.info/view3/2444937/25e978a34d1da6b62e4e4602dee53d8b/
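
The netisr thread count and dispatch policy are tunable, so this can at
least be experimented with. A minimal sketch -- the names below are the
stock FreeBSD 9.x ones (on 8.x the rough equivalents are net.isr.direct
and net.isr.direct_force), and the values are only examples, not what is
actually set on this box:

# /boot/loader.conf (read at boot)
net.isr.maxthreads="4"      # one netisr thread per core (example value)
net.isr.bindthreads="1"     # pin each netisr thread to its own CPU

# at runtime on 9.x: direct / hybrid / deferred dispatch
sysctl net.isr.dispatch=deferred

# show the current netisr settings and the per-CPU workstream statistics
sysctl net.isr
netstat -Q

Note that all packets of one flow stay on one netisr thread, so a single
heavy flow can still peg one core no matter how many threads there are.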

In this case the network works without any problems:

last pid: 23632;  load averages:  5.53,  5.76,  5.72    up 6+00:09:50  20:37:43
292 processes: 12 running, 265 sleeping, 15 waiting
CPU 0:  9.4% user,  0.0% nice, 18.4% system, 36.1% interrupt, 36.1% idle
CPU 1:  2.4% user,  0.0% nice, 12.2% system, 62.4% interrupt, 23.1% idle
CPU 2:  1.6% user,  0.0% nice,  4.3% system, 85.1% interrupt,  9.0% idle
CPU 3:  1.6% user,  0.0% nice,  3.9% system, 86.3% interrupt,  8.2% idle
Mem: 613M Active, 2788M Inact, 315M Wired, 122M Cache, 112M Buf, 59M Free
Swap: 4096M Total, 30M Used, 4065M Free

  PID USERNAME PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
   12 root     -72    -     0K   160K CPU3    3  43.4H 100.00% {swi1: netisr 3}
   12 root     -72    -     0K   160K CPU2    2  28.6H  93.60% {swi1: netisr 1}
   12 root     -72    -     0K   160K CPU1    1 954:56  50.68% {swi1: netisr 2}
   11 root     155 ki31     0K    32K RUN     0  96.7H  36.91% {idle: cpu0}
   12 root     -72    -     0K   160K RUN     0 757:29  31.10% {swi1: netisr 0}
   11 root     155 ki31     0K    32K RUN     1  98.4H  21.44% {idle: cpu1}
   12 root     -92    -     0K   160K WAIT    0  17.8H  12.94% {irq256: re0}
   11 root     155 ki31     0K    32K RUN     2  97.7H  11.08% {idle: cpu2}
   11 root     155 ki31     0K    32K RUN     3  98.6H  10.16% {idle: cpu3}
   13 root     -16    -     0K    32K sleep   0 411:30   4.25% {ng_queue0}
   13 root     -16    -     0K    32K sleep   0 411:54   4.20% {ng_queue3}
   13 root     -16    -     0K    32K RUN     0 411:05   4.10% {ng_queue1}
   13 root     -16    -     0K    32K RUN     2 411:16   3.81% {ng_queue2}
 5588 root      21    0   222M   106M select  0 116:59   0.93% {mpd5}
 5588 root      21    0   222M   106M select  1   0:00   0.93% {mpd5}
 5588 root      21    0   222M   106M select  3   0:00   0.83% {mpd5}
 5588 root      21    0   222M   106M select  0   0:00   0.83% {mpd5}
 5588 root      21    0   222M   106M select  3   0:00   0.83% {mpd5}
 5588 root      21    0   222M   106M select  3   0:00   0.83% {mpd5}
 5588 root      21    0   222M   106M select  3   0:00   0.83% {mpd5}
32882 root      21    0 15392K  5492K select  1 313:35   0.63% snmpd
 5588 root      21    0   222M   106M select  3   0:00   0.54% {mpd5}
 5588 root      20    0   222M   106M select  0   0:00   0.15% {mpd5}
 5588 root      20    0   222M   106M select  2   0:00   0.15% {mpd5}
 5588 root      20    0   222M   106M select  3   0:00   0.15% {mpd5}


# netstat -W 1 re0
            input        (Total)           output
   packets  errs idrops      bytes    packets  errs      bytes colls
 96245 0 0   65271414 115828 0   80031246 0
104903 0 0   70367758 121943 0   85634456 0
102693 0 0   69663018 118800 0   83847075 0
108654 0 0   73776089 125368 0   88487518 0
100216 0 0   68186983 118522 0   80985757 0
 94819 0 0   63001720 107334 0   73020011 0
108428 0 0   73849974 127976 0   88674709 0


# vmstat -i
interrupt                          total       rate
irq14: ata0                     13841687         26
irq16: ehci0                      782489          1
irq23: ehci1                     1046847          2
cpu0:timer                    2140530447       4122
irq256: re0                    374422787        721
cpu1:timer                    2132118859       4106
cpu3:timer                    2108526888       4061
cpu2:timer                    2131292574       4105
Total                         8902562578      17147


1 users    Load  5.87  5.62  5.65    Jan  5 20:41

Mem:KB    REAL            VIRTUAL                     VN PAGER   SWAP PAGER
        Tot   Share      Tot    Share    Free          in   out    in   out
Act  737696   12284  3154940    38552  186192  count
All  946508   19628  5403252    88672          pages

 7.8%Sys  68.3%Intr   4.2%User   0.0%Nice  19.7%Idle

Interrupts:  38031 total
      18  ata0 14
       1  ehci0 16
       2  ehci1 23
    4123  cpu0:timer
   21603  re0 256
    4102  cpu1:timer
    4083  cpu3:timer
    4099  cpu2:timer

Name-cache:  26140 calls, 24882 hits (95%)

Re[8]: high load system do not take all CPU time

2011-12-28 Thread Коньков Евгений

RB [drivelectomy -- 200+ lines]

RB You've been told the following, *repeatedly*:

RB Your hardware is not capable of keeping up with the level of network traffic
RB it is being subjected to.

RB Realtek cards and the 're' device driver are a *BAD*CHOICE* for systems with
RB heavy network traffic.  They're merely 'medium lousy' on a lightly-loaded
RB system, but you don't notice the problems under light loads.

RB You have two choices:
RB   1) live with the crappy performance
RB   2) get a better quality network card.

Without just this one ipfw rule:

 queue 54 config pipe 54 queue 50 mask dst-ip 0x gred 0.002/10/30/0.1

 275 queue 54 all from any not 80,110 to any in recv re0

it works much better:

http://piccy.info/view3/2418620/59aa576c1006bbb046a13554d8468a6c/
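
Spelled out, the whole dummynet setup for that queue looks roughly like
the following; the pipe bandwidth and the dst-ip mask here are only
placeholders (the real values are not shown above), and the source-port
exclusion from rule 275 is left out:

# parent pipe; the bandwidth is only a placeholder value
ipfw pipe 54 config bw 100Mbit/s

# one GRED queue per destination IP; the mask value is assumed
ipfw queue 54 config pipe 54 queue 50 mask dst-ip 0xffffffff gred 0.002/10/30/0.1

# send inbound re0 traffic through that queue
ipfw add 275 queue 54 all from any to any in recv re0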


With igb cards I get problems too! It puts PPTP traffic only into queue0
instead of spreading it across all of them: queue0 queue1 queue2 queue3 =`(
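
If those are the igb RX queues: one possible explanation is that PPTP is
carried inside GRE, and the RSS hash may not spread GRE packets the way it
spreads TCP/UDP flows, so many tunnels can end up hashed into a single
queue. The skew shows up directly in the per-queue MSI-X interrupt counters:

# vmstat -i | grep igb

If only one igbX:que counter keeps growing while the others stay flat,
everything is landing in one hardware queue.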



-- 
Best regards,
 Коньков  mailto:kes-...@yandex.ru
