Keith,

    Don't ever listen to a salesperson.  Ever!  What is the ratio of
collisions to frames output on that interface to the provider?  Cisco
recommends keeping collisions under 1 out of every 1000 frames, although 1
out of every 100 isn't bad.  If it's worse than 1 out of every 100,
definitely get them to make it full duplex.  Frames queueing up on this
interface could be causing problems with the others.  Definitely turn on
CEF.  If they want to limit your network speed, it should occur on their
interface to their own equipment, not yours.  NBAR (Network-Based
Application Recognition) is available in 12.2 and does a lot of what
Packeteer can do.  Assuming you've got adequate memory (do a 'sh mem' and
check how much is free), I'd bump up both the buffers a bit and the queues
on the interfaces.  It shouldn't add much more CPU load.  Do 200/300
permanent/max for small buffers, 100/150 for middle, and 75/150 for big.
Double the size of the interface queues that have drops -- something like
the sketch below.  Run with this for a day and see how it looks.  Also, do
a 'sh int stat' to see the ratio of process-switched to fast-switched
packets; that ratio should improve with CEF.  Hope this helps.  Let me know
if you need more help.
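
Roughly, in config terms, that comes out to something like this (the
interface name and hold-queue value are just examples -- check what 'show
interfaces' reports for your current output queue size and drops before
doubling it):

ip cef
!
buffers small permanent 200
buffers small max-free 300
buffers middle permanent 100
buffers middle max-free 150
buffers big permanent 75
buffers big max-free 150
!
interface FastEthernet0/0
 hold-queue 80 out
!
! 'hold-queue 80 out' assumes the usual 40-packet output default; double
! whatever yours actually is.  Then watch it for a day with 'show buffers'
! and 'sh int stat'.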

Chuck Church
CCIE #8776, MCNE, MCSE

Date: Sat, 23 Nov 2002 18:18:16 GMT
From: "Keith Woodworth" 
Subject: Re: Apparent packet loss... [7:57922]

On Sat, 23 Nov 2002, The Long and Winding Road wrote:

|->> They have told us to config our Ethernet port to half duplex so packets
|->> will be retransmitted if they get lost in their ATM cloud, so we have a
|->> fairly high collision rate on this port. I don't know enough about ATM to
|->> say if this is good or bad...?
|->
|->
|->CL: huh? retransmission is determined between the source and
|->destination hosts, not by routers along the way. this half-duplex
|->instruction doesn't make sense to me.

It doesn't make sense to me either, but before we put in the 7206 we had
their 7204 as the gateway connected to a switch, and it was set half-duplex
even before I started here. I'm going to dig more into this.

The part of this that annoys me is that when I asked my boss about it, he
said the provider would charge us an extra $2k/month to run the port
full-duplex... Telus is hurting and is trying to squeeze as much as it can
out of us and everyone else.

|->CL: have you considered doing traffic studies to determine if any QoS-type
|->services could be of benefit? anything like traffic shaping, random early
|->detection, things like that?

We have started doing that, because we noticed that outbound traffic was
higher than inbound. About six weeks ago we moved the routers onto a switch,
as a first step, just so we could sniff the traffic via port spanning. We
started at 4 PM, and within an hour we found that 50-60% of outbound traffic
was riding on port 1214 (Kazaa, etc.). At that time outbound traffic was
pushing 18 Mbps; inbound was about 15 Mbps.
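
For reference, the port spanning itself was just a basic SPAN session,
something along these lines on an IOS-based Catalyst -- the session number
and interface names are placeholders, not our actual ports:

monitor session 1 source interface FastEthernet0/1 both
monitor session 1 destination interface FastEthernet0/2
!
! verify with 'show monitor session 1' and hang the sniffer off the
! destination port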

Historically traffic was 8-10 Mbps out and 15-18 Mbps in. P2P is killing us.

A few simple ACLs have been put in place to rate-limit outgoing P2P traffic
on that port, which has helped. We are also looking at packet-shaping
possibilities. My boss wants a Packeteer... but I'd like to see if I can do
something with the router instead of spending 20 grand.
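
What we have so far is basically CAR keyed off a port-1214 ACL, roughly like
this sketch -- the rate, burst sizes, and interface name are placeholders,
not our production values:

access-list 101 permit tcp any any eq 1214
access-list 101 permit tcp any eq 1214 any
!
interface FastEthernet0/0
 rate-limit output access-group 101 2000000 375000 750000 conform-action transmit exceed-action drop
!
! 'show interfaces rate-limit' shows the conformed/exceeded counters.  If
! the IOS has NBAR, 'ip nbar protocol-discovery' on the interface plus
! 'show ip nbar protocol-discovery' is another way to see what's riding on
! the link.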

|->CL: according to the following link, up to 400,000 pps
|->
|->http://www.cisco.com/warp/public/cc/pd/rt/7200/prodlit/c7200_ds.htm
|->
|->your description doesn't indicate you have oversubscribed the backplane.
|->

Yeah, I don't think we are either, now that I've seen some numbers. I was
looking for specs on the NSE-1, not the 7206. Thanks for the link.

|->> Any way to actually tell for certain if the router is dropping packets?
|->
|->show buffers
|->show queueing
|->show queue interface etc.

All the buffer pools are showing misses/failures, but these have the most:

Small buffers, 104 bytes (total 50, permanent 50, peak 201 @ 7w0d):
     44 in free list (20 min, 150 max allowed)
     1991931468 hits, 98395 misses, 43142 trims, 43142 created
     2371 failures (0 no memory)
Middle buffers, 600 bytes (total 25, permanent 25, peak 92 @ 3d20h):
     23 in free list (10 min, 150 max allowed)
     43042905 hits, 2828 misses, 2508 trims, 2508 created
     703 failures (0 no memory)
Big buffers, 1524 bytes (total 50, permanent 50, peak 68 @ 6d12h):
     50 in free list (5 min, 150 max allowed)
     12398616 hits, 359 misses, 81 trims, 81 created
     79 failures (0 no memory)

So according to the docs on CCO about buffers, misses/failures usually lead
to dropped packets. This leads me to believe that data is coming in at a
rate higher than the RP can keep up with. I'll have to look at upping the
number of permanent buffers and see if that helps.

Thanks,
Keith




