I would also reset the counters at hourly intervals when I'm tracking a
big-ish problem this way, and keep track of the statistics. You might
find that errors peak at certain times of the day. Counters that are
very old are not really very useful.
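
A rough sketch of that approach, assuming you poll a cumulative input-error
counter (such as ifInErrors) at regular intervals and record the samples; the
sample data below is made up:

    from collections import defaultdict
    from datetime import datetime

    def hourly_error_deltas(samples):
        """samples: list of (datetime, cumulative_input_errors) taken over
        the day. Returns errors accrued per hour of day, so time-of-day
        peaks stand out instead of one ever-growing total."""
        per_hour = defaultdict(int)
        for (t0, e0), (t1, e1) in zip(samples, samples[1:]):
            per_hour[t1.hour] += max(e1 - e0, 0)  # max() guards against a counter reset
        return dict(per_hour)

    # Made-up samples taken an hour apart (e.g. from polling ifInErrors)
    samples = [
        (datetime(2002, 12, 18, 9, 0), 120),
        (datetime(2002, 12, 18, 10, 0), 155),
        (datetime(2002, 12, 18, 11, 0), 410),
    ]
    print(hourly_error_deltas(samples))   # {10: 35, 11: 255} -> errors spiked before 11:00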

-----Original Message-----
From: Priscilla Oppenheimer [mailto:[EMAIL PROTECTED]] 
Sent: 18 December 2002 22:35
To: [EMAIL PROTECTED]
Subject: RE: Acceptable Amount of CRC Errors [7:59477]


On shared Ethernet, CRC errors are often the result of a collision. Let's
leave that aside, however, and assume that you are referring to CRC errors
on full-duplex Ethernet or serial links. CRC errors are caused by noise,
signal reflections, impedance mismatches, improperly installed demarcs,
faulty hardware, and other bad things that really shouldn't happen. The
number should be really low. That's helpful, eh? :-)

CRC errors should be lower on fiber-optic cabling than on copper cabling.
According to industry standards, fiber-optic cabling should not experience
more than one bit error per 10^11 bits. Copper cabling should not
experience more than one bit error per 10^6 bits.

Some documents from Cisco and other vendors specify a threshold of one bad
frame per megabyte of data. In other words, an interface should not
experience more than one CRC error per megabyte of data received. (The
"megabyte of data" threshold comes from the industry standards that state
that copper cables should not have a bit error rate that exceeds 1 in 10^6.)
This method is better than simply calculating a percentage of bad frames
compared to good frames, which does not account for the variable size of
frames. (If you have a constant flow of 64-byte frames, for example, and a
percentage of them is getting damaged, that probably represents a more
serious problem than the same percentage of 1500-byte frames getting
damaged. So, it's better to use a total number of bytes rather than a
total number of frames in the calculation.)
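
A quick sketch of that check (the counter values here are made up; on a
Cisco router they would come from the CRC and input-byte counts shown by
show interface):

    def crc_errors_per_megabyte(crc_errors, bytes_received):
        """Errors per megabyte of received data; more than 1 exceeds the threshold."""
        megabytes = bytes_received / 1_000_000
        return crc_errors / megabytes if megabytes else 0.0

    # Made-up counters: 350 CRC errors against 2.5 GB of input
    rate = crc_errors_per_megabyte(350, 2_500_000_000)
    print(f"{rate:.3f} CRC errors per MB")   # 0.140 -> within the 1-per-megabyte threshold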

When troubleshooting at the Data Link Layer, which deals with frames rather
than bits, you can't actually determine a bit error rate, but you can at
least get a rough estimate by considering the number of CRC errors compared
to the number of megabytes received.
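
A back-of-the-envelope version of that estimate, assuming (optimistically)
only one bit error per errored frame, so it's really a lower bound:

    def rough_bit_error_rate(crc_errors, bytes_received):
        """Rough lower-bound BER: count each CRC-errored frame as at least one bad bit."""
        bits = bytes_received * 8
        return crc_errors / bits if bits else 0.0

    # Same made-up counters as above: 350 errors against 2.5 GB (2e10 bits)
    print(f"{rough_bit_error_rate(350, 2_500_000_000):.2e}")   # 1.75e-08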

Some Cisco documentation simply states that a problem exists if input errors
are in excess of 1 percent of total interface traffic. This is easier to
remember, but it's actually just as hard to comprehend. The documents don't
specify whether you should compare the input errors to the number of frames
or the number of bytes received. If they mean frames, then we have the
problem already mentioned (no accounting for variable frame sizes). If they
mean bytes, then 1 percent is very high. On a loaded network, 1 percent of
total bytes represents a very high bit-error rate. You may want to use a
number less than 1 percent.
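
To put a rough number on that (an illustration, not a vendor formula):
treating 1 percent of bytes as errored, with at least one bad bit per
errored byte, implies a bit-error rate around 1250 times the 1-in-10^6
copper standard:

    copper_standard_ber = 1 / 10**6      # industry figure: 1 bit error per 10^6 bits
    one_percent_of_bytes_ber = 0.01 / 8  # >= 1 bad bit per errored byte -> 1.25e-3
    print(one_percent_of_bytes_ber / copper_standard_ber)   # 1250.0 -> ~1250x the standard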

When troubleshooting input errors, you should also consider the timeframe:
whether there has been a burst of errors and how long the burst lasted. The
telco practice, for example, is to report total errors along with errored
seconds.
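
A minimal sketch of that reporting style, assuming one error-count sample
per second (the data is illustrative):

    def summarize(per_second_error_counts):
        """Report total errors plus errored seconds, telco-style, so a short
        burst looks different from the same error count spread over an hour."""
        total_errors = sum(per_second_error_counts)
        errored_seconds = sum(1 for n in per_second_error_counts if n > 0)
        return total_errors, errored_seconds

    # Same 60 errors over one hour, very different pictures:
    burst = [60] + [0] * 3599                                  # all in a single second
    spread = [1 if i % 60 == 0 else 0 for i in range(3600)]    # one error a minute
    print(summarize(burst))    # (60, 1)
    print(summarize(spread))   # (60, 60)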

_______________________________

Priscilla Oppenheimer
www.troubleshootingnetworks.com
www.priscilla.com


Lupi, Guy wrote:
> 
> I remember looking at a link on Cisco's web site that stated an
> acceptable threshold for CRC errors on an interface. I believe it was
> something like CRCs could not exceed .001% of the total input packets
> on the interface. Has anyone else seen this link, or one like it? I am
> trying to determine the threshold for an alarm notification when
> polling for iferrors.
> 
> Guy H. Lupi




Message Posted at:
http://www.groupstudy.com/form/read.php?f=7&i=59509&t=59477
--------------------------------------------------
FAQ, list archives, and subscription info: http://www.groupstudy.com/list/cisco.html
Report misconduct and Nondisclosure violations to [EMAIL PROTECTED]