I don't know if this made it earlier but here it goes again. Sorry if I repeat myself.
Hi,
I'm an old guy. It has always been my understanding that something needs to control,
that is synchronise, the signals across a given link. This needs to be controlled from
one end only, because if there were a slight variation in the clock speed of one device
relative to the other you would get signal drift and the bits would not coincide. In a
simplistic sense, when the clock pulses on both ends hit together all is fine; when they
mis-hit we are in no man's land.
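Just to put a rough number on the drift (an illustration only, the figures are made up):
if two free-running clocks differ by 0.01% (100 parts per million), they slip a full bit
time about every 10,000 bits. At 64 Kbps that is roughly

    1 / (64000 x 0.0001) = about 0.16 seconds

before the sampling point has wandered a whole bit, which is why one end has to supply
the clock for both.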
In order to achieve this most old systems used a master-slave relationship, i.e. the
master provides the clocking and sends it as a constant pulse to the remote end. Both
ends then use this to time the bit pulses. The idea here is to measure a bit
somewhere in the middle of its peak signal period. OK, I know it is a bit and
therefore the signal is a square pulse, but due to attenuation in the cable
(capacitance, resistance, reflection etc.) it is no longer square when the remote end
sees it.
If the clock rate is not matched to the bit rate sent on the wire we get a drift
of the signal and a failure may occur.
Cisco use the DCE end to set the clock rate. Once it is set to the appropriate speed,
the remote end knows what to do with it.
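In a lab with a back-to-back cable that looks something like this (interface numbers are
just placeholders; 56000 or 64000 are the usual lab choices):

    ! DCE end of the cable - it supplies the clock for both routers
    interface Serial0
     clock rate 64000
     no shutdown

    ! DTE end - no clock rate command needed, it simply recovers the clock
    interface Serial0
     no shutdown

The clock rate command takes bits per second, so 64000 here gives a 64K lab link.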
Most telcos and/or modem-type equipment supply the clocking and the router becomes a
DTE device. The carrier's gear provides the timing.
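If you are not sure which end of a cable a router thinks it is, show controllers will
tell you (output trimmed here; the exact wording varies by platform and cable):

    Router# show controllers serial 0
    ...
    V.35 DTE cable
    ...

Only the end that reports a DCE cable needs the clock rate command.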
The "bandwith" statement in the router has no effect on the data rate going out of an
interface. It is used for metrics and various management tasks and is totally
indepentant of the clock speed on a port.
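For example (assuming a 64K tail, and remembering the units are kilobits, not bits):

    interface Serial0
     ! bandwidth is in kilobits and only feeds routing metrics and
     ! utilisation maths; the real line speed still comes from the clock
     bandwidth 64

Nothing about the traffic changes; only what the router reports and what the routing
protocols calculate from it.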
For example, in Australia we often get routers where the BRI ports show 56K and the
serial ports by default are the 1544 or whatever some use. The ports work just dandy
on our 2048K and 64K services, but if we do not change the bandwidth statements we can
get 115% or more utilisation on a link. (This is a huge cost saver ;) ) The 115%
often creates heated discussion with clients as they get billed for using links over
the capacity they see. This is not true; only the graphs are out.
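Roughly where a figure like 115% comes from (illustrative numbers only): if the
management station divides measured traffic by the configured bandwidth, then 64K of
real traffic on a port still set to bandwidth 56 shows up as 64/56, about 114%
utilisation, even though the circuit is not over capacity at all. The fix is simply to
tell the router the truth about the circuit, e.g. for one of our 2048K services:

    interface Serial0
     ! match the configured figure to the real circuit speed (kilobits)
     bandwidth 2048

and the graphs come back under 100% where they belong.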
I'm sorry for being long-winded but this topic comes up a lot with various people I
deal with. This is a simplistic view and not meant to be totally accurate, but it
gives the general gist.
Teunis,
Hobart, Tasmania
Australia
On Wednesday, December 13, 2000 at 07:19:34 AM, Pierre-Alex wrote:
> Those questions may sound silly to some of you, but I have not found
> satisfactory answers in the literature or on the archives.
>
> 0. How do you choose the clock rate on a serial interface?
> 1. What is the relationship (if any) between the wire rate and the clock
> rate?
> 2. What is the relationship if any between the clock rate and the bandwidth?
> 3. How could clock rate speed be "gentle on cables"? (See archive below)
>
> THANKS
>
>
> FROM THE ARCHIVES I FOUND:
>
>
> >How does one know the proper clock rate to set on a DCE interface?
> >I understand
> >that in real world apps, this would be provided by the
> >Telco....but....in a lab situation,
> >or any other for that matter that requires two routers to be linked
> >through their serial interfaces,
> >what is the best way to determine the proper clock rate?
> >
> >Thanks,
> Roman
>
>
> Well, in the Cisco training labs, we generally used 56 or 64 Kbps. A
> conservative speed that was gentle on cable requirements.
>
> In a lab, you'll frequently find that slower is better, if, for
> example, you are running a debug and want to see events.
>
--
www.tasmail.com
_________________________________
FAQ, list archives, and subscription info: http://www.groupstudy.com/list/cisco.html
Report misconduct and Nondisclosure violations to [EMAIL PROTECTED]