RE: Built-in CSU capacity [7:60022]

2002-12-31 Thread s vermill
Brett Johnson wrote:
> 
> I am running a test on two 3660 routers with multiple CSU cards
> and
> cross-over t1 cables between the two routers.  I am unable to
> exceed 75%
> capacity on any t1 no matter how much data I pump into the
> router.  

How are you measuring this?  The interface load?  Also, what are you pushing
at the routers?  Are you sure your source is generating what you think it
is?  What size of data blocks are you dealing with?  Are you seeing drops
due to the congestion that you should be creating?

> Below is
> a sample config for one of the interfaces, the rest are
> duplicates with
> different addresses:
> 
>   
> controller t1 1/0
>   framing esf
>   clock source internal
>   channel-group 0 timeslots 1-24 speed 64
> 
> interface serial 1/0:0
>   ip address 10.0.0.1 255.255.255.0
>   encapsulation ppp
>   no ip route cache
>   no ip mroute cache
> 
> ip route 0.0.0.0 0.0.0.0 serial 1/0:0
> 
> Is there a way to use the full bandwidth (CEF, 7200 router with
> CEF and
> multiport CSU, external CSUs...) or is this a limit of the
> hardware and
> setup?

There will be some TCP/IP/PPP overhead, but 25% sounds high.  
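
For a rough sense of scale, here's a back-of-the-envelope sketch of how much of
each frame the protocol headers actually eat, assuming minimum-size TCP/IP
headers over PPP (an assumption about the test traffic, not something taken
from your setup):

    # Rough header-overhead estimate for TCP/IP over PPP (a sketch; assumes
    # minimum header sizes, no TCP options, no header compression).
    PPP_OVERHEAD = 7      # flag + address + control + protocol + FCS bytes
    IP_HEADER = 20
    TCP_HEADER = 20

    def overhead_fraction(payload_bytes):
        frame = payload_bytes + PPP_OVERHEAD + IP_HEADER + TCP_HEADER
        return (frame - payload_bytes) / frame

    for size in (64, 512, 1460):
        print(f"{size:5d}-byte payload: {overhead_fraction(size):.1%} overhead")
    # Roughly 42% for tiny 64-byte payloads, but only about 3% at MTU-sized
    # frames, nowhere near 25% unless the test traffic is all small packets.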

> 
> Thank you,
> 
> Brett Johnson
> 
> 




Message Posted at:
http://www.groupstudy.com/form/read.php?f=7&i=60024&t=60022
--
FAQ, list archives, and subscription info: http://www.groupstudy.com/list/cisco.html
Report misconduct and Nondisclosure violations to [EMAIL PROTECTED]



RE: Built-in CSU capacity [7:60022]

2002-12-31 Thread Priscilla Oppenheimer
Brett Johnson wrote:
> 
> I am running a test on two 3660 routers with multiple CSU cards
> and
> cross-over t1 cables between the two routers.  I am unable to
> exceed 75%
> capacity on any t1 no matter how much data I pump into the
> router.  Below is
> a sample config for one of the interfaces, the rest are
> duplicates with
> different addresses:
> 
>   
> controller t1 1/0
>   framing esf
>   clock source internal
>   channel-group 0 timeslots 1-24 speed 64
> 
> interface serial 1/0:0
>   ip address 10.0.0.1 255.255.255.0
>   encapsulation ppp
>   no ip route cache
>   no ip mroute cache
> 
> ip route 0.0.0.0 0.0.0.0 serial 1/0:0
> 
> Is there a way to use the full bandwidth (CEF, 7200 router with
> CEF and
> multiport CSU, external CSUs...) or is this a limit of the
> hardware and
> setup?
> 
> Thank you,
> 
> Brett Johnson
> 
> 




Message Posted at:
http://www.groupstudy.com/form/read.php?f=7&i=60031&t=60022
--
FAQ, list archives, and subscription info: http://www.groupstudy.com/list/cisco.html
Report misconduct and Nondisclosure violations to [EMAIL PROTECTED]



RE: Built-in CSU capacity [7:60022]

2002-12-31 Thread s vermill
Try again Priscilla.  We didn't get that last post from you.  


Message Posted at:
http://www.groupstudy.com/form/read.php?f=7&i=60032&t=60022
--
FAQ, list archives, and subscription info: http://www.groupstudy.com/list/cisco.html
Report misconduct and Nondisclosure violations to [EMAIL PROTECTED]



RE: Built-in CSU capacity [7:60022]

2002-12-31 Thread Priscilla Oppenheimer
Oops. Sorry for the null message. See below for what I really wanted to say.

Priscilla Oppenheimer wrote:
> 
> Brett Johnson wrote:
> > 
> > I am running a test on two 3660 routers with multiple CSU
> cards
> > and
> > cross-over t1 cables between the two routers.  I am unable to
> > exceed 75%
> > capacity on any t1 no matter how much data I pump into the
> > router.  Below is
> > a sample config for one of the interfaces, the rest are
> > duplicates with
> > different addresses:
> > 
> >   
> > controller t1 1/0
> > framing esf
> > clock source internal
> > channel-group 0 timeslots 1-24 speed 64
> > 
> > interface serial 1/0:0
> > ip address 10.0.0.1 255.255.255.0
> > encapsulation ppp
> > no ip route cache
> > no ip mroute cache
> > 
> > ip route 0.0.0.0 0.0.0.0 serial 1/0:0
> > 
> > Is there a way to use the full bandwidth (CEF, 7200 router
> with
> > CEF and
> > multiport CSU, external CSUs...) or is this a limit of the
> > hardware and
> > setup?

It's partly a limitation of the protocols. To start with, ESF uses 8,000 bps
of your 1.544 Mbps for the framing bits. So you really only have 1.536 Mbps
for data.
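
For reference, the arithmetic behind those numbers is just the standard DS1
frame structure; a quick sketch:

    # Standard T1/DS1 framing arithmetic: 24 timeslots of 8 bits plus one
    # framing bit per frame, at 8000 frames per second.
    timeslots, bits_per_slot, framing_bits, frames_per_sec = 24, 8, 1, 8000

    line_rate = (timeslots * bits_per_slot + framing_bits) * frames_per_sec
    payload_rate = timeslots * bits_per_slot * frames_per_sec
    print(line_rate, payload_rate)   # 1544000 1536000 -> 8000 bps of framing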

Also, upper-layer protocols leave gaps between packets. Sure you can attempt
to send incessant pings with as little gap as possible, but there will
probably be some gap no matter what you do. Try to use frames as large as
the MTU to avoid too many gaps. Don't go above the MTU or you'll force IP
fragmentation which will worsen your results. Try increasing the interface
MTU for best results.

You could try testing with FTP instead of ping, but then you would want to
make sure the TCP window is also maxed out so that the sender doesn't have
to stop and wait for an ACK. With FTP, even more so than with Ping, you're
going to be affected by OS and host hardware constraints, however. How
quickly can the OS get data off the hard drive? How quickly can the other
side flush the buffer? How big is the buffer? How quickly can it write to
the hard drive and ACK? How long does it take to set up the control and data
sessions (3-way handshakes)? Is it using slow start? To maximize your
numbers, start recording after the handshakes and slow start.
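
To put a number on "maxed out": the window has to cover the bandwidth-delay
product or the sender stalls waiting for ACKs. A sketch, with the round-trip
time as an assumption you would measure on your own link:

    # Minimum TCP window needed to keep a T1 busy (bandwidth-delay product).
    link_bps = 1_536_000        # usable T1 payload rate
    rtt_seconds = 0.020         # assumed 20 ms round trip; measure with ping

    bdp_bytes = link_bps / 8 * rtt_seconds
    print(f"window needs to be at least {bdp_bytes:.0f} bytes")   # ~3840
    # On a back-to-back T1 the RTT is tiny, so typical default windows are
    # plenty; over a long-haul circuit the required window grows with RTT.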

Also, where are you doing the testing from? Are there intermediate devices
between the end points that are adding some delay, such as switches and
routers, or are you doing the testing right from the router? A faster router
might help. (You asked whether using a 7200 might help and it might, but
probably not much.)

Anyway, we would have to know more about your testing setup to know why
you're only getting 75%, but it's not too unexpected considering the typical
testing setups we all tend to use.
___

Priscilla Oppenheimer
www.troubleshootingnetworks.com
www.priscilla.com


> > 
> > Thank you,
> > 
> > Brett Johnson
> > 
> > 
> 
> 




Message Posted at:
http://www.groupstudy.com/form/read.php?f=7&i=60034&t=60022
--
FAQ, list archives, and subscription info: http://www.groupstudy.com/list/cisco.html
Report misconduct and Nondisclosure violations to [EMAIL PROTECTED]



RE: Built-in CSU capacity [7:60022]

2002-12-31 Thread Brett Johnson
While I agree that upper-layer protocols do add overhead and we have to take
into account the inter-frame gap, I still think I should be able to achieve
greater than 75%.  Additionally, doesn't the byte count from the SNMP counter
include the upper-layer protocol bytes?  I might be mistaken on that, but I
thought I read that the byte count is accumulated from the total packet size,
including data and protocol headers.  

What I take from the conversation so far is that I should be able to achieve
greater throughput if there are no problems with windowing, MTU,
retransmissions, and so on.  Since these are test routers I am playing with, I
will make sure there are no issues with them.  These were the first 3660s I
played with that have internal CSUs.  My thought was that using the multiple
lines as one virtual pipe and load balancing through static routes was causing
the problem.  

WS---\                  _T1_
WS----Switch---Router1  _T1_  Router2---Server
WS---/                  _T1_

If I am able to optimize both packet size and window size, what would be the
theoretical max throughput I could expect across a T-1 line (not taking into
account end-device issues or switch latency)?  This is more for my own
knowledge.

Thanks for all the responses.

Brett Johnson 

-Original Message-
From: Priscilla Oppenheimer [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, December 31, 2002 1:10 PM
To: [EMAIL PROTECTED]
Subject: RE: Built-in CSU capacity [7:60022]


Oops. Sorry for the null message. See below for what I really wanted to say.

Priscilla Oppenheimer wrote:
> 
> Brett Johnson wrote:
> > 
> > I am running a test on two 3660 routers with multiple CSU
> cards
> > and
> > cross-over t1 cables between the two routers.  I am unable to
> > exceed 75%
> > capacity on any t1 no matter how much data I pump into the
> > router.  Below is
> > a sample config for one of the interfaces, the rest are
> > duplicates with
> > different addresses:
> > 
> >   
> > controller t1 1/0
> > framing esf
> > clock source internal
> > channel-group 0 timeslots 1-24 speed 64
> > 
> > interface serial 1/0:0
> > ip address 10.0.0.1 255.255.255.0
> > encapsulation ppp
> > no ip route cache
> > no ip mroute cache
> > 
> > ip route 0.0.0.0 0.0.0.0 serial 1/0:0
> > 
> > Is there a way to use the full bandwidth (CEF, 7200 router
> with
> > CEF and
> > multiport CSU, external CSUs...) or is this a limit of the
> > hardware and
> > setup?

It's partly a limitation of the protocols. To start with, ESF uses 8,000 bps
of your 1.544 Mbps for the framing bits. So you really only have 1.536 Mbps
for data.

Also, upper-layer protocols leave gaps between packets. Sure you can attempt
to send incessant pings with as little gap as possible, but there will
probably be some gap no matter what you do. Try to use frames as large as
the MTU to avoid too many gaps. Don't go above the MTU or you'll force IP
fragmentation which will worsen your results. Try increasing the interface
MTU for best results.

You could try testing with FTP instead of ping, but then you would want to
make sure the TCP window is also maxed out so that the sender doesn't have
to stop and wait for an ACK. With FTP, even more so than with Ping, you're
going to be affected by OS and host hardware constraints, however. How
quickly can the OS get data off the hard drive? How quickly can the other
side flush the buffer? How big is the buffer? How quickly can it write to
the hard drive and ACK? How long does it take to set up the control and data
sessions (3-way handshake?) Is it using slow start. To maximize your
numbers, start recording after the handshakes and slow start.

Also, where are you doing the testing from? Are there intermediate devices
between the end points that are adding some delay, such as switches and
routers, or are you doing the testing right from the router? A faster router
might help. (You asked whether using a 7200 might help and it might, but
probably not much?)

Anyway, we would have to know more about your testing setup to know why
you're only getting 75%, but it's not too unexpected considering the typical
testing setups we all tend to use.
___

Priscilla Oppenheimer
www.troubleshootingnetworks.com
www.priscilla.com


> > 
> > Thank you,
> > 
> > Brett Johnson




Message Posted at:
http://www.groupstudy.com/form/read.php?f=7&i=60038&t=60022
--
FAQ, list archives, and subscription info: http://www.groupstudy.com/list/cisco.html
Report misconduct and Nondisclosure violations to [EMAIL PROTECTED]



RE: Built-in CSU capacity [7:60022]

2002-12-31 Thread s vermill
Brett Johnson wrote:
> 
> While I agree that upper-layer protocols do add overhead and we
> have to take
> into account the inter-frame gap, I still think I should be
> able to achieve
> greater then 75%.  Additionally, doesn't the byte count from
> the snmp string
> include the upper-layer protocol bytes?  I might be mistaken on
> that, but I
> thought I read that the byte count is accumulated with the
> total packet size
> including data and protocol headers.  
> 
> What I take from the conversation so far is that I should be
> able to achieve
> greater throughput if there are no problems with windowing, mtu,
> retransmissions,  Since these are test routers I am playing
> with, I will
> make sure there are no issues with them.  These were the first
> 3660s I
> played with that have internal CSUs. My thought was using the
> multiple lines
> as one virtual pipe and load balancing through static routes
> was causing the
> problem.  
> 
> WS---\_T1_ 
> WSSwitch---Router1_T1_Router2-Server
> WS---/_T1_
> 
> If I am able to optimize both packet size and window size what
> would be the
> theoretical max throughput I could expect across a T-1 line
> (not taking into
> account end-device issues or switch latency)?  This is more for
> my own
> knowledge.

A T1 is capable of moving 192k bytes per second.  However, you still haven't
answered the question of how you are sourcing test data.  Without knowing
that, there isn't much more we can tell you.  It's pretty easy to visualize
two T1s between a pair of routers.  It's impossible to visualize your test
scenario without some help.
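
That 192 KB/sec figure is just the 1.536 Mbps payload rate divided by 8.  If
you also strip out assumed PPP/IP/TCP headers at MTU-sized frames, you get an
idea of the application-level ceiling (the header sizes are assumptions, as
before):

    # T1 payload rate in bytes/second, and approximate TCP goodput at MTU 1500.
    bytes_per_second = 1_536_000 // 8            # 192000
    mtu, ppp, ip, tcp = 1500, 7, 20, 20          # assumed minimum header sizes
    goodput = bytes_per_second * (mtu - ip - tcp) / (mtu + ppp)
    print(bytes_per_second, round(goodput))      # 192000  ~186000 bytes/s of data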

> 
> Thanks for all the responses.
> 
> Brett Johnson 
> 



Message Posted at:
http://www.groupstudy.com/form/read.php?f=7&i=60040&t=60022
--
FAQ, list archives, and subscription info: http://www.groupstudy.com/list/cisco.html
Report misconduct and Nondisclosure violations to [EMAIL PROTECTED]



RE: Built-in CSU capacity [7:60022]

2002-12-31 Thread Priscilla Oppenheimer
Brett Johnson wrote:
> 
> While I agree that upper-layer protocols do add overhead and we
> have to take
> into account the inter-frame gap, I still think I should be
> able to achieve
> greater then 75%. 

I was assuming we were counting all bytes, protocol headers included, in
order to get a network throughput measurement, rather than an application
throughput measurement. The only protocol overhead I was considering was the
ESF framing bits, which to be honest don't amount to much. In fact, if you
use 1.536 Mbps instead of 1.544 Mbps in the denominator, then you don't have
to worry about ESF. But the inter-frame gaps will still byte you, so to speak.

> Additionally, doesn't the byte count from
> the snmp string
> include the upper-layer protocol bytes?  I might be mistaken on
> that, but I
> thought I read that the byte count is accumulated with the
> total packet size
> including data and protocol headers.  

I'm not sure which byte count you're referring to or where you're measuring
it. But you will need to know what it counts in order to determine if you
should get better than 75%.

What does the load on the router show? That might be a better measurement
for what you're doing. It counts all bytes.
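
If the byte count is coming from SNMP, the MIB-II ifInOctets/ifOutOctets
counters do include every octet on the wire, headers and all, so the usual
utilization calculation is something like the sketch below; the sample values
and poll interval are placeholders, not your numbers:

    # Link utilization from two SNMP octet-counter polls (ifOutOctets counts
    # every byte sent, protocol headers included).  Sample values are made up.
    octets_t1 = 1_000_000      # ifOutOctets at first poll
    octets_t2 = 3_880_000      # ifOutOctets at second poll
    interval = 120             # seconds between polls

    utilization = (octets_t2 - octets_t1) * 8 / (interval * 1_536_000)
    print(f"{utilization:.1%}")    # 12.5% of the 1.536 Mbps payload rate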

> 
> What I take from the conversation so far is that I should be
> able to achieve
> greater throughput if there are no problems with windowing, mtu,
> retransmissions,  Since these are test routers I am playing
> with, I will
> make sure there are no issues with them.  These were the first
> 3660s I
> played with that have internal CSUs. My thought was using the
> multiple lines
> as one virtual pipe and load balancing through static routes
> was causing the
> problem.  

Multilinking and load balancing could affect the results. Stuff like that
takes processing time, during which possibly no bytes are being sent.

> 
> WS---\_T1_ 
> WSSwitch---Router1_T1_Router2-Server
> WS---/_T1_
> 
> If I am able to optimize both packet size and window size what
> would be the
> theoretical max throughput I could expect across a T-1 line
> (not taking into
> account end-device issues or switch latency)?  This is more for
> my own
> knowledge.

Strictly speaking, your theoretical max throughput could be 1.536 Mbps. But
it's unlikely you'll achieve that, especially if your testing involves the
workstations, server, switches and router shown in your drawing. If you
could test just from the router, you might be more likely to achieve max
throughput.

If your concern is what can you expect for your typical applications, then
you should test from the workstations, though.

The only answer to the question about expected throughput is that you have
to measure it, check out the protocol behavior with an analyzer or other
tools, do some optimization if possible, and then do some more measurements.
Theory is useless in this case. It's not theory that you care about. It's
the real world, with all its computer messiness, that you care about.
Throughput is a measurement. Don't confuse it with capacity or theoretical
capability.

And, have a HAPPY NEW YEAR!

Priscilla

> 
> Thanks for all the responses.
> 
> Brett Johnson 
> 
> -Original Message-
> From: Priscilla Oppenheimer [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, December 31, 2002 1:10 PM
> To: [EMAIL PROTECTED]
> Subject: RE: Built-in CSU capacity [7:60022]
> 
> 
> Oops. Sorry for the null message. See below for what I really
> wanted to say.
> 
> Priscilla Oppenheimer wrote:
> > 
> > Brett Johnson wrote:
> > > 
> > > I am running a test on two 3660 routers with multiple CSU
> > cards
> > > and
> > > cross-over t1 cables between the two routers.  I am unable
> to
> > > exceed 75%
> > > capacity on any t1 no matter how much data I pump into the
> > > router.  Below is
> > > a sample config for one of the interfaces, the rest are
> > > duplicates with
> > > different addresses:
> > > 
> > >   
> > > controller t1 1/0
> > >   framing esf
> > >   clock source internal
> > >   channel-group 0 timeslots 1-24 speed 64
> > > 
> > > interface serial 1/0:0
> > >   ip address 10.0.0.1 255.255.255.0
> > >   encapsulation ppp
> > >   no ip route cache
> > >   no ip mroute cache
> > > 
> > > ip route 0.0.0.0 0.0.0.0 serial 1/0:0
> > > 
> > > Is there a way to use the full bandwidth (CEF, 7200 router
> > with
> > > CEF and
> > > multiport CSU, external CSUs...) or is thi

Re: Built-in CSU capacity [7:60022]

2002-12-31 Thread The Long and Winding Road
this is beginning to sound like a job for TTCP

Check out:

http://www.netcraftsmen.net/id27.htm

for information about this process. I know it is supported on Cisco routers,
although I've not played with it much.
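
For what it's worth, the idea behind ttcp is simple enough to mock up on the
test hosts themselves. Here's a minimal memory-to-memory sender/receiver
sketch in Python using plain sockets; it is not the real ttcp tool, and the
port and block sizes are arbitrary:

    # Minimal ttcp-style throughput test (a sketch, not the real ttcp).
    # Run it with no arguments on one host (receiver), then with the
    # receiver's address as the argument on the other host (sender).
    import socket, sys, time

    PORT, BLOCK, COUNT = 5001, 8192, 2048     # arbitrary test parameters

    def receiver():
        srv = socket.socket()
        srv.bind(("", PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        total, start = 0, time.time()
        while True:
            data = conn.recv(65536)
            if not data:
                break
            total += len(data)
        secs = time.time() - start
        print(f"{total} bytes in {secs:.2f} s = {total * 8 / secs / 1e6:.3f} Mbps")

    def sender(host):
        sock = socket.create_connection((host, PORT))
        for _ in range(COUNT):
            sock.sendall(b"\x00" * BLOCK)     # memory-to-memory: no disk involved
        sock.close()

    if __name__ == "__main__":
        sender(sys.argv[1]) if len(sys.argv) > 1 else receiver()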

--
TANSTAAFL
"there ain't no such thing as a free lunch"




""Priscilla Oppenheimer""  wrote in message
[EMAIL PROTECTED]">news:[EMAIL PROTECTED]...
> Brett Johnson wrote:
> >
> > While I agree that upper-layer protocols do add overhead and we
> > have to take
> > into account the inter-frame gap, I still think I should be
> > able to achieve
> > greater then 75%.
>
> I was assuming we were counting all bytes, protocol headers included, in
> order to get a network throughput measurement, rather than an application
> throughput measurement. The only protocol overhead I was considering was
the
> ESF framing bits, which to be honest don't amount for much. In fact, if
you
> use 1.536 Mbps instead of 1.544 Mbps in the denominator, than you don't
have
> to worry about ESF. But the inter-frame gaps will still byte you, so to
speak.
>
> > Additionally, doesn't the byte count from
> > the snmp string
> > include the upper-layer protocol bytes?  I might be mistaken on
> > that, but I
> > thought I read that the byte count is accumulated with the
> > total packet size
> > including data and protocol headers.
>
> I'm not sure which byte count you're referring to or where you're
measuring
> it. But you will need to know what it counts in order to determine if you
> should get better than 75%.
>
> What does the load on the router show? That might be a better measurement
> for what you're doing? It counts all bytes.
>
> >
> > What I take from the conversation so far is that I should be
> > able to achieve
> > greater throughput if there are no problems with windowing, mtu,
> > retransmissions,  Since these are test routers I am playing
> > with, I will
> > make sure there are no issues with them.  These were the first
> > 3660s I
> > played with that have internal CSUs. My thought was using the
> > multiple lines
> > as one virtual pipe and load balancing through static routes
> > was causing the
> > problem.
>
> Multilinking and load balancing could affect the results. Stuff like that
> takes processing time during which no bytes are being sent possibly.
>
> >
> > WS---\   _T1_
> > WSSwitch---Router1_T1_Router2-Server
> > WS---/_T1_
> >
> > If I am able to optimize both packet size and window size what
> > would be the
> > theoretical max throughput I could expect across a T-1 line
> > (not taking into
> > account end-device issues or switch latency)?  This is more for
> > my own
> > knowledge.
>
> Strictly speaking, your theoretical max throughput could be 1.536 Mbps.
But
> it's unlikely you'll achieve that, especially if your testing involves the
> workstations, server, switches and router shown in your drawing. If you
> could test just from the router, you might be more likely to achieve max
> throughput.
>
> If your concern is what can you expect for your typical applications, then
> you should test from the workstations, though.
>
> The only answer to the question about expected throughput is that you have
> to measure it, check out the protocol behavior with an anlyzer or other
> tools, do some optimzation if possible, and then do some more
measurements.
> Theory is useless in this case. It's not theory that you care about. It's
> the real-world with all its computer messiness that you care about.
> Throughput is a measurment. Don't confuse it with capacity or theoretical
> capability.
>
> And, have a HAPPY NEW YEAR!
>
> Priscilla
>
> >
> > Thanks for all the responses.
> >
> > Brett Johnson
> >
> > -Original Message-
> > From: Priscilla Oppenheimer [mailto:[EMAIL PROTECTED]]
> > Sent: Tuesday, December 31, 2002 1:10 PM
> > To: [EMAIL PROTECTED]
> > Subject: RE: Built-in CSU capacity [7:60022]
> >
> >
> > Oops. Sorry for the null message. See below for what I really
> > wanted to say.
> >
> > Priscilla Oppenheimer wrote:
> > >
> > > Brett Johnson wrote:
> > > >
> > > > I am running a test on two 3660 routers with multiple CSU
> > > cards
> > > > and
> > > > cross-over t1 cables between the two routers.  I am unable
> > to
> > > 

Re: Built-in CSU capacity [7:60022]

2003-01-01 Thread s vermill
The Long and Winding Road wrote:
> 
> this is beginning to sound like a job for TTCP
> 
> Check out:
> 
> http://www.netcraftsmen.net/id27.htm
> 
> for information about this process. I know it is supported on
> Cisco routers,
> although I've not played with it much.
> 
> --
> TANSTAAFL
> "there ain't no such thing as a free lunch"
> 

It is supported by mid and higher-end routers but Cisco recommends that you
test *through* routers as opposed to testing from or between routers (as you
know, traffic that originates at routers is handled a little differently
than traffic that shows up at an interface).  I've used it several times,
which is one of the reasons I was hoping the original poster would provide
some detail on how the testing was being conducted.  Depending on the
horsepower of the machine that you use to source ttcp, you can approach, and
very likely exceed these days, 45 Mbps.  The last time I used ttcp I was
trying to simulate a saturated T3, but the machine didn't quite have it in
it to crank out test data that fast.




Message Posted at:
http://www.groupstudy.com/form/read.php?f=7&i=60068&t=60022
--
FAQ, list archives, and subscription info: http://www.groupstudy.com/list/cisco.html
Report misconduct and Nondisclosure violations to [EMAIL PROTECTED]



RE: Built-in CSU capacity [7:60022]

2003-01-01 Thread [EMAIL PROTECTED]
Hi all,

There is a rule called the "75 percent rule". By default, the router permits
user traffic to use only 75% of the link speed; the remaining part is left
for Layer 2 keepalives and routing updates. You can override that value by
entering the "max-reserved-bandwidth" command under the interface, where the
command parameter is the percentage of the link you want to be usable. For
example, you can enter "max-reserved-bandwidth 95" to let your traffic use
95% of the link speed.
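
For context, a quick sketch of the numbers that command controls (the amount
of interface bandwidth QoS features are allowed to reserve), assuming the
interface bandwidth is a full T1's payload rate:

    # Bandwidth reservable by QoS features at the default 75% versus an
    # overridden 95% (assuming the interface bandwidth is 1.536 Mbps).
    interface_bps = 1_536_000
    for pct in (75, 95):
        print(f"max-reserved-bandwidth {pct}: {interface_bps * pct // 100} bps")
    # max-reserved-bandwidth 75: 1152000 bps
    # max-reserved-bandwidth 95: 1459200 bps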

Happy new year from Turkey.

Best regards.
Erdem Haseki

-Original Message-
From: Brett Johnson [mailto:[EMAIL PROTECTED]] 
Sent: Tuesday, December 31, 2002 6:09 PM
To: [EMAIL PROTECTED]
Subject: Built-in CSU capacity [7:60022]


I am running a test on two 3660 routers with multiple CSU cards and
cross-over t1 cables between the two routers.  I am unable to exceed 75%
capacity on any t1 no matter how much data I pump into the router.  Below is
a sample config for one of the interfaces, the rest are duplicates with
different addresses:

  
controller t1 1/0
  framing esf
  clock source internal
  channel-group 0 timeslots 1-24 speed 64

interface serial 1/0:0
  ip address 10.0.0.1 255.255.255.0
  encapsulation ppp
  no ip route cache
  no ip mroute cache

ip route 0.0.0.0 0.0.0.0 serial 1/0:0

Is there a way to use the full bandwidth (CEF, 7200 router with CEF and
multiport CSU, external CSUs...) or is this a limit of the hardware and setup?

Thank you,

Brett Johnson




Message Posted at:
http://www.groupstudy.com/form/read.php?f=7&i=60095&t=60022
--
FAQ, list archives, and subscription info: http://www.groupstudy.com/list/cisco.html
Report misconduct and Nondisclosure violations to [EMAIL PROTECTED]