Interesting question!

Let's go through how it works and see, in theory, what we might expect to
happen.

The first packet to a destination is always process switched, so first
packets should be evenly distributed between the interfaces.  But E1 is
fast switching, so once a destination lands in its cache, all subsequent
packets to that destination will traverse E1.  What I suspect is that the
second packet of a stream whose first packet took E0 will traverse E1,
which will cache the destination, and all subsequent packets will
traverse E1.

So even though E0 is used for first packets to a destination, E1 will get
the second packet, add the destination to its cache, and ALL streams will
end up using E1, effectively stealing everything from E0.  From the second
packet onward every stream would traverse E1; E0 will barely be used.

No, that's not 100% correct.  The process engine doesn't care about the
destination; it switches whatever is next in the queue.  A stream (let's
call it Bob) could stay on E0, but as the packets are dequeued, the packet
immediately prior to each Bob packet would have to be sent to E1, and
you've got a 50/50 chance of that happening each time.  So this becomes a
straightforward Prob & Stat exercise: flipping a coin.  While the odds are
50/50 for any individual packet, the stream's probability is the
aggregation of all its preceding packets.  Can you flip a coin and come up
heads 100 times in a row?  Yes, but it is unlikely.  The more streams, the
more coins that are flipped, and the more likely _a_ stream will be sent
to E1.

I think what we would see, if there were 256 streams, is something similar to:
1st packet: 128 go to E0, 128 go to E1
2nd packet:  64 go to E0, 192 go to E1 (128 1st + 64 2nd)
3rd packet:  32 go to E0, 224 go to E1 (128 1st + 64 2nd + 32 3rd)
4th packet:  16 go to E0, 240 go to E1 (128 1st + 64 2nd + 32 3rd + 16 4th)
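
The halving pattern above is easy to check with a quick Monte Carlo
sketch.  This is just an illustration of the coin-flip argument, not real
router behavior: the 50/50 per-packet assumption and the names here are
mine.

```python
import random

def simulate(streams=256, rounds=8, rng=random):
    """Count how many streams are still on E0 after each packet round.

    Assumption: each packet of a stream still on E0 has a 50/50 chance
    of being dequeued onto E1; once a packet crosses E1, fast switching
    caches the destination and the stream stays on E1 for good.
    """
    on_e0 = streams
    history = []
    for _ in range(rounds):
        # Flip a coin for every stream still living on E0.
        on_e0 = sum(1 for _ in range(on_e0) if rng.random() < 0.5)
        history.append(on_e0)
    return history

random.seed(1)
print(simulate())  # roughly halves each round: ~128, ~64, ~32, ...
```

With enough streams the counts track the expected 128, 64, 32, 16
progression, give or take sampling noise.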

So the probability that a stream would traverse and stay on E0 to its
completion would be computed as p = 100/(2^n), where "p" is the percentage
probability (how many out of 100) and "n" is the number of packets in the
stream (i.e., its length).  This doesn't take into account the case where
the stream count is 0.
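
Plugging a few stream lengths into that formula (the function name is
mine, just for illustration):

```python
def stay_on_e0_pct(n):
    """p = 100 / 2^n: percent chance an n-packet stream never leaves E0."""
    return 100 / (2 ** n)

for n in (1, 2, 4, 8):
    print(f"n={n:2d}: {stay_on_e0_pct(n):.4f}%")
# n= 1: 50.0000%
# n= 2: 25.0000%
# n= 4: 6.2500%
# n= 8: 0.3906%
```

So a stream only four packets long already has just a 6.25% chance of
finishing on E0, which matches the table above.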

Of course that's my theory.  Anyone have time to bench and test it?

Rodgers Moore, CCDP, CCNP-Security
Design and Security Consultant
Data Processing Sciences, Corp.

"luobin Yang" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> Hi, group,
>
> I have a question I'm quite confused about. I learned that per-packet
> load-balancing is used when process-switching is enabled and
> per-destination load-balancing is used when fast-switching is enabled.
>
> My question is: if there are two equal-cost routes between RouterA and
> RouterB, let's say the interfaces are E0 and E1, and I enable
> process-switching on E0 and fast-switching on E1, which load-balancing
> is used in this situation?
>
> Hope I can get some answers.
> Luobin
>


**NOTE: New CCNA/CCDA List has been formed. For more information go to
http://www.groupstudy.com/list/Associates.html
_________________________________
UPDATED Posting Guidelines: http://www.groupstudy.com/list/guide.html
FAQ, list archives, and subscription info: http://www.groupstudy.com
Report misconduct and Nondisclosure violations to [EMAIL PROTECTED]
