Sorry, I'm currently out of the office.  

If this is concerning a UCA Information Technology matter that requires 
immediate attention, please contact the IT HelpDesk at 450-3107 or 
[email protected].

If your message is personal or non-urgent, I'll reply as soon as I can.

Wayne

>>> Darren Coleman <[email protected]> 11/25/13 16:26 >>>


Hi,

        Also be aware of the 10G performance characteristics of DFE linecards. 
The data is internally handled as nine 1 Gbps paths per 10GBase port. For a 
single 10GBase port, three of its 1 Gbps paths are allocated to each packet 
processor, for a total of nine paths across three processors.
        Also, the C3 has only light buffering, which can cause packet loss 
when a 10Gb server is sending to multiple 1Gb clients. This is where buffering 
becomes critical: the C3 will start to drop packets once it runs out of 
buffers. The 10Gb server's performance will then suffer from all the 
retransmissions, and the effective rate will not be 10Gb. But if you run a 
test sending a 10Gb stream straight through the 10Gb ports on a C3, you will 
see full 10Gb performance.
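
        A rough way to see why light buffering hurts in that fan-out case: 
each 1Gb egress port drains ten times slower than the 10Gb ingress can burst, 
so the per-port buffer absorbs the difference until it overflows. A minimal 
back-of-envelope sketch in Python (the buffer size is an assumed illustrative 
figure, not a published C3 specification):

    # Back-of-envelope sketch of the 10G -> 1G fan-out problem.
    # The buffer size below is an assumption for illustration only.
    ingress_bps = 10e9          # burst arrives at 10 Gbps
    egress_bps = 1e9            # each client port drains at 1 Gbps
    buffer_bytes = 256 * 1024   # assumed per-port buffering

    fill_rate_bps = ingress_bps - egress_bps   # net rate the buffer fills
    time_to_drop = buffer_bytes * 8 / fill_rate_bps
    print("buffer overruns after ~%.0f microseconds" % (time_to_drop * 1e6))

At these assumed numbers the buffer overruns after roughly 233 microseconds of 
sustained burst, after which drops (and the TCP retransmissions) begin.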

        I'm unsure of the performance of C5s. However, if you intend to run 
10G in your Data Centre: we have been migrating services from N7s connected 
via 2 x 10G LAGs to chassis-bonded 7100s running the same configuration.


Title: Elaboration of DFE Performance Expectations
Article ID: 1667
Technology: switching
Previous ID: ent21567
Products: DFE

Cause
The DFE's performance expectations are generally stated as:

- "packet forwarding rate" = 13.5 million packets per second, per module
- "switching capacity" = 18 gigabits per second, per module
- "backplane capacity" = 20 gigabits per second, per link
This article elaborates upon the standard information set, to explain what is 
meant by these statements and to suggest ways in which port selection may 
optimize bandwidth use.

Important point: miscellaneous factors dictate that the numbers stated in this 
document, though substantially correct, are approximations only.

Solution
Packet forwarding rate = 13.5 Mpps [1] per module = 4.5 Mpps per packet 
processor [2]. 
This is how fast the forwarding decisions can be made for received traffic. 
Assuming 64-byte packets [3], this would forward 7.776 Gbps of incoming data 
streams. 
Assuming 75-byte packets, this would forward 9.000 Gbps of incoming data 
streams. 
Assuming 1518-byte packets, this would exceed the module's switching capacity.
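
The three figures above follow directly from the 13.5 Mpps rate and the 
framing overhead of note [3] (8 bytes of preamble + SFD per frame, 
InterFrameGap ignored). A minimal sketch of the arithmetic:

    # Forwarded bandwidth at the module's 13.5 Mpps forwarding rate.
    # Per note [3]: add 8 bytes preamble+SFD, ignore the InterFrameGap.
    FORWARDING_RATE_PPS = 13.5e6   # per module

    def forwarded_gbps(frame_bytes):
        wire_bits = (frame_bytes + 8) * 8
        return FORWARDING_RATE_PPS * wire_bits / 1e9

    print(forwarded_gbps(64))    # 7.776 Gbps
    print(forwarded_gbps(75))    # ~8.96 Gbps, stated as 9.000 above
    print(forwarded_gbps(1518))  # ~164.8 Gbps, far beyond 18 Gbps capacity

Note the 75-byte result computes to ~8.96 Gbps; the 9.000 figure above 
reflects the approximation caveat stated at the top of the article.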

Switching capacity = 18 Gbps per module = 6 Gbps per packet processor. 
This is the bandwidth for port-specific combined receive & transmit functions. 
Assuming 64-byte packets, this would exceed the module's forwarding rate [4]. 
Assuming 75-byte packets, this would match the module's forwarding rate [4], 
yielding Line Speed for nine gigabit ports of Full Duplex traffic. 
Assuming 1518-byte packets, this would be 1.474442 Mpps (0.737221 Mpps x 2).
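
The 1518-byte figure is the same arithmetic applied to the one-way half of 
the capacity (9 Gbps each direction out of the 18 Gbps combined):

    # Switching capacity at 1518-byte frames: 9 Gbps each direction.
    wire_bits = (1518 + 8) * 8              # 12208 bits per frame on the wire
    one_way_mpps = 9e9 / wire_bits / 1e6    # ~0.737221 Mpps
    print(one_way_mpps, 2 * one_way_mpps)   # ~0.737221 and ~1.474442 Mpps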

FTM2 backplane capacity = 20 Gbps per Link (10 Gbps Rx, 10 Gbps Tx). 
This is the bandwidth for combined receive & transmit functions over a single 
FTM2 link. 
Assuming balanced Full Duplex traffic between one module pair, this exceeds the 
modules' switching capacity. 
On an N7 chassis there are 21 of these individual slot-to-slot links (an N5 
has 10, an N3 has 3).
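
Those per-chassis counts are consistent with a full mesh of dedicated 
slot-to-slot links, i.e. n*(n-1)/2 link pairs for n slots (an observation 
drawn from the quoted figures, not a statement made by the article itself):

    # Slot-to-slot link counts as a full mesh: n*(n-1)/2.
    for name, slots in (("N7", 7), ("N5", 5), ("N3", 3)):
        print(name, slots * (slots - 1) // 2)   # N7=21, N5=10, N3=3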

The following chart defines which ports are allocated to individual packet 
processors. Optimized performance may be attained by considering the port 
grouping within each packet processor. For instance, you may wish to connect 
no more than three "power users" to any given gigabit port group, and in 
extreme cases may wish to leave the remaining ports in that group unused, 
removing local bandwidth oversubscription considerations for those users. For 
balanced Full Duplex traffic, backplane utilization should not be a factor.

                                                                Switching Capacity
 Model #    Description              Port #s by Packet Processor   Oversubscribed? [5]

 2G2072-52  48 10/100/1000, 4 MGBIC  01-12,25-36  13-24,37-48  49-52            Y
 7G4202-30  30 10/100/1000           01-10        11-20        21-30            Y
 4G4202-60  60 10/100/1000           01-20        21-40        41-60            Y
 7G4202-60  60 10/100/1000           01-20        21-40        41-60            Y
 4G4202-72  72 10/100/1000           01-12,25-36  13-24,37-48  49-72            Y
 7G4202-72  72 10/100/1000           01-12,25-36  13-24,37-48  49-72            Y
 4G4205-72  72 10/100/1000           01-12,25-36  13-24,37-48  49-72            Y
 7G4205-72  72 10/100/1000           01-12,25-36  13-24,37-48  49-72            Y
 7G4270-09   9 MGBIC                 01-03        04-06        07-09
 7G4270-10  10 MGBIC                 01-03        04-06        07-09  10
 7G4270-12  12 MGBIC                 01-04        05-08        09-12            Y
 7G4280-19  18 MGBIC, NEM            01-06        07-12        13-18  19-24[6]  Y
 4G4282-41  40 10/100/1000, NEM      01-20        21-40        41-46[6]         Y
 7G4282-41  40 10/100/1000, NEM      01-20        21-40        41-46[6]         Y
 4G4282-49  48 10/100/1000, NEM      01-12,25-36  13-24,37-48  49-54[6]         Y
 7G4282-49  48 10/100/1000, NEM      01-12,25-36  13-24,37-48  49-54[6]         Y
 4G4285-49  48 10/100/1000, NEM      01-12,25-36  13-24,37-48  49-54[6]         Y
 7G4285-49  48 10/100/1000, NEM      01-12,25-36  13-24,37-48  49-54[6]         Y
 4H4202-72  72 10/100                01-12,25-36  13-24,37-48  49-72
 7H4202-72  72 10/100                01-12,25-36  13-24,37-48  49-72
 4H4203-72  72 10/100                01-12,25-36  13-24,37-48  49-72
 7H4203-72  72 10/100                01-12,25-36  13-24,37-48  49-72
 4H4282-49  48 10/100, NEM           01-12,25-36  13-24,37-48  49-54[6]
 4H4283-49  48 10/100, NEM           01-12,25-36  13-24,37-48  49-54[6]
 4H4284-49  48 10/100, NEM           01-12,25-36  13-24,37-48  49-54[6]
 7H4284-49  48 10/100, NEM           01-12,25-36  13-24,37-48  49-54[6]
 4H4285-49  48 10/100, NEM           01-12,25-36  13-24,37-48  49-54[6]
 7H4382-25  24 10/100, NEM           01-12,13-24  25-30[6]
 7H4382-49  48 10/100, NEM           01-12,25-36  13-24,37-48  49-54[6]
 7H4383-49  48 10/100, NEM           01-12,25-36  13-24,37-48  49-54[6]
 7H4385-49  48 10/100, NEM           01-12,25-36  13-24,37-48  49-54[6]
 7K4290-02   2 10GBase               see below [7]                              Y
[1] This rate assumes the connection is already programmed into hardware 
(5115). Up to 126,000 flows per standard module (42,000 per packet processor) 
may be set up (programmed into hardware) per second.

[2] As is discernible in the above chart, all modules have a maximum of three 
packet processors, except the 7G4270-10 which has four, the 7G4280-19 which 
has four when a NEM is installed, and the 7H4382-25 which has two when a NEM 
is installed. The throughput calculations assume three packet processors.

[3] These calculations assume 8 bytes for the preamble and Start Frame 
Delimiter, present on the wire but not part of the stated frame size. The 
InterFrameGap is not considered herein.

[4] This assumes the local packet processor made the forwarding decision for 
the traffic being transmitted, which is not the case for traffic received on 
another port group (possibly on another module) in the System.

[5] Switching capacity could be exceeded by the use of more than three Full 
Duplex gigabit ports per packet processor on any module flagged above as 
"oversubscribed". 
Possible workaround: an equivalent switching capacity would be utilized by six 
unidirectional gigabit ports per packet processor, with the incoming unicast 
traffic being directed to another packet processor (port group) for 
transmission. One possible application of this information is server backups, 
which tend to be data-intensive in only one direction. As long as the average 
frame size is 150 bytes or greater, this would not exceed the 4.5 Mpps 
forwarding rate for the local ingress packet processor, and thus Line Speed 
could be attained unidirectionally on all six gigabit ports. Also consider 
conditions at each of the egress packet processors, and backplane utilization 
if the traffic is destined to a separate module in the System.
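
To make the note [5] budget concrete, here is a sketch of both cases: the 
full-duplex limit of three gigabit ports per processor, and the ingress pps 
check for the six-port unidirectional workaround (the 200-byte and 64-byte 
frame sizes are illustrative choices, not figures from the article):

    # Note [5] arithmetic: 6 Gbps switching capacity and a 4.5 Mpps
    # forwarding budget per packet processor.
    PROCESSOR_GBPS = 6.0
    PROCESSOR_PPS = 4.5e6

    # Full duplex: each gigabit port consumes 2 Gbps of combined Rx+Tx,
    # so three ports exactly fill one processor's 6 Gbps.
    print(3 * 2.0 <= PROCESSOR_GBPS)          # True: three ports fit

    # Unidirectional workaround: six ports ingressing 6 Gbps total.
    def ingress_pps(avg_frame_bytes):
        return 6e9 / ((avg_frame_bytes + 8) * 8)

    print(ingress_pps(200) <= PROCESSOR_PPS)  # True  (~3.6 Mpps)
    print(ingress_pps(64) <= PROCESSOR_PPS)   # False (~10.4 Mpps): small
                                              # frames exhaust the pps budget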

[6] Modules which host a "NEM" (Network Expansion Module) have use of their 
higher port numbers if a NEM (e.g. 7G-6MGBIC) is installed, giving access to 
its ports and its onboard packet processor.

[7] The usable bandwidth per 10GBase port (1669) will not exceed 9 Gbps 
unidirectionally. The data is internally handled as nine 1 Gbps ports per 
10GBase port. Considering a single 10GBase port, three of its 1 Gbps paths are 
allocated to each packet processor, for a total of nine paths on three 
processors. This results in no oversubscription (as explained above), assuming 
that the 10GBase data stream is such that it may be evenly allocated to each 
of the nine available paths. The second 10GBase port uses the same packet 
processors in the same manner, sharing the bandwidth with the first 10GBase 
port. Since the two 10GBase ports can at best attain an aggregate of 9 Gbps 
(18 Gbps with Rx and Tx combined), the use of two 10GBase ports on one 
7K4290-02 is a potentially oversubscribed scenario.
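
The note [7] ceiling falls straight out of the internal striping: nine 1 Gbps 
paths shared by both front-panel 10GBase ports. A trivial sketch of that 
ceiling (the even split in the last line is an assumption, per the caveat 
above about evenly allocated streams):

    # Note [7]: nine internal 1 Gbps paths cap one-way throughput.
    paths, path_gbps, ports = 9, 1.0, 2
    one_way_cap = paths * path_gbps   # 9 Gbps, shared by both 10GBase ports
    print(one_way_cap)                # 9.0 Gbps aggregate, one way
    print(one_way_cap / ports)        # 4.5 Gbps each if both ports saturate
                                      # and traffic splits evenly (assumed)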

For a discussion of Packet Buffer distribution, please refer to 1668.

Cheers

Darren
> James,
>       I have used both methods and there are advantages and disadvantages to 
> each depending on the details of your situation. If all things are equal I 
> would recommend using LACP, the primary reason being simplicity of ongoing 
> maintenance/management. I had a situation where I needed to add a few more 
> VLANs to my network and realised that this would change the hash value 
> (configuration digest) of my MSTP region. I had to create the VLANs on every 
> switch in the MSTP system to make it re-converge correctly (even though the 
> VLANs were not required in all locations). This all turned out OK in the 
> end, but in the process I managed to badly break the entire system as the 
> MSTP trees fragmented while the new VLANs were being added. Luckily I was 
> doing this during a maintenance window!!! By contrast, adding a new VLAN on 
> a LACP system is relatively painless (just make sure that you egress the new 
> VLAN(s) on both the LAG port and the underlying physical ports).
> 
> 
> Geoff Smith
> Technical Administrator
> Information, Communications & Technology
> Toowoomba Service Centre
> T: 07 4688 6649
> F: 07 4631 9174
> M: 0417 006 504
> E: [email protected]
> W: http://www.toowoombaRC.qld.gov.au/
> Toowoomba Regional Council
> PO Box 3021, Toowoomba Village Fair QLD 4350
> -----Original Message-----
> From: James Andrewartha [mailto:[email protected]] 
> Sent: Monday, 25 November 2013 11:58 AM
> To: Enterasys Customer Mailing List
> Subject: [enterasys] LAG vs MSTP for redundant switch links
> 
> Hi list,
> 
> I'm setting up dual fibre links between our core and edge switches, and was 
> pondering whether to set up link aggregation or use MSTP to balance the 
> traffic across the links. My primary concern is redundancy, not extra 
> bandwidth (I just checked Cacti, and most links don't sustain 100 Mb/s).
> How do MSTP and LACP compare for failover times?
> 
> The other thing about LACP is the config overhead of having to set the 
> aadminkey on the physical and LAG ports, plus ensuring the VLANs match.
> With MSTP, by contrast, I don't have to worry about the LAG port, and can 
> just set a port priority on the SID to balance the traffic if required.
> 
> I'm leaning towards MSTP, but every man and his dog seems to have a 
> spanning-tree meltdown story. All the switches will be Enterasys (S4 at the 
> core, B3/B5 at the edge) so you'd think it should all work fine.
> Opinions?
> 
> Thanks,
> 
> --
> James Andrewartha
> Network & Projects Engineer
> Christ Church Grammar School
> Claremont, Western Australia
> Ph. (08) 9442 1757
> Mob. 0424 160 877
> 

----------------------------------------------------
Darren Coleman | Network Support Manager | Information Technology Services - 
Networks and Communications | Building #56, The Australian National University, 
Canberra, ACT, 0200, Australia | E: [email protected] | T: +61 2 6125 
4627 | F: +61 2 6125 8199 | W: http://information.anu.edu.au

CRICOS Provider #00120C


---
To unsubscribe from enterasys, send email to [email protected] with the body: 
unsubscribe enterasys [email protected]
