Hello,

Are there any rate limits applied to traffic on the management Ethernet interface of a CRS-3 PRP (Performance Route Processor)? Temporarily changing those limits, if possible, would be very useful for our experiment. I was not able to find related information while searching (Cisco, Google), so any hint is appreciated.

What I am trying to achieve is to load a 100 Gbps circuit between two CRSs to 50% or more. Using MGEN on a laptop with a gigabit Ethernet port and a routing loop, this can probably be done. The problem is that each of the two routers currently has only 100G interfaces, so the only way to inject traffic is through the management Ethernet.

What happens is that traffic on this interface does not exceed the following values (bps and pkts/s), no matter how far I raise the MGEN rate above 40000 UDP pkts/s (each packet 1460 bytes):

  input:  480634000 bps, 40000 pkts/s
  output: 880000 bps, 1000 pkts/s (these are merely ICMP unreachables)

MGEN was run like this:

  mgen event "ON 1 UDP DST 192.168.255.1/5000 PERIODIC [PKTS 1460]"

where PKTS (the per-second packet rate) was 10K, 20K, 40K, 60K and 80K. Traffic on the 100G link grew until it reached, and stayed at, about 15 Gbps for 40K and above. The maximum traffic achieved was 30 Gbps (15 Gbps for each PRP Ethernet interface, two PRPs per router).

Regards,

- --
Valeriu Vraciu
RoEduNet Iasi

_______________________________________________
cisco-nsp mailing list  cisco-nsp@puck.nether.net
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/
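As a back-of-the-envelope check of the figures above (a sketch only; the 8/20/14-byte UDP/IP/Ethernet header overheads are textbook assumptions, not values measured on the CRS):

```python
# Sanity-check the observed management-Ethernet ceiling:
# 40000 pkts/s of 1460-byte MGEN UDP payloads.
PPS = 40_000               # observed packet-rate ceiling
PAYLOAD = 1460             # MGEN UDP payload size, bytes
UDP, IP, ETH = 8, 20, 14   # assumed per-packet header overheads, bytes

payload_bps = PPS * PAYLOAD * 8
frame_bps = PPS * (PAYLOAD + UDP + IP + ETH) * 8

print(payload_bps)  # 467200000
print(frame_bps)    # 480640000 -- very close to the observed 480634000 bps

# Rough GigE ceiling at this frame size (ignoring preamble/IFG):
line_rate_pps = 1_000_000_000 // ((PAYLOAD + UDP + IP + ETH) * 8)
print(line_rate_pps)  # 83222 -- so 40k pkts/s is well below gigabit line rate
```

The observed input rate works out to almost exactly 1502 bytes per packet (payload plus assumed headers), which suggests the cap is a packet-rate limit around 40k pkts/s rather than the laptop's gigabit NIC running out of headroom.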
Are there any limitations (rate limits) for traffic, applied to management Ethernet interface of a CRS3 PRP (Performance Route Processor) ? Temporarily changing those limits, if possible, would be great for our experiment. I was not able to find related information while searching (Cisco, Google), so any hint is appreciated. What I try to achieve is to fill a 100 Gbps circuit between 2 CRSs for 50% or more. Using MGEN on a laptop with gigabit eth and a routing loop this probably can be done. The problem is that each of the 2 routers has at this moment only 100G interfaces, so the only way to inject traffic is through management eth. What happens is that the traffic on this interface does not exceed the following values (bps and pkts/s), no matter how much I increase MGEN parameters above 40000 UDP pkts/s (each packet 1460 bytes): input: 480634000 bps, 40000 pkts/s output: 880000 bps, 1000 pkts/s (these are merely ICMP unreachables) MGEN was run like this: mgen event "ON 1 UDP DST 192.168.255.1/5000 PERIODIC [PKTS 1460]" where PKTS was 10K, 20K, 40K, 60K and 80K. Traffic on 100G link was growing until it reached and remained at about 15 Gbps for 40K and above. Achieved maximum traffic was 30 Gbps (15 Gbps for each PRP eth interface, 2 x PRP on each router). Regards. - -- Valeriu Vraciu RoEduNet Iasi -----BEGIN PGP SIGNATURE----- Version: GnuPG/MacGPG2 v2.0.22 (Darwin) Comment: GPGTools - http://gpgtools.org iEYEARECAAYFAlPp/q0ACgkQncI+CatY949K1QCeKjrqU6fSMbJU/sn97g2WTiT+ u0gAniWXCvPSm1NGMiy9EMC9LvMFd/JF =igVR -----END PGP SIGNATURE----- _______________________________________________ cisco-nsp mailing list cisco-nsp@puck.nether.net https://puck.nether.net/mailman/listinfo/cisco-nsp archive at http://puck.nether.net/pipermail/cisco-nsp/