On Wed, 12 Jul 2006 16:45:44 EDT, Paul Moore said:
> While I have been doing some casual performance testing of the
> NetLabel patch, I have never posted anything to the list, so for the
> first time here are some NetLabel/CIPSO numbers ...
Many thanks..

>               (in 10^6 bits/sec)      (rate / sec)
> TEST       tcp_stream  udp_stream   tcp_rr     udp_rr
> =================================================================
> NoPatch    941.52      961.61       10778.58   10901.03
> Disable    941.53      961.60       10814.46   11129.77
> Unlabel    941.51      961.61       10769.00   10896.26

The fact that the first 3 are all within noise of each other is a good
sign. (I'm using Disable-NoPatch as a rough indication of noise...)

> C_NoCat    932.30      954.04       9904.58    10106.00

Not bad - this measures just our infrastructure. It's certainly
non-zero, but probably within the realm of the tolerable for sites
that need CIPSO.

A second pass at benchmarking this should probably note whether the
slowdown is primarily a CPU-saturation issue or an added-latency
issue. If it's just the TCP window interacting with a different
bandwidth*RTT product, that has different implications for servicing
multiple connections: it's quite possible for a 1% increase in CPU use
to cost us 5% throughput - but if the CPU is still at only 80%, we can
take on another 20 connections, each seeing the same 5% drop (and yes,
I'm glossing over the queuing issues of bursty traffic). (A
back-of-the-envelope sketch of the window/RTT arithmetic is in the
P.S. below.)

> C_FlCat    625.46      935.52       9110.29    9262.92
> C_F_LxV    686.46      935.53       9325.37    9484.93

Any idea why tcp_rr only dropped about 14% while tcp_stream dropped
around 30%? I'd have expected the request/response rate to be *more*
sensitive to the labeling overhead, since that test is per-packet
rather than per-KB. (The percentage arithmetic is in the second sketch
below.)

> C_F_NoC    328.69      935.53       6258.61    6415.35

I tuned in late - are there any real configurations where a site would
actually want cipso_cache_enable=0 set? Or is this an indication that
the option needs to be nailed to 1?
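P.S. The window/RTT arithmetic I have in mind, as a minimal C sketch.
The 64KB window and the RTT values are made-up illustrative numbers,
not measurements from these runs; the point is only that a
window-limited connection's throughput scales as window/RTT, so added
per-packet latency shows up directly as lost bandwidth even when the
CPU is idle:

    /* Window-limited TCP throughput: tput <= window / RTT.
     * Window size and RTTs below are hypothetical, for illustration only.
     */
    #include <stdio.h>

    int main(void)
    {
        const double window_bits = 65535.0 * 8.0;          /* classic 64KB window */
        const double rtts_us[] = { 100.0, 110.0, 150.0 };  /* hypothetical RTTs */

        for (int i = 0; i < 3; i++) {
            double rtt_s = rtts_us[i] / 1e6;               /* microseconds -> seconds */
            double cap_mbps = (window_bits / rtt_s) / 1e6;
            printf("RTT %6.1f us -> window-limited cap %8.1f * 10^6 bits/sec\n",
                   rtts_us[i], cap_mbps);
        }
        return 0;
    }

A 10% RTT increase costs roughly 9% of the window-limited cap with no
extra CPU burned at all - which is why the CPU-vs-latency distinction
matters for capacity planning.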
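And the percentage drops I quoted, computed straight from the table
above (C_F_LxV and C_FlCat rows against the NoPatch baseline):

    /* Percentage drops relative to the NoPatch baseline;
     * all numbers copied from the table above.
     */
    #include <stdio.h>

    static double drop_pct(double base, double val)
    {
        return 100.0 * (base - val) / base;
    }

    int main(void)
    {
        printf("C_F_LxV tcp_rr:     %.1f%%\n", drop_pct(10778.58, 9325.37)); /* ~13.5% */
        printf("C_F_LxV tcp_stream: %.1f%%\n", drop_pct(941.52, 686.46));    /* ~27.1% */
        printf("C_FlCat tcp_stream: %.1f%%\n", drop_pct(941.52, 625.46));    /* ~33.6% */
        return 0;
    }

So the stream tests lose roughly twice as much, proportionally, as the
request/response tests - the opposite of what I'd expect if the cost
were purely per-packet.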
