On Thu, 13 Jul 2006 09:05:18 EDT, Paul Moore said:

> > >   C_NoCat    932.30          954.04          9904.58     10106.00
> >
> > Not bad - this measures just our infrastructure.. And it's certainly
> > non-zero but probably within the realm of tolerable for sites that need
> > CIPSO.

Quick summary for those who don't want to read further into the numbers
(the columns being tcp_stream, udp_stream, tcp_rr, and udp_rr) - most of
them look very good.

> I think we can attribute the bulk of the slowdown in the C_NoCat case to the 
> extra 12 bytes (no categories, it would be 40 bytes with full categories) of 
> the CIPSO IP option ...
> 
>    932.30 / 941.52 = 99.02%   (CIPSO throughput as a percentage of baseline)
>         12 / 1500  =  0.80%   (CIPSO option as a percentage of max packet length)
>   ----------------------------
>                       99.82%

OK, looking closer at the *full* results, it makes sense. ;)
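
In fact, redoing that arithmetic against the MSS instead of the raw MTU
(assuming the 941.52 figure is the no-CIPSO tcp_stream baseline and the
usual 40 bytes of TCP/IP headers on a 1500-byte MTU) comes out in about
the same place:

   payload/packet without CIPSO:   1500 - 40      = 1460 bytes
   payload/packet with CIPSO:      1500 - 40 - 12 = 1448 bytes
   expected throughput ratio:      1448 / 1460    = 99.18%
   observed throughput ratio:      932.30 / 941.52 = 99.02%

so the part we can't pin on the extra bytes on the wire is still well
under a quarter of a percent.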

And now I see why the tcp_rr and udp_rr numbers dropped about 10% - those
tests use a 1-byte payload, so it's 12 extra bytes on top of a 60-byte or
so header.  Again, it seems we're getting it basically "for free" compared
to any *other* IP option of the same length..
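
Back of the envelope, taking that ~60 bytes of headers per 1-byte-payload
packet at face value:

   12 / 60 = 20% more bytes on the wire per packet

and the transaction rate only dropped ~10%, which is about what you'd
expect given that an rr transaction is bounded by round-trip latency and
per-packet processing rather than purely by bytes on the wire.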

(On my first reading I thought the throughput and rr numbers came from the
same run - they're two different tests, bulk stream throughput vs. a
1-byte request/response transaction rate, and that matters.)

> ... which means once we take into account the inherent limitations of the 
> CIPSO protocol there is only a 0.18% slowdown when using CIPSO to transfer 
> sensitivity levels.  That seems reasonable to me.

Yep, the only measurable cost at least this far down seems to be the
length of the header on the wire...

> Feel free to poke holes in my logic - I am neither a statistician nor
> a "performance guy".

No, the logic seems fine.. :)

> > >   C_FlCat    625.46          935.52          9110.29      9262.92
> > >   C_F_LxV    686.46          935.53          9325.37      9484.93
> >
> > Any idea why the tcp_rr only dropped about 14%, but tcp_stream dropped 30%?
> > I'd expect the rate to be more sensitive to it, because the testing is
> > per-packet, not per-KB?

Given the 1-byte payload on the tcp_rr and udp_rr tests, the extra 40
bytes of CIPSO option in the IP header explain the drop down to the low
9K, and the udp_stream numbers line up fine too.
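
For comparison, going from the no-category option to full categories adds
28 bytes per packet, and the rr rates move roughly in step with that:

   tcp_rr:  9110.29 /  9904.58 = 92.0%
   udp_rr:  9262.92 / 10106.00 = 91.7%

i.e. roughly another 8% for 28 more bytes, so the scaling with option
size looks plausible.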

There's still something unexplained about that 625 for tcp_stream on C_FlCat.
Was either box hitting CPU saturation at that point?

 
> If people really feel that detailed analysis of this test is important for 
> acceptance let me know and I'll see what I can do.

It probably doesn't need to be *much* more detailed - it's good coverage
of tests, and most of the numbers are within statistical noise of "the
best we can possibly do while carrying a CIPSO header".  I think that once
we figure out what happened with C_FlCat, saying "Performance has been
tested and found not to be an issue" and adding a URL pointing to this
thread should be good enough.

> > >   C_F_NoC    328.69          935.53          6258.61      6415.35
> >
> > I tuned in late - are there any real configurations where a site would
> > actually want cipso_cache_enable=0 set?  Or is this an indication that
> > the option needs to be nailed to 1?
> 
> It was more for my own curiosity than anything else; I just thought I 
> would throw it in here in case others were curious too.  Basically, I have 
> always asserted that a CIPSO label cache would have a huge benefit in terms 
> of receive side performance but I never had any numbers to back it up - now I
> do.

On this one we're obviously getting CPU-bound or something, and that's why
the numbers fell through the floor...

OK.. that wouldn't be the first debugging knob with cruddy performance the
kernel has sprouted.  How much memory does the cache use on a per-connection
basis?  We're already carrying a number of slab entries and rcvbufs and
the like around per connection - unless I'm insufficiently caffeinated, it
looks to be impossible to open an IPv4 TCP connection without burning around
128K of memory (assuming sane buffer sizes for anything over 10Mbit).

I'm suspecting the right answer here is "create a slab for it and move on" ;)
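
Something like the usual slab pattern would do it - purely a sketch, with
the entry layout and names made up for illustration (not the actual
cipso_ipv4.c definitions), and the kmem_cache_create() arguments being
whatever the tree you're building against wants:

    #include <linux/types.h>
    #include <linux/list.h>
    #include <linux/slab.h>
    #include <net/netlabel.h>       /* struct netlbl_lsm_secattr */

    /* one cached mapping from raw CIPSO option bytes to a decoded label */
    struct cipso_cache_entry {
            u32 hash;                          /* hash of the raw option bytes */
            unsigned char *key;                /* copy of the option, for compares */
            size_t key_len;
            struct netlbl_lsm_secattr secattr; /* the decoded security attributes */
            struct list_head list;             /* hash bucket chaining */
    };

    static struct kmem_cache *cipso_cache_slab;

    static int __init cipso_cache_slab_init(void)
    {
            cipso_cache_slab = kmem_cache_create("cipso_cache_entry",
                                                 sizeof(struct cipso_cache_entry),
                                                 0, SLAB_HWCACHE_ALIGN, NULL);
            return cipso_cache_slab ? 0 : -ENOMEM;
    }

    /* ...and the hot-path allocations become kmem_cache_zalloc() /     */
    /* kmem_cache_free() against cipso_cache_slab instead of kmalloc(). */

That keeps the per-entry cost down to one slab object plus the copied
option bytes, which is noise next to the 128K we're already spending per
connection.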
