Hi,
I am exploring the haproxy code to understand how its high performance
numbers are achieved. Could you please clarify a few queries? If this
is not the correct alias, please let me know.
I am reproducing some information from http://haproxy.1wt.eu/ and
marking my queries with ">>":
- O(1) event checker on systems that allow it (currently only
Linux with HAProxy 1.2), allowing instantaneous detection of any
event on any connection among tens of thousands.
>> Is this achieved by using epoll?
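
For context, here is a rough sketch of what I assume an O(1),
epoll-based event checker looks like. The function name and sizes are
my own for illustration, not taken from the haproxy sources:

    #include <sys/epoll.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Sketch: epoll_wait() returns only the fds that are ready, so the
     * cost per wakeup scales with the number of active events, not with
     * the total number of watched connections. */
    int watch_and_wait(int listen_fd)
    {
        struct epoll_event ev, events[64];
        int epfd = epoll_create(1024); /* size hint, ignored by modern kernels */
        int n, i;

        if (epfd < 0)
            return -1;

        ev.events = EPOLLIN;
        ev.data.fd = listen_fd;
        if (epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev) < 0) {
            close(epfd);
            return -1;
        }

        n = epoll_wait(epfd, events, 64, 1000 /* ms */);
        for (i = 0; i < n; i++)
            printf("fd %d is ready\n", events[i].data.fd);

        close(epfd);
        return n;
    }
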
- event aggregation : timing resolution is adapted to match the
system scheduler's resolution. This allows many events to be
processed at once without having to sleep when we're sure that we
would have woken up immediately. This also leaves a large performance
margin with virtually no degradation of response time when the CPU
usage approaches 100%.
>> Could you please point me to the place in the code where this adaptation is done?
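
To check that I understand the idea, here is how I imagine rounding a
timeout up to the scheduler's resolution could work. This is purely my
own sketch (including the assumed HZ value), not haproxy code:

    /* If the system scheduler ticks every 1000/HZ ms, there is no point
     * asking poll/epoll to wake us up with finer granularity, so the
     * timeout is rounded up to a whole tick.  All events whose expiry
     * falls within the same tick can then be handled in one wakeup. */
    #define SCHED_RES_MS  (1000 / 100)   /* assuming HZ = 100, i.e. 10 ms ticks */

    static int round_timeout_ms(int timeout_ms)
    {
        if (timeout_ms <= 0)
            return 0;
        /* round up to the next multiple of the scheduler resolution */
        return ((timeout_ms + SCHED_RES_MS - 1) / SCHED_RES_MS) * SCHED_RES_MS;
    }
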
- reduced footprint for frequently and randomly accessed memory
areas such as the file descriptor table which uses 4 bitmaps. This
reduces the number of CPU cache misses and memory prefetching time.
>> The "file descriptor table" referred above is kernel's table ? Or
is it the fdtab memory in haproxy.c ? How is reduced footprint achieved ?
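
What I imagine by "4 bitmaps" is something like one bit per fd for each
state, so that the frequently scanned data stays within a few cache
lines. This is only my guess at the layout, not the actual haproxy
structures:

    /* Guessed illustration: with one bit per fd, the state of 4096 fds
     * fits in 512 bytes per bitmap, so scanning a bitmap touches only a
     * handful of cache lines compared to an array of per-fd structs. */
    #define MAX_FDS   4096
    #define FD_WORDS  (MAX_FDS / (8 * sizeof(unsigned long)))

    static unsigned long fd_want_read[FD_WORDS];
    static unsigned long fd_want_write[FD_WORDS];
    static unsigned long fd_ready_read[FD_WORDS];
    static unsigned long fd_ready_write[FD_WORDS];

    static void fd_set_bit(unsigned long *map, int fd)
    {
        map[fd / (8 * sizeof(unsigned long))] |=
            1UL << (fd % (8 * sizeof(unsigned long)));
    }

    static int fd_test_bit(const unsigned long *map, int fd)
    {
        return !!(map[fd / (8 * sizeof(unsigned long))] &
                  (1UL << (fd % (8 * sizeof(unsigned long)))));
    }
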
- kernel TCP splicing
>> I searched for "splice" to find the code which makes use of
kernel TCP splicing. Is kernel TCP splicing not used in haproxy.c?
Please clarify.
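
In case it helps clarify what I was looking for, this is the kind of
splice()-based forwarding I expected to find. It is a rough sketch of
my own (the function name and buffer size are assumptions), not code
taken from haproxy:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <unistd.h>

    /* Sketch of zero-copy forwarding: data goes from the source socket
     * into a pipe and from the pipe to the destination socket, without
     * being copied into user-space buffers. */
    ssize_t splice_forward(int from_fd, int to_fd)
    {
        int pipefd[2];
        ssize_t in, out;

        if (pipe(pipefd) < 0)
            return -1;

        in = splice(from_fd, NULL, pipefd[1], NULL, 65536,
                    SPLICE_F_MOVE | SPLICE_F_NONBLOCK);
        if (in > 0)
            out = splice(pipefd[0], NULL, to_fd, NULL, in,
                         SPLICE_F_MOVE | SPLICE_F_NONBLOCK);
        else
            out = in;

        close(pipefd[0]);
        close(pipefd[1]);
        return out;
    }
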
Thanks,
Babu