Hi Lukas,

On Thu, May 08, 2014 at 10:20:28AM +0200, Lukas Tribus wrote:
> Hi Willy,
> 
> >> When it uses the private cache, I would also have to change the
> >> configuration to allow ssl sessions over multiple http requests right?
> >
> > No you don't need to change anymore, what Emeric's patch does is to
> > reimplement a hand-crafted spinlock mechanism.
> 
> Two slightly unrelated questions:
> 
> When we do those kinds of workarounds (tcp mode, reconnecting to our own
> HTTP(S) frontend), would using unix sockets be more performant (i.e. cause
> less load) than TCP over the loopback, or are there disadvantages?

For SSL they're much better than plain TCP sockets: the network stack
is lighter, the connection rate is much higher, and there are no ports
to allocate on either side. The two real disadvantages I see with unix
sockets here are:
  - no support for splice(), which means that transferring data between
    processes involves a copy. With SSL that's not an issue since we
    can't splice SSL traffic anyway.

  - having to connect via the filesystem is not always fun to configure
    with chroots. But Linux supports abstract sockets, which avoid this
    trouble entirely (see the sketch below).
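
For the record, the abstract variant differs from a regular unix socket
by a single byte: the name starts with a NUL instead of a path, so
nothing ever appears on the filesystem. A minimal sketch (the function
name is mine, just for illustration):

    #include <stddef.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    static int bind_abstract(const char *name)
    {
        struct sockaddr_un addr;
        socklen_t len;
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);

        if (fd < 0)
            return -1;

        memset(&addr, 0, sizeof(addr));
        addr.sun_family = AF_UNIX;
        /* leading NUL byte = abstract namespace, no filesystem entry */
        addr.sun_path[0] = '\0';
        strncpy(addr.sun_path + 1, name, sizeof(addr.sun_path) - 2);

        /* the address length passed to bind() delimits the name */
        len = offsetof(struct sockaddr_un, sun_path) + 1 + strlen(name);
        if (bind(fd, (struct sockaddr *)&addr, len) < 0) {
            close(fd);
            return -1;
        }
        return fd;
    }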

I used to have a quick'n'dirty patch, a year or two ago, which used
socketpair() to chain a backend to a frontend within the same process,
and it let me compare the performance. All in all, the connection rate
was twice that of TCP, but the data rate was half of it, even though
splice was not used in either case. That might be due to a configurable
limit on the buffer sizes that I had not played with.
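
The trick itself is tiny; a hypothetical sketch of the idea (function
and variable names are mine):

    #include <sys/socket.h>

    /* create a connected pair: *front and *back are the two ends */
    static int chain_fds(int *front, int *back)
    {
        int fds[2];

        /* one call yields two already-connected stream sockets,
         * bypassing the whole listen()/accept()/connect() sequence
         */
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) < 0)
            return -1;

        *front = fds[0];  /* e.g. handed to the frontend side */
        *back  = fds[1];  /* e.g. used as the backend connection */
        return 0;
    }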

I would really love to have unix/abstract sockets to the servers for 1.5
(or even backport them later). There should normally be little work to
do; I think most of it consists of copy-pasting tcp_connect_server()
into a new function pointed to by proto_uxst->connect.
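
In essence, the core of that new function would be little more than a
non-blocking connect() on an AF_UNIX address; everything around it
(connection struct, error classification) would be lifted from the TCP
version. A rough sketch, with the name and the plain string-path
argument being assumptions of mine:

    #include <errno.h>
    #include <fcntl.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    /* hypothetical uxst counterpart to tcp_connect_server() */
    static int uxst_connect_server(const char *path)
    {
        struct sockaddr_un addr;
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);

        if (fd < 0)
            return -1;

        fcntl(fd, F_SETFL, O_NONBLOCK);

        memset(&addr, 0, sizeof(addr));
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 &&
            errno != EINPROGRESS) {
            close(fd);
            return -1;
        }
        /* EINPROGRESS: completion is reported by the poller, just as
         * for TCP connects
         */
        return fd;
    }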

> > I just ran a few tests here and at 4.5k conn/s spread over 4 processes I
> > see that the lock is already held about 1% of the time, which is very low
> > and does not justify using a syscall to sleep.
> 
> Would it be possible to share the SSL cache between two haproxy instances,
> via a "peer" like protocol? I think stud had something similar available.

Yes, it was Emeric who did it for stud. He has a PoC patch lying around
for haproxy as well, but he doesn't feel comfortable merging it: he
told me about a corner case where changing the config (or ciphers) on
one node would render its sessions incompatible with the other nodes,
possibly forcing all users to renegotiate all the time until the
configs are back in sync (or something like this :-)).

One thing that was missing in order to implement the feature cleanly
was the ability to listen on a UDP socket and receive events on it. I
remember he used a few hacks at that time, but now it should be much
easier and cleaner.
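
At the socket level it's nothing more than a bound, non-blocking UDP
fd whose readiness is then watched by the poller; a hedged sketch in
plain POSIX terms (the function name is mine, and the actual event
registration is left out):

    #include <arpa/inet.h>
    #include <fcntl.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* hypothetical: open a UDP socket for cache-sync messages; the
     * returned fd would be registered for read events
     */
    static int open_sync_socket(int port)
    {
        struct sockaddr_in addr;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        if (fd < 0)
            return -1;

        fcntl(fd, F_SETFL, O_NONBLOCK);

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(port);

        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            close(fd);
            return -1;
        }
        return fd;
    }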

> Suppose you have an active/passive setup with VIP failover, starting with
> an empty SSL cache after a failover can be a problem.

Yes, quite clearly. We also observe this during config reloads for the
same reasons (though that case will not be solved by cache
synchronization, since only new entries are broadcast).

Willy