Hi Willy,

On 25-11-2017 at 8:33, Willy Tarreau wrote:
Hi Pieter,

On Tue, Nov 21, 2017 at 04:34:16PM +0100, PiBa-NL wrote:
Hi William,

I was intending to use the new feature to pass open sockets to the next
haproxy process.
And thought that master-worker is a 'requirement' to make that work as it
would manage the transferal of sockets.
Now I'm thinking that's not actually how it's working at all..
I could 'manually' pass the -x /haproxy.socket to the next process and make
it take over the sockets that way, I guess?
Yes it's the intent indeed. Master-worker and -x were developed in parallel
and then master-worker was taught to be compatible with this, but the primary
purpose of -x is to pass FDs without needing MW.
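For example, a seamless reload without master-worker roughly looks like this
(the paths are only illustrative, and the stats socket must expose its
listening FDs, more on that below) :

    # the running process wrote its pid to the pid file and offers a stats
    # socket; the new process retrieves the listening FDs from it before
    # asking the old one to finish
    haproxy -f /etc/haproxy.cfg -p /var/run/haproxy.pid \
            -x /var/run/haproxy.sock -sf $(cat /var/run/haproxy.pid)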
Great, I suppose I'll need to make a few (small) changes to implement this in the package I maintain for pfSense; probably easier than changing it to use master-worker anyhow :).

(How does this combine with nbproc>1 and multiple stats sockets bound to
separate processes?)
There's a special case for this. Normally, as you know, listening FDs not
used in a process are closed after the fork(). Now by simply setting
"expose-fd listeners" on your stats socket, the process running the stats
socket will keep *all* listening FDs open in order to pass them upon
invocation from the CLI. Thus, even with nbproc>1, sockets split across
different processes and a single stats socket, -x will retrieve all
listeners at once.
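To illustrate, a setup like the following ought to work (socket paths and
process numbers are just an example) :

    global
        nbproc 2
        # the socket carrying "expose-fd listeners" keeps *all* listening FDs,
        # even those bound to the other processes
        stats socket /var/run/haproxy1.sock process 1 level admin expose-fd listeners
        stats socket /var/run/haproxy2.sock process 2 level admin

    # then point -x at that socket during the reload :
    haproxy -f /etc/haproxy.cfg -p /var/run/haproxy.pid \
            -x /var/run/haproxy1.sock -sf $(cat /var/run/haproxy.pid)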
Okay, this will work well then :).
I was thinking that if I'm going to do it myself (pass the -x argument), I need to make sure I do it properly.
Though I can imagine a future where the master would maybe provide some
aggregated stats and a management socket to perform server status changes.
Perhaps I should step away from using master-worker for the moment.
I think you don't need it for now and you're right that we'd all like it
to continue to evolve. Simon had done an amazing work on this subject in
the past, making a socket server and stuff like this but by then the
internal architecture was not ready and we faced many issues so we had to
drop it. But despite this there was already a huge motivation in trying
to get this to work. This was during 1.5-dev5! Since then, microservices
have emerged with the need for more common updates, the need to aggregate
information has increased, etc. So yes, I think that the reasons that
motivated us to try this 7 years ago are still present and have been
emphasized over time. Maybe in a few years the master-worker mode will
be the only one supported if it provides extra facilities such as being
a CLI gateway for all processes or collecting stats. Let's just not rush
and use it for what it is for now : a replacement for the systemd-wrapper.
Ok, clear, and thanks for the history involved. I'm not using the systemd-wrapper, so no need for me to use its replacement. I just thought it looked fancy to use and maybe 'future proof', though it is too early to really tell.. no more 'restarting' of processes but just sending a 'reload' request did seem like a better design (though in the background the same restarting of processes still happens..). That, combined with the (wrongful) thought that it was required for socket transfer, made me think: let's give it a try :).
However, the -W -D combination doesn't seem to (fully) work as I expected;
responses below..
As mentioned in the other thread, there's an issue on this and kqueue
that I have no idea about. I'm suspecting an at_exit() doing nasty stuff
somewhere and some head-scratching will be needed (I hate the principle
of at_exit() as it cheats on the stuff you believe when reading it).
Ok, thanks for looking into it. No need to rush, as I can work with rc4 as it is.. at least on my test machine..
I would prefer to both 'catch' startup errors and daemonize haproxy.
In my previous mail I'm starting it with -D, and -W is the equivalent of the
global master-worker option in the config, so it 'should' daemonize, right?
But it did not (properly?); I've just tried with both startup parameters -D -W
and the result is the same.
The master with pid 3061 is running under the system /sbin/init pid 1;
however, pid 2926 also keeps running. I would want/expect 2926 to exit
when startup is complete.
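For reference, the test boils down to something like this (the config path is
just an example) :

    # start with master-worker (-W) and daemon mode (-D)
    haproxy -f /haproxy.cfg -W -D
    # expected : the launching process exits once startup completes and only
    #            the master (pid 3061 here) remains, running under init
    # observed : the launching process (pid 2926) keeps running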

I also just noted that 2926 actually becomes a 'zombie'? That can't be
good, right?
It's *possible* that this process either still had a connection and couldn't
quit, or that a bug made it believe it still had a connection. Given that you
had a very strange behaviour with -D -W, let's consider there's an unknown
issue there for now and that it could explain a lot of strange behaviours.
No 'connections' to the process iirc; it's an isolated test environment, and it seems related to the stdout/stderr output.. I did have one of these handles open to these outputs, as I described in the next mail with the little php example code..

It seems to work properly the way you describe.. (when properly daemonized..)
Sorry for the noise on this part..
No pb, feedback is always very useful, and mistakes in feedback are as
unavoidable as mistakes in the code :-)
Actually it wasn't noise; it was 'quiet' or '-q' that makes all the master-worker processes exit (also without daemon mode).

Unfortunately signals are asynchronous, and we don't have a way yet to return
a bad exit code upon reload. But we might implement a synchronous
configuration notification in the future, using the admin socket for example.
Being able to signal the master to reload over an admin socket and get
'feedback' about its results would likely also solve my 'reload feedback'
problem.
Let's consider that a feature request :).
I really think it will eventually happen because the master is the most
interesting process to act on in multi-process environments. I'm just
cautious, observing how much nbproc continues to be used once threads
start to appear in configs : if we end up always using a master with a
single worker made of multiple threads, maybe the master socket will
be very limited. If nbproc keeps making sense despite threads, a master
socket will need to do a lot of stuff.
Yes, this thought crossed my mind. Maybe nbproc will be obsolete in a year or two ;) Still, it might be nice to have a master and a worker process, so the master can read a new config/certificate, or open a new listening socket or other information, and pass it to the worker without restarting it, if 'small?' changes can apply while it's running.. For the moment it might still make sense to use nbproc, I guess, as workers can then crash separately (yes, I know that shouldn't ever happen..); it is probably a little more solid, and certainly a more proven design for haproxy than using threads.
Thanks!
Willy

Thanks for your responses :).

Regards,
PiBa-NL / Pieter

