Hi William,
I found the 'crash?' I was talking about earlier again.
Start haproxy like this:
haproxy -f /root/hap.conf -W -D -dk -q
Then issue a USR2 to the master. (The first parent/zombie is already
gone, so that's good, imho..)
It will temporarily start new workers, and then immediately everything
stops running..
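For completeness, the whole reproduction can be scripted; this is just a sketch of what I do by hand, and the pgrep lookup of the master pid is my own assumption (I don't use a pidfile here):

```shell
#!/bin/sh
# Reproduction sketch: start haproxy in master-worker daemon mode,
# then reload it by signalling the master with USR2.
# Guarded so it is a no-op on a box without haproxy installed.
if command -v haproxy >/dev/null 2>&1; then
    haproxy -f /root/hap.conf -W -D -dk -q

    # With -D the master reparents to init; pick the oldest haproxy pid.
    master=$(pgrep -o haproxy)

    # Reload: new workers start briefly, then everything stops (the bug).
    kill -USR2 "$master"

    sleep 1
    ps ax | grep '[h]aproxy'
fi
```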
Anyhow, looking forward to your replies.
Regards,
PiBa-NL
On 22-11-2017 at 17:48, PiBa-NL wrote:
Hi William,
I'm not 100% sure, but I think the stdout and stderr files should be
closed before the process exits? It seems to me they are not.
At least, with the following PHP script it fails to detect where the
output from haproxy ends, and it keeps waiting.
Without the -W it succeeds.
Could you check?
Regards,
PiBa-NL
#!/usr/local/bin/php-cgi -f
<?php
exec('killall haproxy');
$descriptorspec = array(
    0 => array("pipe", "r"), // stdin is a pipe that the child will read from
    1 => array("pipe", "w"), // stdout is a pipe that the child will write to
    2 => array("pipe", "w")  // stderr is a pipe that the child will write to
);
$cwd = '/root';
$env = array();
$process = proc_open('haproxy -f hap.conf -W -D -dk', $descriptorspec,
                     $pipes, $cwd, $env);
echo "\n#### START\n";
echo "\n#### procstatus\n";
print_r(proc_get_status($process));
if (is_resource($process)) {
    echo "\n#### ERROUT\n";
    while (false !== ($char = fgetc($pipes[2]))) {
        echo "$char";
    }
    echo "\n#### STDOUT\n";
    while (false !== ($char = fgetc($pipes[1]))) {
        echo "$char";
    }
    echo "\n#### DONE reading..";
    fclose($pipes[0]);
    fclose($pipes[1]);
    fclose($pipes[2]);
    $return_value = proc_close($process);
    echo "command returned $return_value\n";
} else {
    echo 'FAIL';
}
On 21-11-2017 at 16:34, PiBa-NL wrote:
Hi William,
I was intending to use the new feature to pass open sockets to the
next haproxy process, and thought that master-worker was a 'requirement'
to make that work, as it would manage the transfer of the sockets.
Now I'm thinking that's not actually how it works at all..
I could 'manually' pass -x /haproxy.socket to the next process
and make it take over the sockets that way, I guess? (How does this
combine with nbproc > 1 and multiple stats sockets bound to separate
processes?)
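For what it's worth, the 'manual' variant I have in mind would look roughly like this; a sketch under my own assumptions (the stats socket at /haproxy.socket would need `expose-fd listeners` set on it in the config, and the pidfile path is made up):

```shell
# Manual socket hand-over sketch, without master-worker:
# the new process fetches the listening FDs from the old one's
# stats socket (-x), then tells it to stop with -sf.
if command -v haproxy >/dev/null 2>&1; then
    old_pid=$(cat /var/run/haproxy.pid)
    haproxy -f /root/hap.conf -D -x /haproxy.socket -sf "$old_pid"
fi
```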
Though I can imagine a future where the master would perhaps provide
an aggregated stats and management socket to perform server status
changes.
Perhaps I should step away from using master-worker for the moment.
However, the -W -D combination doesn't seem to (fully) work as I
expected; responses below..
On 21-11-2017 at 2:59, William Lallemand wrote:
The master-worker was designed as a replacement for the
systemd-wrapper, and the systemd way to run a daemon is to keep it in
the foreground and pipe it to systemd so it can catch the errors on the
standard output.
However, it was also designed for normal people who want to daemonize,
so you can combine -W with -D, which will daemonize the master.
I'm not sure I get the issue there; the errors are still displayed upon
startup like in any other haproxy mode, there is really no change here.
I assume your only problem with your script is the daemonization, which
you can achieve by combining -W and -D.
I would prefer to both 'catch' startup errors and daemonize haproxy.
In my previous mail I'm starting it with -D, and -W is equivalent to
the global master-worker option in the config, so it 'should' daemonize,
right?
But it did not (properly?); I've just tried with both startup
parameters -D -W and the result is the same.
The master with pid 3061 is running under the system /sbin/init (pid
1); however, pid 2926 also keeps running. I would want/expect 2926
to exit when startup is complete.
I also just noted that 2926 actually becomes a 'zombie'? That
can't be good, right?
A kill -1 by itself won't tell whether a newly configured bind cannot
find the interface address to bind to, and a -c beforehand won't find
such a problem.
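To illustrate, the pre-flight check I mean; a sketch, and as said it only validates parsing, not whether the bind addresses are actually available:

```shell
# Config check sketch: -c parses the configuration without binding,
# so a bind to a missing interface address still passes here.
if command -v haproxy >/dev/null 2>&1; then
    if haproxy -c -f /root/hap.conf; then
        echo "config parses OK (binds not verified)"
    else
        echo "config is broken, do not reload" >&2
    fi
fi
```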
Upon a reload (SIGUSR2 on the master), the master will try to parse the
configuration again and start the listeners. If it fails, the master
will re-exec itself in a wait() mode and won't kill the previous
workers; the parsing/bind error should be displayed on the standard
output of the master.
I think I saw it exit, but I cannot reproduce it anymore with the
scenario of a wrong IP in the bind.. I might have issued a wrong
signal there when I tried (a USR1 instead of a USR2 or something).
It seems to work properly the way you describe.. (when properly
daemonized..)
Sorry for the noise on this part..
The end result that nothing is running, and the error causing that,
should however be 'caught' somehow for logging. Should haproxy itself
log it to syslog? But how will the startup script know to notify the
user of a failure?
Well, the master doesn't do syslog, because there might be no syslog
in your configuration. I think you should try the systemd way and log
the standard output.
I don't want to use systemd, but I do want to log standard output, at
least during initial startup..
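What I have in mind is something like the following from the startup script; a sketch, with a log path of my own choosing. Since parse errors are printed before haproxy goes into the background, both the exit status and the captured output should show a failed startup:

```shell
# Daemonize, but still capture startup errors to a file
# (the log path is an assumption of mine).
if command -v haproxy >/dev/null 2>&1; then
    if ! haproxy -f /root/hap.conf -W -D >/var/log/haproxy-start.log 2>&1; then
        echo "haproxy failed to start, see /var/log/haproxy-start.log" >&2
    fi
fi
```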
Would it be possible, when starting haproxy with -sf <PID>, for it to
tell whether the (original?) master was successful in reloading the
config / starting new workers? Or how should this be done?
That may be badly documented, but you are not supposed to use -sf
with the master-worker; you just have to send the USR2 signal to the
master and it will parse the configuration again, launch new workers,
and smoothly kill the previous ones.
Unfortunately signals are asynchronous, and we don't have a way yet to
return a bad exit code upon reload. But we might implement a
synchronous configuration notification in the future, using the admin
socket for example.
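So, if I understand correctly, the intended reload boils down to this; a sketch, assuming a `pidfile` is configured in the global section (the path is my assumption):

```shell
# Intended reload sketch: signal the master, not the workers.
# USR2 makes the master re-parse the config, start new workers,
# and smoothly stop the old ones.
if [ -r /var/run/haproxy.pid ]; then
    kill -USR2 "$(head -n 1 /var/run/haproxy.pid)"
fi
```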
Being able to signal the master to reload over an admin socket and
getting 'feedback' about the result would likely also solve my
'reload feedback' problem.
Let's consider that a feature request :).
Though maybe I shouldn't be using master-worker at all for the moment..
Currently a whole new set of master-worker processes seems to
take over..
Well, I suppose that's because you launched a new master-worker with
-sf; it's not supposed to be used that way, but it should work too if
you don't mind having a new PID.
I kind of expected this to indeed be 'as intended': -sf will fully
replace the old processes.
Thanks for your reply.
Regards,
PiBa-NL / Pieter