Hi Phillip,
We used Jermy's suggestion for draining keep-alives with k8s as follows:
lifecycle:
  preStop:
    exec:
      command:
      - /bin/bash
      - -c
      - >-
        rm /etc/environment;
        traffic_ctl config set proxy.config.http.keep_alive_enabled_in 0;
        sleep 10;
        traffic_ctl server stop;
        sleep 5;
readinessProbe:
  exec:
    command:
    - cat
    - /etc/environment
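
One thing worth noting: the sleeps in the preStop hook have to fit inside the pod's termination grace period (k8s sends SIGKILL once it expires, default 30s), and the readiness probe timing controls how quickly k8s stops routing to the pod. A sketch of the surrounding pod spec for context (the field values are illustrative and the container name is made up):

spec:
  # must exceed the ~15s of sleeps in the preStop hook, or k8s
  # kills the container before the hook finishes (default is 30s)
  terminationGracePeriodSeconds: 30
  containers:
  - name: trafficserver   # hypothetical container name
    readinessProbe:
      exec:
        command:
        - cat
        - /etc/environment
      periodSeconds: 2      # probe often so the missing file is noticed quickly
      failureThreshold: 1   # a single failed probe marks the pod unready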
So prior to terminating the pod, k8s will run our preStop hook, where we do
the following:
1) remove the file so the readiness probe starts to fail, and thus k8s stops
sending us new connections
2) set keep_alive_enabled to 0, so any current keep-alive connections are
drained
3) gracefully stop traffic server (a variant that waits for connections to
drain instead of sleeping is sketched after this list)
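
As a possible refinement (we haven't tried this), the fixed "sleep 10" could be replaced by polling the open client connection count, assuming the proxy.process.http.current_client_connections metric is available via "traffic_ctl metric get" on your ATS build:

    rm /etc/environment
    traffic_ctl config set proxy.config.http.keep_alive_enabled_in 0
    # poll for up to 30s, stopping early once clients have drained
    for i in $(seq 1 30); do
      conns=$(traffic_ctl metric get proxy.process.http.current_client_connections | awk '{print $NF}')
      [ "${conns:-0}" -eq 0 ] && break
      sleep 1
    done
    traffic_ctl server stop
    sleep 5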
We did some limited-scope load testing with JMeter (our use case is live
HLS), and rolling updates passed without errors.
Interestingly enough, they also passed without steps 2) and 3) above; I'm
not sure why (maybe due to some JMeter keep-alive configuration).
Additionally, we tried using the nginx ingress controller. That one had some
errors with JMeter due to:
https://github.com/kubernetes/ingress-nginx/issues/489