I am struggling to find what most people do to automatically reconfigure a 
running deployment based on the number of pods & their IPs, particularly as an 
HPA scales it up and down.

I know I'm still a k8s newbie, so I still feel a little overwhelmed by the volume of 
documentation and new stuff I'm trying to get my head around.  Any pointers 
would be very much appreciated.


Setup:
----------------
GKE: 1.5.2 (no alpha features)
Deployment:  A simple Squid container, acting as a forward proxy for outbound 
traffic from another app on the same cluster.
Service:     LoadBalancer ~ needed so we can firewall access to it.
HPA:         min=1  max=20 pods ~ will be tuned to force the replica count up, 
since we need multiple source IPs rather than scaling on cpu/mem, then back 
down to 1 when dormant.

All of this is working fine as an HPA-controlled set of stand-alone pods with an 
LB in front.
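
For reference, the manifests are roughly this shape (trimmed; the image, names, 
ports and the CPU target are just placeholders, and I'm on the 1.5-era API 
versions):

apiVersion: extensions/v1beta1    # Deployment API group on GKE 1.5
kind: Deployment
metadata:
  name: squid
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: squid
    spec:
      containers:
      - name: squid
        image: sameersbn/squid     # placeholder squid image
        ports:
        - containerPort: 3128
---
apiVersion: v1
kind: Service
metadata:
  name: squid
spec:
  type: LoadBalancer
  selector:
    app: squid
  ports:
  - port: 3128
    targetPort: 3128
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: squid
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: squid
  minReplicas: 1
  maxReplicas: 20
  targetCPUUtilizationPercentage: 80   # will be swapped for something that forces the pod count up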

The following are the two pieces of config that need to reflect the size & IPs 
of the pods in the deployment (rough placeholder stanzas just below):

1)  Squid cache peering;  each pod needs to explicitly reference its peers, 
i.e. the other pods.
2)  Squid delay pools;  used to rate-control all source traffic to particular 
destination domains.  The total desired rate limit needs to be divided by the 
number of pods before it goes into the squid config.
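
For concreteness, the bits of squid.conf I mean look roughly like this (shown 
here as the ConfigMap I'd like to keep up to date; the IPs, rates and ACL are 
made-up placeholders):

apiVersion: v1
kind: ConfigMap
metadata:
  name: squid-conf
data:
  squid.conf: |
    http_port 3128

    # 1) peering: one cache_peer line per *other* pod; these IPs are the moving target
    cache_peer 10.60.1.7  sibling 3128 3130
    cache_peer 10.60.2.12 sibling 3128 3130

    # 2) delay pools: the per-pod rate should be (total budget / current replica count)
    acl limited_dst dstdomain .example.com
    delay_pools 1
    delay_class 1 1
    delay_access 1 allow limited_dst
    delay_parameters 1 250000/250000   # e.g. 1,000,000 bytes/s total across 4 pods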

What I've tried / considered:

1)  Squid Cache peering
--------------------------

REJECTED OPTIONS:
1.a)  Squid ICP multicast.  No; multicast isn't supported on GCP networks.
1.b)  A scheduled Job to update the squid.conf in a ConfigMap with the IPs of 
all the pods.  No; a Deployment won't pick up a changed ConfigMap without the 
pods being recreated / cycled, which means new IPs, so the config is 
immediately stale again.
1.c)  A hardcoded block of peers covering every IP a pod might get.  No; the 
--cluster-ipv4-cidr option used when creating the GKE cluster insists on a 
minimum of /19, i.e. >8k IPs :(  I'm not going to try that in a squid.conf.
1.d)  Use a Job to manage the squid.conf on a persistent disk attached as a 
volume.  Mount the disk readOnly on all squid pods & figure out how to get 
squid to reload its config without the pod restarting and getting a new IP.  
No; as discovered, even if you segregate the nodes the squid pods run on from a 
single manager pod via node pools, you can't mount the disk readOnly in the 
squid pods and simultaneously readWrite in a single manager Job or external 
process.
1.e)  Use the NFS volume option.  No; I don't want to deploy anything 
old-school outside of Kubernetes, and if you run an NFS head unit on k8s you 
hit the same problem as 1.d above.
 
OPTIONS CURRENTLY BEING CONSIDERED:
1.f)  Use the gitRepo volume option.  Have a Job push the updated config to 
GitHub, have all pods pull it from GitHub, & figure out how to auto-trigger 
squid to pick up its new config without the pods needing to be restarted....
1.g)  Can I predictably name pods & DNS-resolve them?  e.g. 
appX-pod01.{namespace}.cluster.local ?  Then hardcode all potential pods by 
name in the squid.conf.  (See the rough sketch after this list.)
1.h)  Can I be prescriptive and assign a smaller cluster IP range to the pods 
when I load a particular deployment?  Then hardcode all potential IPs in 
squid.conf?
1.i)  Just do squid peering through an LB and accept the losses.
1.j)  Ditch squid & find something else (Varnish?) that is better suited to 
Kubernetes?
1.k)  ..... ? help :(
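
Re 1.g, the sort of thing I'm imagining but haven't verified: a second, 
headless Service over the same pods, so that one DNS name resolves to the A 
records of whatever pods currently exist, combined with something like 
kubectl exec <pod> -- squid -k reconfigure to make squid re-read its config 
without a restart.  Whether squid's cache_peer copes sensibly with a 
multi-A-record hostname is exactly the part I don't know.

# Sketch only: a headless Service, purely for peer discovery via DNS.
# "squid-peers.<namespace>.svc.cluster.local" should return one A record per ready pod.
apiVersion: v1
kind: Service
metadata:
  name: squid-peers
spec:
  clusterIP: None        # headless: DNS returns the pod IPs directly, no VIP
  selector:
    app: squid
  ports:
  - port: 3128

Stable, predictable per-pod names (the appX-pod01 style in 1.g) would, as far 
as I can tell, need a StatefulSet rather than a Deployment, and I'm not sure 
how well that plays with the HPA.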


2) Squid delay pools / rate control
--------------------------
 ~ I've just been concentrating on the above as it's the harder problem, and 
presumed that once it's solved I'll also be able to take a template squid.conf 
and divide the delay-pool rate limits by the number of pods before using it on 
all pods in the deployment.
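
E.g., with a total budget of 1,000,000 bytes/s to a rate-limited domain, the 
generated per-pod line would be delay_parameters 1 250000/250000 while the HPA 
sits at 4 replicas, and would need to become 100000/100000 at 10 replicas, so 
whatever regenerates the peer list also has to rewrite that line (and trigger 
the same squid reload) at the same time.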
