Hey Szabolcs,
 
Since Amos answered your question regarding a simple VM, I would like to address the k8s part.
 
A huge Kubernetes cluster is good for very specific use cases.
Out of the box it is not “easy” to scale, change the config, or update; you will need to build that yourself, since there are no ready-to-use solutions for these on k8s.
‘kubectl apply -f x.yaml’ is not really a good answer to every scaling problem.
Also take into account that you will probably have issues with cache HITs if the load-distribution algorithm cannot route the same requests to the same proxy.
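One common way around that (just an example, not the only option) is a small front layer of Squid instances that pick the parent cache by URL hash with CARP, so the same URL always lands on the same backend. A minimal sketch, with placeholder IPs and ports:

    # frontend squid.conf: choose the parent cache by URL hash (CARP)
    cache_peer 10.0.0.11 parent 3128 0 carp no-query
    cache_peer 10.0.0.12 parent 3128 0 carp no-query
    # never fetch directly, always go through one of the parents
    never_direct allow all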
 
With k8s, since the *big* clusters usually run on bare metal, it is possible to get up to 30 percent more performance than on VMs. For the same reason the network latency inside a k8s cluster is low; in most k8s clusters the traffic between nodes behaves almost like shared memory.
 
It’s possible to define the specs of the project and assess from there.
HAProxy will be able to handle 40k clients without any issues, and to allow full HA you will need two HAProxy machines.
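As a rough sketch, a minimal haproxy.cfg for the Squid pool could look like the below (addresses are placeholders; the same config would run on both HAProxy machines, with something like keepalived holding a shared VIP between them):

    listen squid_pool
        bind *:3128
        mode tcp
        # hash on the client IP so a client keeps hitting the same Squid
        balance source
        server squid1 10.0.0.11:3128 check
        server squid2 10.0.0.12:3128 check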
The real issue with such a setup is how the config is applied.
For example, a big set of blacklist and whitelist domains might be better stored outside of Squid. Depending on your requirements, you might be able to use either ufdbGuard or another solution.
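For plain domain lists, Squid itself can read them from external files and re-read them without a restart, which covers your “modify lists during working hours” requirement (paths are placeholders):

    acl blacklist dstdomain "/etc/squid/blacklist.txt"
    acl whitelist dstdomain "/etc/squid/whitelist.txt"
    http_access allow whitelist
    http_access deny blacklist
    # after editing the files:
    #   squid -k reconfigure

For really big lists this gets slow, since the files are loaded into memory on every reconfigure; that is where ufdbGuard starts to make sense.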
 
There isn’t much difference between a containerized Squid and a VM. Actually, in the case of a simple forward proxy it is pretty simple to run a containerized Squid on top of a VM (which is how most k8s clusters run these days anyway).
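As a minimal sketch, assuming the ubuntu/squid image from Docker Hub and a config file on the VM (both just examples, any Squid image works the same way):

    docker run -d --name squid \
        -p 3128:3128 \
        -v /srv/squid/squid.conf:/etc/squid/squid.conf:ro \
        ubuntu/squid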
 
As for autoscaling Squid containers on top of k8s, you will probably need to invest a lot more than with a VM to make it fit your needs.
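To give an idea of what that investment means: a basic CPU-based HorizontalPodAutoscaler like the sketch below (names are hypothetical) is the easy part; connection draining, cache warm-up and HIT-aware routing you would still have to build yourself.

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: squid
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: squid
      minReplicas: 2
      maxReplicas: 10
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 70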
 
You mentioned more administration time on a VM; in my opinion and experience that is not true.
There isn’t much of a difference between a VM and a container for a simple forward Squid setup.
(It would be different if you needed interception of connections.)
 
If you can share more details on the required setup itself, it should be pretty simple to find the right way to a good solution.
 
I really recommend reading the following article, which touches on many aspects of k8s vs VMs:
https://ably.com/blog/no-we-dont-use-kubernetes
 
I can try to give you an idea for an implementation on VMs, but I am still missing a couple of pieces to recommend the best one.
 
Yours,
Eliezer
 
----
Eliezer Croitoru
NgTech, Tech Support
Mobile: +972-5-28704261
Email: ngtech1...@gmail.com
Web: https://ngtech.co.il/
My-Tube: https://tube.ngtech.co.il/
 
From: squid-users <squid-users-boun...@lists.squid-cache.org> On Behalf Of 
Pintér Szabolcs
Sent: Tuesday, 20 September 2022 22:52
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Squid performance recommendation
 
Hi squid community,
I need to find the best and most sustainable way to build a stable High Availability squid cluster/solution for about 40k users.
Parameters: I need HA, caching (small objects only, nothing like big Windows updates), scaling (this is only secondary), and I want to be able to use and modify complex black- and whitelists in production, during working hours.
I have some idea:

1. A huge kubernetes cluster
pro: Easy to scale, change the config and update.
contra: I'm afraid of the network latency (because of the extra layers, e.g. the VM network stack, the kubernetes network stack with vxlan, etc.).
2. Simple VMs with a HAProxy in tcp mode
pro: less network latency (I think)
contra: more time for administration


Does anybody have experience with squid on kubernetes (or a similar technology) with a large number of users?

What do you think, which is the best solution, or do you have another idea for the implementation?

Thanks!

Best, Szabolcs
-- 
Pintér Szabolcs Péter
H-1117 Budapest, Neumann János u. 1. A épület 2. emelet
+36 1 489-4600 
+36 30 471-3827 
spin...@npsh.hu

_______________________________________________
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
