Re: [squid-users] Client randomly reloads pages every second - cache/parent issue?

2012-11-26 Thread Stefan Bauer
-Original Message- From: Amos Jeffries Sent: Tue 06.11.2012 13:31 Subject: Re: [squid-users] Client randomly reloads pages every second - cache/parent issue? To: squid-users@squid-cache.org > On 6/11/2012 10:39 p.m., Stefan Bauer wrote: > > Dear Users/Developers...

Re: [squid-users] ROCK store and UFS (Squid 3.2.3)

2012-11-26 Thread Amos Jeffries
On 27.11.2012 11:44, Horacio H. wrote: Hi, I'm testing Squid 3.2.3 and wanted to use ROCK store combined with UFS or AUFS. Yes, I know it's not currently supported (http://wiki.squid-cache.org/Features/RockStore#limitations), but I did some tests anyway (Yes, I forgot). The limitation is about...

[squid-users] ROCK store and UFS (Squid 3.2.3)

2012-11-26 Thread Horacio H.
Hi, I'm testing Squid 3.2.3 and wanted to use ROCK store combined with UFS or AUFS. Yes, I know it's not currently supported (http://wiki.squid-cache.org/Features/RockStore#limitations), but I did some tests anyway (Yes, I forgot). Doomed to failure, I added these two lines to Squid's default configuration...
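The two added lines are cut off in the archive. A hypothetical reconstruction of the kind of combined setup being described (directory paths and sizes are assumptions, not taken from the original mail):

  # Hypothetical sketch: a rock dir for small objects plus an aufs dir for the rest
  cache_dir rock /var/spool/squid/rock 1024 max-size=32768
  cache_dir aufs /var/spool/squid/aufs 4096 16 256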

Re: [squid-users] Cache_peer load balancing

2012-11-26 Thread Eliezer Croitoru
Hey Sekar, The first thing to do is to actually describe/understand the infrastructure. I would start by saying that balancing on the same machine is better done at the firewall level rather than by another process. You can do that by using iptables and some modules. Regards, Eliezer On 11/26/2012...
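One way to do this kind of firewall-level split (the 'statistic' module is an assumption here; Eliezer does not name the modules he means) is to alternate incoming connections between the two instances:

  # Send every second new connection on port 3130 to the first instance...
  iptables -t nat -A PREROUTING -p tcp --dport 3130 -m statistic --mode nth --every 2 --packet 0 -j REDIRECT --to-ports 3128
  # ...and the remainder to the second instance.
  iptables -t nat -A PREROUTING -p tcp --dport 3130 -j REDIRECT --to-ports 3129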

Re: [squid-users] Cache_peer load balancing

2012-11-26 Thread Amos Jeffries
On 27.11.2012 10:38, Sekar Duraisamy wrote: Hi Team, Hi, seeing you post this four times already today I am inclined to ignore it as spam. Lucky for you I am bored this morning and will answer. I have 2 squid instances running on the same machine with different ports. Proxy: proxy1.com: 3128...

[squid-users] Cache_peer load balancing

2012-11-26 Thread Sekar Duraisamy
Hi Team, I have 2 squid instances running on the same machine with different ports. Proxy: proxy1.com: 3128 and 3129. Now I have created a new squid instance on the machine with port 3130 and I need to load-balance traffic of 3130 between 3128 and 3129. I have tried like this: acl xxx dstdomain xxx.co...
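For reference, the usual squid.conf way to spread requests over two parents is the round-robin peer option (the sketch below mirrors the ports from the question; the host and peer names are assumptions, not a confirmed answer from the thread):

  cache_peer 127.0.0.1 parent 3128 0 no-query round-robin name=peer3128
  cache_peer 127.0.0.1 parent 3129 0 no-query round-robin name=peer3129
  never_direct allow all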

[squid-users] Cache_peer config

2012-11-26 Thread Sekar Duraisamy
Hi Team, I have 2 squid instances running on the same machine with different ports. Proxy: proxy1.com: 3128 and 3129. Now I have created a new squid instance on the machine with port 3130 and I need to load-balance traffic of 3130 between 3128 and 3129. I have tried like this: acl xxx dstdomain xxx.co...

Re: [squid-users] HTTPS Disconnect with squid

2012-11-26 Thread Amos Jeffries
On 26/11/2012 6:20 p.m., Noc Phibee Telecom wrote: On 22/11/2012 11:06, Amos Jeffries wrote: On 22/11/2012 10:33 p.m., Noc Phibee Telecom wrote: Hi, we have a problem with squid. We have a lot of users using squid as a proxy to access an https site. The problem: we don't know why, but the...

Re: [squid-users] Re: Failover to upstream when direct link down

2012-11-26 Thread Amos Jeffries
On 26/11/2012 5:25 p.m., John Xue wrote: I did as you said, but I find every connection must wait 1 min (the read_timeout default) when the direct link is down; even set to 2 sec it is too slow. How do I make Squid choose the peer first until the direct link comes back up? If your link is down an ICMP pa...
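A minimal sketch of the kind of configuration being asked for (the peer hostname and timeout value are assumptions): prefer the parent over going direct, so a dead direct link does not stall every request for the full timeout:

  cache_peer upstream.example.com parent 3128 0 no-query default
  prefer_direct off
  connect_timeout 5 seconds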

Re: [squid-users] Re: How to increase buffer size in the helper

2012-11-26 Thread Amos Jeffries
On 26/11/2012 4:00 p.m., Eliezer Croitoru wrote: You are using a dinosaur! Squid 2.6 hasn't been supported for about 5 years. What you should do is use a newer version of squid. If you are using CentOS 5.5 you will have trouble finding an RPM for this version. I don't think that my RPM will work on...

Re: [squid-users] Rate limiting inbound requests in squid conf

2012-11-26 Thread Nick Fennell
A rule set like the one below may help:

  iptables -I INPUT -p tcp --dport 3128 -m state --state NEW -m recent --set
  iptables -I INPUT -p tcp --dport 3128 -m state --state NEW -m recent --update --seconds 60 --hitcount 30 -j DROP

This should result in new requests being dropped once more than 30 new connections have been seen within 60 seconds.
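To check which client addresses the 'recent' match is tracking, the kernel exposes the list under /proc (the path below is for the xt_recent module on newer kernels; older kernels use /proc/net/ipt_recent/ instead):

  cat /proc/net/xt_recent/DEFAULT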