[squid-users] Timeout Directives

2011-06-15 Thread RM
I am using the myip ACL and the tcp_outgoing_address directive so that
my Squid configuration can have multiple IP addresses like the
following (full configuration at the very end of message):

acl ip1 myip 1.1.1.1
acl ip2 myip 2.2.2.2
acl ip3 myip 3.3.3.3
tcp_outgoing_address 1.1.1.1 ip1
tcp_outgoing_address 2.2.2.2 ip2
tcp_outgoing_address 3.3.3.3 ip3

If I use proxy IP address 1.1.1.1 to visit www.website.com and then
use proxy IP address 2.2.2.2 to visit www.website.com less than 5
seconds later, both visits are recorded as 1.1.1.1. However, if I wait
5+ seconds between using 1.1.1.1 and 2.2.2.2 to visit www.website.com,
then www.website.com correctly records one hit from 1.1.1.1 and one
hit from 2.2.2.2.

Basically, I need to configure Squid so that if I use 1.1.1.1 and then
2.2.2.2 to connect to www.website.com within a span of less than 5
seconds, each IP address is recorded.

I'm guessing there is some timeout or similar configuration that I am
missing that is causing this. Can anyone point me in the right
direction?
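
One thing I plan to experiment with, though this is purely a guess on my part: if Squid is reusing a persistent server-side connection, a second request arriving within the keep-alive window may be sent down the connection already bound to the first outgoing address. Disabling server-side persistent connections should force a fresh connection, and a fresh tcp_outgoing_address evaluation, per request:

```
# Guess: open a new server connection for every request so the
# tcp_outgoing_address ACLs are applied each time.
server_persistent_connections off
```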

I am using Squid 2.6.STABLE21 on CentOS 5.6.

Thanks in advance.

-Ron

-
Full squid.conf configuration
-

http_port 8080

# OPTIONS WHICH AFFECT THE NEIGHBOR SELECTION ALGORITHM
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache

# OPTIONS WHICH AFFECT THE CACHE SIZE
cache_mem 1 MB
cache_swap_low 90
cache_swap_high 95
maximum_object_size 1 MB
maximum_object_size_in_memory 50 KB
cache_replacement_policy heap LFUDA

# LOGFILE PATHNAMES AND CACHE DIRECTORIES
cache_dir aufs /squid/919191-919191 5 16 256
access_log /var/log/squid/access.log squid
pid_filename /var/run/squid-919191-919191.pid

# OPTIONS FOR EXTERNAL SUPPORT PROGRAMS
hosts_file /etc/hosts


# OPTIONS FOR TUNING THE CACHE
refresh_pattern .   0   20% 4320
quick_abort_min 0 KB
quick_abort_max 0 KB


# TIMEOUTS
half_closed_clients off
persistent_request_timeout 0 seconds

# ACCESS CONTROLS

acl ip1 myip 1.1.1.1
acl ip2 myip 2.2.2.2
acl ip3 myip 3.3.3.3

acl ipauth src 1.2.3.4
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl Safe_ports port 80 443
acl CONNECT method CONNECT
acl blocked_urls dstdomain /etc/squid/blocked_urls
acl blocked_regex url_regex /etc/squid/blocked_regex
http_access deny blocked_urls
http_access deny blocked_regex
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !Safe_ports
http_access allow ipauth
http_access allow localhost
http_access deny all
http_reply_access allow all
icp_access allow all

tcp_outgoing_address 1.1.1.1 ip1
tcp_outgoing_address 2.2.2.2 ip2
tcp_outgoing_address 3.3.3.3 ip3

# MISCELLANEOUS
logfile_rotate 10
memory_pools off
forwarded_for off
log_icp_queries off
client_db off
buffered_logs on
header_access X-Forwarded-For deny all
header_access Proxy-Connection deny all
header_access Via deny all
header_access Cache-Control deny all
header_access All allow all

# DELAY POOL PARAMETERS (all require DELAY_POOLS compilation option)
coredump_dir /squid/919191-919191

##5Mbps
delay_pools 1
delay_class 1 1
delay_parameters 1 655360/655360
delay_access 1 allow all
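
As a sanity check on the ##5Mbps comment above: delay_parameters takes restore/max rates in bytes per second, so 5 Mbit/s (using binary megabits) works out to the 655360 figure. The conversion below is my own arithmetic, not something from the Squid docs:

```python
def mbps_to_bytes_per_sec(mbps):
    # 1 Mbit = 1024 * 1024 bits; divide by 8 bits per byte.
    return mbps * 1024 * 1024 // 8

print(mbps_to_bytes_per_sec(5))  # 655360, the ##5Mbps pool above
print(mbps_to_bytes_per_sec(1))  # 131072, the 1Mbps pool in my older configs
```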


Re: [squid-users] Client bypassing delay pool restrictions

2010-11-15 Thread RM
Hi Amos,

It was my understanding that my quick_abort settings would do the
exact opposite. The manual states the following:

If you do not want any retrieval to continue after the client has
aborted, set both 'quick_abort_min' and 'quick_abort_max' to '0 KB'.

I did, however, play around with both of these settings, changing them
to '1 KB' and '100 KB', and the client is still able to transfer at
5Mbps+.

I am almost certain the client is not tech savvy enough to perform any
of the described tricky behavior.

Any other suggestions would greatly be appreciated!

Thanks again,
Ron M.

On Mon, Nov 15, 2010 at 5:40 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 15/11/10 20:05, RM wrote:

 Hello all,

 I am running Squid Cache: Version 2.6.STABLE21 on CentOS 5.5 and have
 been using delay pools to limit clients' bandwidth usage. Here is the
 delay pool section and related ACL of the squid.conf file. I have
 included the entire squid.conf at the end of the message:

 acl all src 0.0.0.0/0.0.0.0
 delay_pools 1
 delay_class 1 1
 #1Mbps
 delay_parameters 1 131072/131072
 delay_access 1 allow all

 I have used the above delay pool configuration countless times before
 without any issue, but for some reason there is a client that is able
 to bypass the delay pool bandwidth restriction and transfer at rates
 of 5Mbps+.

 Any help would greatly be appreciated.

 Thanks in advance!

 Ron M.


 More likely that those requests are ones where the client actually
 disconnected. Your quick_abort settings configure Squid to keep going after a
 disconnect. This happens outside the pooling since there is no client to
 pool.

 It *might* be a client doing some tricky request behaviour. You could pick
 this up by a) these requests are *all* MISS requests (indicating an
 only-if-cached header preventing slow network access), or b) these requests
 follow an earlier request within a very short time (indicating a leech
 re-attachment once the above pool detachment has been done by Squid).

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3



[squid-users] Client bypassing delay pool restrictions

2010-11-14 Thread RM
Hello all,

I am running Squid Cache: Version 2.6.STABLE21 on CentOS 5.5 and have
been using delay pools to limit clients' bandwidth usage. Here is the
delay pool section and related ACL of the squid.conf file. I have
included the entire squid.conf at the end of the message:

acl all src 0.0.0.0/0.0.0.0
delay_pools 1
delay_class 1 1
#1Mbps
delay_parameters 1 131072/131072
delay_access 1 allow all

I have used the above delay pool configuration countless times before
without any issue, but for some reason there is a client that is able
to bypass the delay pool bandwidth restriction and transfer at rates
of 5Mbps+.

Any help would greatly be appreciated.

Thanks in advance!

Ron M.


==
squid.conf
==
http_port 8080
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
cache_mem 1 MB
cache_swap_low 90
cache_swap_high 95
maximum_object_size 1 MB
maximum_object_size_in_memory 50 KB
cache_replacement_policy heap LFUDA
cache_dir aufs /var/spool/squid 5 16 256
access_log /var/log/squid/access.log squid
auth_param basic program /usr/lib64/squid/ncsa_auth /etc/squid/passwd
auth_param basic children 2
auth_param basic realm Password Protected Area
auth_param basic credentialsttl 24 hours
auth_param basic casesensitive on
pid_filename /var/run/squid.pid
hosts_file /etc/hosts
refresh_pattern .   0   20% 4320
quick_abort_min 0 KB
quick_abort_max 0 KB
half_closed_clients off
persistent_request_timeout 0 seconds
acl ip0 myip 123.123.123.123
acl ip1 myip 124.124.124.124
acl pwauth proxy_auth REQUIRED
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl Safe_ports port 80 443
acl CONNECT method CONNECT
acl blocked_urls dstdomain /etc/squid/blocked_urls
acl blocked_regex url_regex /etc/squid/blocked_regex
http_access deny blocked_urls
http_access deny blocked_regex
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !Safe_ports
http_access allow pwauth
http_access allow localhost
http_access deny all
http_reply_access allow all
icp_access allow all
tcp_outgoing_address 123.123.123.123 ip0
tcp_outgoing_address 124.124.124.124 ip1
logfile_rotate 10
memory_pools off
forwarded_for off
log_icp_queries off
client_db off
buffered_logs on
coredump_dir /var/spool/squid
delay_pools 1
delay_class 1 1
#1Mbps
delay_parameters 1 131072/131072
delay_access 1 allow all


[squid-users] A single website is loading slow

2010-09-07 Thread RM
I am having issues with just a single website loading very very slowly
through Squid. The problematic website loads fine without a proxy but
takes several minutes to load through Squid. All other websites load
perfectly fine.  I have tried the following:

1) I originally thought the issue was DNS related so I changed the
nameservers that Squid uses by using dns_nameservers. I tried
several different local nameservers and then eventually tried free
services such as Google's and OpenDNS's. No luck.

2) To further convince myself it was not DNS, I entered the website's
IP/host information into /etc/hosts and used Squid's hosts_file
directive to use /etc/hosts. This did not help either.

Squid was restarted each time after making the above changes.

Here are the access.log entries related to loading the website (URL
and IP addresses have been changed).

1283907376.404    320   222.222.222.222 TCP_MISS/301 508 GET
http://website.com username DIRECT/111.111.111.111 text/html
1283907415.924  39277   222.222.222.222 TCP_MISS/200 62371 GET
http://www.website.com/ username DIRECT/111.111.111.111 text/html

As you can see, the first log entry appears quickly after attempting
to load the website. The title of the website appears in the web
browser's title bar almost immediately but the content of the website
does not load until much later.
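
If I am reading the second column correctly (elapsed time in milliseconds in the native log format), the redirect came back in 320 ms but the page body took about 39 seconds. Here is a rough way to pull those numbers out of a log line; the field positions are my assumption based on the default `squid` format:

```python
def parse_squid_line(line):
    """Extract the fields of interest from a native-format access.log line."""
    fields = line.split()
    return {
        "elapsed_ms": int(fields[1]),  # 2nd column: elapsed time in ms
        "status": fields[3],           # 4th column, e.g. TCP_MISS/200
        "url": fields[6],              # 7th column: request URL
    }

slow = parse_squid_line(
    "1283907415.924  39277   222.222.222.222 TCP_MISS/200 62371 GET "
    "http://www.website.com/ username DIRECT/111.111.111.111 text/html"
)
print(slow["url"], slow["elapsed_ms"] / 1000, "seconds")
```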

Any help is much appreciated.

Thanks!

Ron


Re: [squid-users] A single website is loading slow

2010-09-07 Thread RM
On Tue, Sep 7, 2010 at 8:21 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On Tue, 7 Sep 2010 19:31:45 -0700, RM bearm...@gmail.com wrote:
 I am having issues with just a single website loading very very slowly
 through Squid. The problematic website loads fine without a proxy but
 takes several minutes to load through Squid. All other websites load
 perfectly fine.  I have tried the following:

 1) I originally thought the issue was DNS related so I changed the
 nameservers that Squid uses by using dns_nameservers. I tried
 several different local nameservers and then eventually tried free
 services such as Google's and OpenDNS's. No luck.

 2) To further convince myself it was not DNS, I entered the website's
 IP/host information into /etc/hosts and used Squid's hosts_file
 directive to use /etc/hosts. This did not help either.

 Squid was restarted each time after making the above changes.

 Here are the access.log entries related to loading the website (URL
 and IP addresses have been changed).

 1283907376.404    320   222.222.222.222 TCP_MISS/301 508 GET
 http://website.com username DIRECT/111.111.111.111 text/html
 1283907415.924  39277   222.222.222.222 TCP_MISS/200 62371 GET
 http://www.website.com/ username DIRECT/111.111.111.111 text/html

 As you can see, the first log entry appears quickly after attempting
 to load the website. The title of the website appears in the web
 browser's title bar almost immediately but the content of the website
 does not load until much later.

 Any help is much appreciated.

 You have erased the vital information about *which* website URL and
 *where* it is. Have not provided any information about which squid version
 you are talking about either.

 To get any type of useful help you need to present enough facts for
 someone else to replicate the problem please.

 All we can do at this point is say yes. Your log shows that a website is
 loading slowly. Other sites work fine? then conclude that the problems is
 not in Squid itself but somewhere else which impacts Squid.

 Amos


The website is www.realestate.com

I am using Squid Cache: Version 2.6.STABLE21 on CentOS 5.5 32-bit

Thanks.


[squid-users] Nameservers are operational but Squid cannot resolve

2009-03-14 Thread RM
A few days ago, Squid was working perfectly fine. I have neither made
changes to Squid nor any changes to system configuration files. In
fact, I have not even logged into the server since the problems arose.
Today, I am unable to visit websites while surfing through Squid. When
my web browser is configured to use the Squid proxy, I get the
username and password prompt as I should (I use ncsa_auth). However, I
then get the following error which appears to be DNS related:



The requested URL could not be retrieved



While trying to retrieve the URL: http://www.google.com/

The following error was encountered:

Unable to determine IP address from host name for www.google.com
The dnsserver returned:

Refused: The name server refuses to perform the specified operation.
This means that:

 The cache was not able to resolve the hostname presented in the URL.
 Check if the address is correct.
Your cache administrator is root.




My access.log shows the following entries:

1237099689.372 13 xx.xx.xx.xx TCP_HIT/301 546 GET
http://google.com/ myusername NONE/- text/html
1237099689.515 15 xx.xx.xx.xx TCP_MISS/503 1512 GET
http://www.google.com/ myusername DIRECT/www.google.com text/html

So it appears that the problem is DNS related. However, I am able to
resolve domain names with the same nameservers that Squid is using
when I ping google.com or other domains. I have also tried other
nameservers which I have verified work, and Squid still refused to
resolve domains to IP addresses.

Any help is appreciated!


[squid-users] Re: Nameservers are operational but Squid cannot resolve

2009-03-14 Thread RM
On Sat, Mar 14, 2009 at 3:44 PM, RM bearm...@gmail.com wrote:
 A few days ago, Squid was working perfectly fine. I have neither made
 changes to Squid nor any changes to system configuration files. In
 fact, I have not even logged into the server since the problems arose.
 Today, I am unable to visit websites while surfing through Squid. When
 my web browser is configured to use the Squid proxy, I get the
 username and password prompt as I should (I use ncsa_auth). However, I
 then get the following error which appears to be DNS related:



 The requested URL could not be retrieved

 

 While trying to retrieve the URL: http://www.google.com/

 The following error was encountered:

 Unable to determine IP address from host name for www.google.com
 The dnsserver returned:

 Refused: The name server refuses to perform the specified operation.
 This means that:

  The cache was not able to resolve the hostname presented in the URL.
  Check if the address is correct.
 Your cache administrator is root.




 My access.log shows the following entries:

 1237099689.372     13 xx.xx.xx.xx TCP_HIT/301 546 GET
 http://google.com/ myusername NONE/- text/html
 1237099689.515     15 xx.xx.xx.xx TCP_MISS/503 1512 GET
 http://www.google.com/ myusername DIRECT/www.google.com text/html

 So it appears that the problem is DNS related. However, I am able to
 resolve domain names with the same nameservers that Squid is using
 when I ping google.com or other domains. I have also tried other
 nameservers which I have verified work, and Squid still refused to
 resolve domains to IP addresses.

 Any help is appreciated!


Another thing I forgot to mention is that I have tried completely
disabling my IPTables firewall and the problem persisted.


[squid-users] Data transfer limit

2008-10-16 Thread RM
I've tried searching through the archives for data transfer limits but
all I can find is stuff on bandwidth limiting through the use of delay
pools to restrict users to a specific transfer rate.

Here is my situation. I have a Squid server on the internet that users
around the world can connect to but it requires that they know their
own username and password (this is not an open proxy) in order to
connect. So I have this in my squid.conf:

auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/passwd
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off

acl ncsa_users proxy_auth REQUIRED
http_access allow ncsa_users

/etc/squid/passwd has a list of usernames and their associated
passwords. How can I limit each user to a specific amount of data
transfer per month, such as 100GB? I do not want to limit the transfer
rate to something like 64kbps. I want my users to use my full 10Mbps
connection if they can, but once they reach 100GB of transferred data,
I want to disable them.

Is this possible?
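
If Squid cannot do this natively, one idea I have (entirely my own sketch, not something from the docs) is to total each user's traffic from access.log periodically and remove users from /etc/squid/passwd once they cross the quota. Field positions here are my assumption from the default native log format:

```python
from collections import defaultdict

QUOTA_BYTES = 100 * 1024 ** 3  # my 100GB monthly cap

def bytes_per_user(log_lines):
    """Total bytes delivered per authenticated user from native log lines."""
    totals = defaultdict(int)
    for line in log_lines:
        fields = line.split()
        totals[fields[7]] += int(fields[4])  # 8th col: user, 5th col: bytes
    return totals

sample = [
    "1224115200.000 120 10.0.0.1 TCP_MISS/200 52428800 GET "
    "http://example.com/file alice DIRECT/93.184.216.34 application/octet-stream",
]
over_quota = [u for u, b in bytes_per_user(sample).items() if b >= QUOTA_BYTES]
print(over_quota)  # users to disable; empty here, 50 MB is under the cap
```

A cron job could run something like this against the rotated logs each night and rewrite the passwd file accordingly.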

Thanks


Re: [squid-users] Slow for one user, fast for everyone else

2008-10-06 Thread RM
On Mon, Oct 6, 2008 at 1:45 AM, Pieter De Wit [EMAIL PROTECTED] wrote:
 Hi JL,

 Does your server use DNS in its logging? Perhaps it's reverse DNS?

 If he downloads a big file, does the speed pick up ?

 Cheers,

 Pieter

 JL wrote:

 I have a server setup which provides an anonymous proxy service to
 individuals across the world. I have one specific user that is
 experiencing very slow speeds. Other users performing the very same
 activities do not experience the slow speeds, myself included. I asked
 the slow user to do traceroutes and it appeared there were no network
 routing issues but for some reason it is VERY slow for him to the
 point of being unusable. The slow user can perform the same exact
 activities perfectly fine using another proxy service but with my
 proxy it is too slow.

 Any help is appreciated.




Thanks Pieter for the reply.

I am not sure what you mean by DNS in its logging. I am assuming you
mean that hostnames, as opposed to IP addresses, are logged. If so,
that is not the case; only IP addresses appear in the Squid logs. I
realize you are probably also referring to reverse DNS for the user,
but just in case you mean reverse DNS for the server, I do have
reverse DNS set up for the server IPs.

I will have to ask to see if big downloads speed up for the user.

Any other help is appreciated.


Re: [squid-users] Slow for one user, fast for everyone else

2008-10-06 Thread RM
On Mon, Oct 6, 2008 at 4:08 AM, RM [EMAIL PROTECTED] wrote:
 On Mon, Oct 6, 2008 at 1:45 AM, Pieter De Wit [EMAIL PROTECTED] wrote:
 Hi JL,

 Does your server use DNS in its logging? Perhaps it's reverse DNS?

 If he downloads a big file, does the speed pick up ?

 Cheers,

 Pieter

 JL wrote:

 I have a server setup which provides an anonymous proxy service to
 individuals across the world. I have one specific user that is
 experiencing very slow speeds. Other users performing the very same
 activities do not experience the slow speeds, myself included. I asked
 the slow user to do traceroutes and it appeared there were no network
 routing issues but for some reason it is VERY slow for him to the
 point of being unusable. The slow user can perform the same exact
 activities perfectly fine using another proxy service but with my
 proxy it is too slow.

 Any help is appreciated.




 Thanks Pieter for the reply.

 I am not sure what you mean by DNS in its logging. I am assuming you
 mean that hostnames, as opposed to IP addresses, are logged. If so,
 that is not the case; only IP addresses appear in the Squid logs. I
 realize you are probably also referring to reverse DNS for the user,
 but just in case you mean reverse DNS for the server, I do have
 reverse DNS set up for the server IPs.

 I will have to ask to see if big downloads speed up for the user.

 Any other help is appreciated.


One thing I forgot to ask is: if he downloads a big file and the speed
picks up, what does this say and how do I fix the problem?

Any other suggestions are appreciated as well.