Re: [squid-users] cachemgr no Cache Client List

2008-11-20 Thread Henrik Nordstrom
On ons, 2008-11-19 at 19:26 -0500, Rick Chisholm wrote:
> anytime I check the Cache Client List I get:
> 
> Cache Clients:
> TOTALS
> ICP : 0 Queries, 0 Hits (  0%)
> HTTP: 0 Requests, 0 Hits (  0%)
> 
> even when I know the Cache has clients... it's weird.

Probably you have set "client_db off" in squid.conf.
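For reference, client_db defaults to on; a minimal squid.conf fragment to make sure the per-client database is enabled might look like this (any later "client_db off" line in the file would still win, so search for one):

```
# squid.conf -- per-client statistics database feeds the cachemgr
# "Cache Client List"; it defaults to on, so an explicit "off"
# elsewhere in the file is the usual cause of an empty list.
client_db on
```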

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Squid very slow

2008-11-20 Thread Henrik Nordstrom
On ons, 2008-11-19 at 22:06 -0400, Wilson Hernandez - MSD, S. A. wrote:

> The dnsserver returned:
> 
> Refused: The name server refuses to perform the specified operation. 
> 
> This means that:
> 
>  The cache was not able to resolve the hostname presented in the URL. 
>  Check if the address is correct. 

Or to be more exact: the DNS server used by your Squid refuses to
resolve the requested host name on Squid's behalf.

Check your DNS configuration: both the list of DNS servers used by
Squid (set in /etc/resolv.conf or squid.conf) and whether those servers
really do provide resolver functionality (recursive queries) to the
Squid host.
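One hedged way to verify the second point from the Squid host, assuming the BIND dig utility is installed (the server IP below is a placeholder), is to ask each listed server for a recursive lookup and check that the reply status is not REFUSED:

```shell
# Show which resolvers Squid will use (unless overridden by
# dns_nameservers in squid.conf):
cat /etc/resolv.conf

# Ask a listed server (192.0.2.53 is a placeholder) to recursively
# resolve a name. "status: REFUSED" reproduces Squid's error;
# "status: NOERROR" means recursion is being provided to this host.
dig @192.0.2.53 www.squid-cache.org +recurse
```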

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


[squid-users] Squid and Redirect Users to Content Analyser

2008-11-20 Thread Leandro Lustosa
Hi!

Please! I need help.

I have a content analyser and I am using squid-2.4-stable14 to
authenticate users accessing remote desktop (terminal services).

The Squid Proxy provides information about users to the content
analyser this way:


"http://www.site.com default:://user" (That's the wrong way. It should
be like below:)

"http://www.site.com user" (that's the right way)


How can I disable this string: "default://" in my Squid?

I am using Squid only to provide credentials for the content analyser.
It is searching for users in Novell's eDirectory.

Below you see my squid.conf and authentication test.

## -> Start squid.conf
auth_param basic program  /usr/lib/squid/squid_ldap_auth -v 3 -b
"o=EMPRESA" -s sub -d -D "cn=Rede,o=EMPRESA" -w MyPass -f
"(&(cn=%s)(objectClass=person))" -h 172.16.3.14 -p 389
auth_param basic children 30
auth_param basic realm Squid-Ldap EmpresaProxy
auth_param basic credentialsttl 2 hours
acl authusers proxy_auth REQUIRED
external_acl_type CHECA_GRUPO children=15 %LOGIN
/usr/lib/squid/squid_ldap_group -v 3 -b "o=EMPRESA" -s sub -d -D
"cn=Rede,o=EMPRESA" -w MyPass -f "objectClass=person" -h 172.16.3.14
-p 389 -f cn=%s
acl navegacao external CHECA_GRUPO Internet
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563
acl Safe_ports port 80  # http
acl Safe_ports port 90  # http-copel
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
#http_access allow all
http_access deny !authusers
http_reply_access allow all
icp_access allow all
always_direct allow all
coredump_dir /var/spool/squid
visible_hostname squid.dom.com.br
redirect_children 30
redirector_bypass off
redirect_program /opt/Websense/bin/WsRedtor # redirect users to
content analyser (websense)
## -> End squid.conf

##-> Start, testing auth squid with edirectory
[EMAIL PROTECTED] ~]# sh squid-ldap-test
usuario senha
user filter '(&(cn=usuario)(objectClass=person))', searchbase 'o=EMPRESA'
attempting to authenticate user
'cn=usuario,ou=INFORMATICA,ou=Sup-Administrativa,o=EMPRESA'
OK
##-> End, testing auth squid with edirectory

Thanks for the attention,



Re: [squid-users] Accessing a transparent cache on localhost

2008-11-20 Thread Jonathan Gazeley

Chris Robertson wrote:

Jonathan Gazeley wrote:
I'm new to Squid. I've successfully set up a transparent cache on a 
server which is also the gateway/firewall/NAT for a small LAN. All 
the clients on my LAN use the cache properly. However, the server 
running the cache doesn't use its own cache. I've inserted what I 
thought were the correct rules into my iptables config:


-A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128
-A PREROUTING -s 127.0.0.1/32 -p tcp --dport 80 -j REDIRECT --to-port 
3128
-A PREROUTING -s 192.168.0.1/32 -p tcp --dport 80 -j REDIRECT 
--to-port 3128
-A PREROUTING -s x.x.x.x/32 -p tcp --dport 80 -j REDIRECT --to-port 
3128 (external public IP)


I think it would need to be part of the OUTPUT chain.  But you would 
have to do some sort of packet marking to avoid matching packets from 
Squid to the internet (lest you create a forwarding loop).
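A sketch of that OUTPUT-chain approach, assuming Squid runs as the system user "squid" (the owner match only works in the OUTPUT chain, and it is what keeps Squid's own outbound fetches from being re-intercepted into a loop):

```
# nat table, OUTPUT chain -- locally generated traffic only.
# Let packets created by the Squid process itself pass untouched:
-A OUTPUT -p tcp --dport 80 -m owner --uid-owner squid -j RETURN
# Redirect everything else originating on this host into Squid:
-A OUTPUT -p tcp --dport 80 -j REDIRECT --to-port 3128
```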


It's probably far easier to set the environment variable "http_proxy" 
(e.g. "export http_proxy=http://localhost:3128").  Many utilities (yum, 
apt, wget, etc.) honor this.
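The environment-variable route needs no firewall rules at all; a minimal shell sketch using the proxy URL from Chris's example:

```shell
# Point command-line tools at the local Squid instance; yum, apt,
# wget and curl all consult this variable.
http_proxy=http://localhost:3128
export http_proxy

# Confirm what the tools will see:
echo "$http_proxy"   # -> http://localhost:3128
```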
Thanks Chris, this works well :) yum was the primary application I 
wanted to use the cache for anyway, as my LAN consists entirely of Fedora 9 
machines and caching the updates saves bandwidth. Mirroring the 
entire repository seemed a bit overkill in this case...



Chris


Re: [squid-users] I Need Help!

2008-11-20 Thread Amos Jeffries

Leandro Lustosa wrote:

Hi!

Please! I need help.

I have a content analyser and i am using squid-2.4-stable14 to
authenticate users to access remote desktop (terminal service).


Have you considered using a Squid from this end of the decade?
2.4 is long obsolete and buried in the archives.



The Squid Proxy provides information about users to the content
analyser this way:


"http://www.site.com default:://user" (That's the wrong way. It should
be like below:)

"http://www.site.com user" (that's the right way)


How can I disable this string: "default://" in my Squid?

I am using Squid only to provide credentials for the content analyser.
It's searching for users in Novell's edirectory.

Thanks for the attention,


Leandro Lustosa.


How are you configuring squid to pass data to the analyzer?

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.2


Re: [squid-users] Accessing a transparent cache on localhost

2008-11-20 Thread Amos Jeffries

Jonathan Gazeley wrote:

Hi,

I'm new to Squid. I've successfully set up a transparent cache on a 
server which is also the gateway/firewall/NAT for a small LAN. All the 
clients on my LAN use the cache properly. However, the server running 
the cache doesn't use its own cache. I've inserted what I thought were 
the correct rules into my iptables config:


-A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128
-A PREROUTING -s 127.0.0.1/32 -p tcp --dport 80 -j REDIRECT --to-port 3128
-A PREROUTING -s 192.168.0.1/32 -p tcp --dport 80 -j REDIRECT --to-port 
3128
-A PREROUTING -s x.x.x.x/32 -p tcp --dport 80 -j REDIRECT --to-port 3128 
(external public IP)


where eth0 is the LAN-facing interface.

My Squid config allows proxying from localhost and localnet:

http_access allow localhost
http_access allow localnet
http_access deny all

Therefore I think I have not set up my iptables quite right. Can anyone 
confirm if this is the right way to go about catching HTTP requests from 
localhost?


localhost is a special IP. It's processed on the 'lo' interface and does 
not pass through NAT; on a working NIC it never passes data out to the 
internet either.


The full correct iptables config for basic interception is listed here:
  http://wiki.squid-cache.org/ConfigExamples/Intercept/LinuxRedirect

If you mean you want to catch secondary data from the same machine as 
Squid and divert it into Squid again: that's not easy to do right, and you 
had best ask the experts over at netfilter for the correct details.


It's likely to involve L7 filters to detect Squid traffic in the exemption, 
or adding tcp_outgoing_tos and policy routing to have Squid mark its 
traffic differently for exemption, then catching the marks in a local 
interface loop.
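To make the tcp_outgoing_tos idea concrete, a hedged sketch (the TOS value is an arbitrary illustration, and the exact chain layout depends on your existing rules):

```
# squid.conf -- have Squid mark all of its own outgoing traffic:
tcp_outgoing_tos 0x20 all

# nat table -- skip marked packets before any REDIRECT rule, so
# Squid's own fetches are not diverted back into Squid:
-t nat -A OUTPUT -p tcp --dport 80 -m tos --tos 0x20 -j ACCEPT
-t nat -A OUTPUT -p tcp --dport 80 -j REDIRECT --to-port 3128
```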



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.2


Re: [squid-users] NTLM auth popup boxes && Solaris 8 tuning for upgrade into 2.7.4

2008-11-20 Thread Amos Jeffries

Henrik Nordstrom wrote:

On ons, 2008-11-19 at 19:39 +0100, [EMAIL PROTECTED] wrote:


auth_param ntlm ttl

do you advice using it because I do not find any reference on it on

squid configuration guide website.
you spoke about ttl parameter .. do you advice using it ??


Not sure who spoke about an auth_param ntlm ttl parameter, but there is
no such parameter.

The ntlm scheme only has three parameters

  program

  children

  keep_alive

where the first (program) specifies the helper to use, the second
(children) needs to be tuned to at least fit your load or there will be
issues with rejected access or sporadic authentication prompts, and the
third is a minor optimization.
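For reference, a minimal ntlm block using those three parameters; the Samba ntlm_auth helper path is an assumption, adjust it to your install:

```
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 30
auth_param ntlm keep_alive on
```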


I mentioned authenticate_ttl as a general possibility to be looked at.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.2


Re: [squid-users] DG and Squid 1 Machine

2008-11-20 Thread Amos Jeffries

░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░ wrote:

hi all
sorry for my cross posting but this is urgent :(
i have problem here

eth0 192.168.222.100 =>> goes to the LAN and acts as the clients' GW and
DNS (DG and Squid installed)
eth1 10.0.0.2 =>> goes to load balancing + DMZ server (public IP
forwarded / DMZ to this machine)

squid.conf :
http_port 2210 transparent

dansguardian.conf :
filterport = 2211
proxyip = 127.0.0.1
proxyport = 2210

rc.local
/sbin/iptables --table nat --append POSTROUTING --out-interface eth1
-j MASQUERADE
/sbin/iptables --append FORWARD --in-interface  eth1 -j ACCEPT
/sbin/iptables -t nat -A PREROUTING -i eth0 -p tcp -s
192.168.0.0/255.255.0.0 --dport 80 -j DNAT --to 192.168.222.100:2211
/sbin/iptables -t nat -A PREROUTING -p tcp -i eth1 -d 10.0.0.2 --dport
2210 -j DNAT --to-destination 192.168.222.100


output :
ERROR
The requested URL could not be retrieved

While trying to retrieve the URL: http://google.com/
The following error was encountered:
Access Denied.



what's wrong?


Did you remember these...

squid.conf:
  acl localnet src 192.168.0.0/16
  acl localhost src 127.0.0.1
  http_access allow localnet
  http_access allow localhost

Also check that your DG controls similarly accept all requests from the 
local network.



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.2


Re: [squid-users] disable-internal-dns not working on 2.6.18

2008-11-20 Thread Amos Jeffries

Joseph Jamieson wrote:

Hello,

I am trying to set up a Squid reverse proxy server in order to direct different 
web addresses to different servers.   The caching function is just an added 
bonus.

As I understand it, I need to use --disable-internal-dns build option to do 
this, and put the various host names in /etc/hosts.


No. Just set /etc/hosts. Squid loads it as a fixed set of records always 
preferred over remote lookups.




This is an Ubuntu box and I've downloaded all of the packages necessary to 
build squid, and it does build correctly.   I added the --disable-internal-dns 
option into debian/rules, built binary packages, and installed them.



Try "apt-get install squid". No building necessary.
Current Squid by default has all the necessary components to be a 
reverse-proxy.





Any ideas?   I'd love to get this up and running.  Squid 2.6's reverse proxy 
looks like it's going to be a lot easier to manage than older versions.



It is; unfortunately you seem to have come across some of the docs for 
obsolete Squid versions, which ruined your experience so far.


In general Squid does not need to perform any DNS to act as a reverse-proxy.

Install the Ubuntu squid release and take a read of this page for the 
configuration:

  http://wiki.squid-cache.org/SquidFaq/ReverseProxy
(particularly the part 'How Do I Set It Up')

NP: The demo config does not involve DNS. URL domain name in "dstdomain" 
ACL and an IP on the "cache_peer  80 0 ..." lines make it work 
without needing to check destination IP.
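The wiki example Amos points to boils down to this shape (the hostname and IP here are placeholders):

```
http_port 80 accel defaultsite=your.main.website
cache_peer 192.0.2.10 parent 80 0 no-query originserver name=myAccel

acl our_sites dstdomain your.main.website
http_access allow our_sites
cache_peer_access myAccel allow our_sites
cache_peer_access myAccel deny all
```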


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.2


[squid-users] squid rewrite feature ... help

2008-11-20 Thread chudy fernandez
#  TAG: rewrite
i've tried
rewrite %301:http://www.google-analytics.com/__utm.gif utm
doesn't work.
whats missing?


  


[squid-users] Squid not showing all pages correctly: solved by TCP tuning

2008-11-20 Thread Rudi Vankemmel
Hi all,

I am running squid V2.7 STABLE2 in a chroot jail, which was running fine
except for some pages that now and then did not show up correctly, or
not at all.
When squid was bypassed, the respective pages did show up correctly.
After quite some information gathering on the net and experimenting with
specific configuration options for squid (broken_posts, broken_vary_encoding,
relaxed_header_parser, persistent connection settings, ...), I still did
not find a working solution.

At that point I decided to have a look at the basics: what happens at
the HTTP level as well as the lower networking levels (TCP). The tcpdump
and wireshark tools are your friends at this point.
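A hedged capture recipe of the kind used here (the interface name is an assumption):

```shell
# Capture full-size port-80 packets on the client-facing interface
# for later inspection in wireshark:
tcpdump -i eth0 -s 0 -w squid-trace.pcap 'tcp port 80'
```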

I noticed two things. First, while sending, multiple zero-window-size
TCP segments were seen (look at the TCP information in wireshark):
...
[TCP Analysis Flags]
[This is a ZeroWindow segment]

after which the TCP window gets updated again. However, after that it
starts losing segments ( ... [A segment before this frame was lost] ...)
while duplicate acknowledgements are sent ( [This is a TCP
duplicate ack] ...).
The loss of segments happens especially on the upward link (i.e. from
my station to the web server).

Secondly, it goes really wrong when an HTTP POST is done:

.   HTTP POST /flashservices/gateway HTTP/1.0  (application/x-amf)

in order to trigger a Java application for presenting the actual
information. Our station sends out an ACK, after which the web server
sends info back to us: this TCP segment is never received ( ...[A
segment before this frame was lost]... ), meaning that the page is
displayed only partially or not at all.

Zero window sizes, TCP retransmits and duplicate acknowledgements
typically have to do with badly sized TCP windows and/or a wrong MTU
size at the Ethernet level. Such problems are typically solved by TCP
tuning. After some experimenting I found that the cause was a too-high
MTU size on the outgoing Ethernet interface.
The standard MTU size on my system was 1460 bytes. However, my
connection is an ADSL line using PPPoE as encapsulation.
Changing the MTU size to 1452 bytes (8 bytes of extra overhead for the
PPP session) on my ethernet interface solved the issue. I furthermore
did some TCP buffer size (receive/transmit) tuning to account for the
very different upload/download speeds of the ADSL line (512k/4.6Mbps).
After this optimisation, all pages showed correctly via the Squid proxy.
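The interface change described above can be sketched as follows (interface name assumed; persist the setting in your distribution's network config so it survives reboots):

```shell
# Lower the MTU on the outgoing interface to leave room for PPPoE
# encapsulation overhead:
ip link set dev eth0 mtu 1452
# Or, with the older tool:
ifconfig eth0 mtu 1452
```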

Conclusion: if you have pages that do not show up via Squid while they
do when you bypass Squid, start looking at what happens at the TCP level
and look into TCP tuning. Also optimise the MTU size on the outgoing
interface. It solved my problem. Several good TCP tuning docs and
articles are available in internet discussion groups.

Hope it is useful for you !
Rudi Vankemmel


[squid-users] Squid default log rotation period

2008-11-20 Thread kaustav_deybiswas

Hi,

I have recently installed Squid and I found that it is automatically
rotating its logs every week. I want to set up crontabs to rotate the logs
at the end of each month. How can I disable the default rotation to ensure
that the logs won't get rotated every week, but only at the end of each
month?

Thanks & Regards,
Kaustav
-- 
View this message in context: 
http://www.nabble.com/Squid-default-log-rotation-period-tp20603100p20603100.html
Sent from the Squid - Users mailing list archive at Nabble.com.



[squid-users] Very slow and high usage CPU on "Rebuilding storage"

2008-11-20 Thread Alexey Vlasov
Hi.

I've got a very large /proc/mounts on my server:
# wc -l /proc/mounts
178885 /proc/mounts
Hence rebuilding the storage takes a long time. Does anyone know how to
patch squid to make it stop doing this:

[pid  5875] write(5, "2008/11/20 18:07:11| Rebuilding "..., 73) = 73
[pid  5875] statfs("/var/cache/squid", {f_type="EXT2_SUPER_MAGIC",
f_bsize=4096, f_blocks=48062990, f_bfree=38347249, f_bavail=35905772,
f_files=12214272, f_ffree=1170, f_fsid={-2065445204, -1263411833},
f_namelen=255, f_frsize=4096}) = 0
[pid  5875] stat("/var/cache/squid", {st_mode=S_IFDIR|0750,
st_size=4096, ...}) = 0
[pid  5875] open("/proc/mounts", O_RDONLY) = 13
[pid  5875] fstat(13, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0
[pid  5875] mmap(NULL, 4096, PROT_READ|PROT_WRITE,
MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fd1502c3000
[pid  5875] read(13, "rootfs / rootfs rw 0 0\n/dev/root"..., 1024) =
1024
[pid  5875] read(13, "//bin ext3 rw,noatime,"..., 1024) = 1024
[pid  5875] read(13, "v_8b6088f6/dev/pts d"..., 1024) = 1024
[pid  5875] read(13, "sda5 /home/aaddd"..., 1024) = 1024
[pid  5875] read(13, ",data=ordered 0 0\n/dev/sda5 /hom"..., 1024) =
1024
[pid  5875] read(13, "=ordered 0 0\n/dev/sda5 /home/vvv"..., 1024) =
1024
[pid  5875] read(13, "3c3ad98f/tmp t"..., 1024) = 1024
[pid  5875] read(13, "/s0acf91d5/lib"..., 1024) = 1024
[pid  5875] read(13, "errors=continue,data=ordered 0 0"..., 1024) = 1024
[pid  5875] read(13, "rw,noatime,errors=continue,data="..., 1024) = 1024
[pid  5875] read(13, "www/7f8d61"..., 1024) = 1024
... my ~200k mounts ...

-- 
BRGDS. Alexey Vlasov.


Re: [squid-users] Squid default log rotation period

2008-11-20 Thread Henrik Nordstrom
On tor, 2008-11-20 at 07:05 -0800, kaustav_deybiswas wrote:

> I have recently installed Squid and I found that Squid is automatically
> rotating its logs every week.

No it doesn't. But maybe you have a logrotate script doing the log
rotation?
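If that's the case, on most packaged installs the policy lives in /etc/logrotate.d/squid; a monthly variant might look like this (paths and options are assumptions, check your package's original file):

```
/var/log/squid/*.log {
    monthly
    rotate 12
    compress
    missingok
    notifempty
    sharedscripts
    postrotate
        /usr/sbin/squid -k rotate
    endscript
}
```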

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


RE: [squid-users] disable-internal-dns not working on 2.6.18

2008-11-20 Thread Joseph Jamieson
Hi, thanks for the response.

Indeed, there's apparently a *lot* of old, bad info out there.

I'll follow the doc you linked and hopefully I'll be good to go!

Thanks.


-Original Message-
From: Amos Jeffries [mailto:[EMAIL PROTECTED]
Sent: Thursday, November 20, 2008 7:37 AM
To: Joseph Jamieson
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] disable-internal-dns not working on 2.6.18

Joseph Jamieson wrote:
> Hello,
>
> I am trying to set up a Squid reverse proxy server in order to direct 
> different web addresses to different servers.   The caching function is just 
> an added bonus.
>
> As I understand it, I need to use --disable-internal-dns build option to do 
> this, and put the various host names in /etc/hosts.

No. Just set /etc/hosts. Squid loads it as a fixed set of records always
preferred over remote lookups.

>
> This is an Ubuntu box and I've downloaded all of the packages necessary to 
> build squid, and it does build correctly.   I added the 
> --disable-internal-dns option into debian/rules, built binary packages, and 
> installed them.
>

Try "apt-get install squid". No building necessary.
Current Squid by default has all the necessary components to be a
reverse-proxy.


>
> Any ideas?   I'd love to get this up and running.  Squid 2.6's reverse proxy 
> looks like it's going to be a lot easier to manage than older versions.
>

It is, unfortunately you seem to have come across some of the docs for
obsolete Squid versions that ruined your experience so far.

In general Squid does not need to perform any DNS to act as a reverse-proxy.

Install the Ubuntu squid release and take a read of this page for the
configuration:
   http://wiki.squid-cache.org/SquidFaq/ReverseProxy
(particularly the part 'How Do I Set It Up')

NP: The demo config does not involve DNS. URL domain name in "dstdomain"
ACL and an IP on the "cache_peer  80 0 ..." lines make it work
without needing to check destination IP.

Amos
--
Please be using
   Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
   Current Beta Squid 3.1.0.2






RE: [squid-users] disable-internal-dns not working on 2.6.18

2008-11-20 Thread Joseph Jamieson
Hello again.

I followed that document to the letter here, and squid doesn't want to allow 
any traffic to my cache sites:

-
ERROR
The requested URL could not be retrieved

While trying to retrieve the URL: http://my.site.com/

The following error was encountered:

* Access Denied.

  Access control configuration prevents your request from being allowed at 
this time. Please contact your service provider if you feel this is incorrect.

Your cache administrator is webmaster.
Generated Thu, 20 Nov 2008 13:27:20 GMT by proxy.site.com (squid/2.6.STABLE18)
---

This is what I've added to the squid.conf:

http_port 80 accel defaultsite=my.site.com vhost

cache_peer 192.168.5.15 parent 80 0 no-query originserver name=moon
acl sites_moon dstdomain my.site.com
cache_peer_access moon allow sites_moon

cache_peer 192.168.5.12 parent 80 0 no-query originserver name=triton
acl sites_triton dstdomain terminal.site.com
cache_peer_access triton allow sites_triton

cache_peer 192.168.5.14 parent 80 0 no-query originserver name=titan
acl sites_titan dstdomain files.site.com
cache_peer_access titan allow sites_titan


That's all the guide told me to do, so I'm not sure what to do next.   Gosh, I 
wish this wasn't so difficult.

Joe


-Original Message-
From: Joseph Jamieson [mailto:[EMAIL PROTECTED]
Sent: Thursday, November 20, 2008 11:51 AM
To: 'Amos Jeffries'
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] disable-internal-dns not working on 2.6.18

Hi, thanks for the response.

Indeed, there's apparently a *lot* of old, bad info out there.

I'll follow the doc you linked and hopefully I'll be good to go!

Thanks.


-Original Message-
From: Amos Jeffries [mailto:[EMAIL PROTECTED]
Sent: Thursday, November 20, 2008 7:37 AM
To: Joseph Jamieson
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] disable-internal-dns not working on 2.6.18

Joseph Jamieson wrote:
> Hello,
>
> I am trying to set up a Squid reverse proxy server in order to direct 
> different web addresses to different servers.   The caching function is just 
> an added bonus.
>
> As I understand it, I need to use --disable-internal-dns build option to do 
> this, and put the various host names in /etc/hosts.

No. Just set /etc/hosts. Squid loads it as a fixed set of records always
preferred over remote lookups.

>
> This is an Ubuntu box and I've downloaded all of the packages necessary to 
> build squid, and it does build correctly.   I added the 
> --disable-internal-dns option into debian/rules, built binary packages, and 
> installed them.
>

Try "apt-get install squid". No building necessary.
Current Squid by default has all the necessary components to be a
reverse-proxy.


>
> Any ideas?   I'd love to get this up and running.  Squid 2.6's reverse proxy 
> looks like it's going to be a lot easier to manage than older versions.
>

It is, unfortunately you seem to have come across some of the docs for
obsolete Squid versions that ruined your experience so far.

In general Squid does not need to perform any DNS to act as a reverse-proxy.

Install the Ubuntu squid release and take a read of this page for the
configuration:
   http://wiki.squid-cache.org/SquidFaq/ReverseProxy
(particularly the part 'How Do I Set It Up')

NP: The demo config does not involve DNS. URL domain name in "dstdomain"
ACL and an IP on the "cache_peer  80 0 ..." lines make it work
without needing to check destination IP.

Amos
--
Please be using
   Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
   Current Beta Squid 3.1.0.2








Re: [squid-users] Very slow and high usage CPU on "Rebuilding storage"

2008-11-20 Thread Alexey Vlasov
--- src/store_dir.c.orig    2008-11-20 20:34:13.0 +0300
+++ src/store_dir.c 2008-11-20 20:33:22.0 +0300
 storeDirGetBlkSize(const char *path, int *blksize)
 {
 #if HAVE_STATVFS
+/*
 struct statvfs sfs;
 if (statvfs(path, &sfs)) {
debug(50, 1) ("%s: %s\n", path, xstrerror());
@@ -525,7 +526,10 @@
return 1;
 }
 *blksize = (int) sfs.f_frsize;
+*/
+*blksize = 4096;
 #else
+/*
 struct statfs sfs;
 if (statfs(path, &sfs)) {
debug(50, 1) ("%s: %s\n", path, xstrerror());
@@ -533,6 +537,8 @@
return 1;
 }
 *blksize = (int) sfs.f_bsize;
+*/
+*blksize = 4096;
 #endif
 /*
  * Sanity check; make sure we have a meaningful value.


[squid-users] Change squid binary in flight

2008-11-20 Thread Lluis Ribes
Dear Squid Folks,

I have a Squid 3.0.STABLE1 running on a server. This version was installed
with the apt-get Debian package utility. It has worked fine until now, when
I started having file descriptor problems.

I saw that my installation has 1024 files as max_filedescriptor, which I
think is not much. I want to change it, but the max_filedescriptor parameter
in squid.conf doesn't work (I receive an error message about an unknown
parameter).

So I think the only way is to recompile with the file-descriptor flag:

./configure --with-filedescriptors=8192 --prefix=/opt/squid --with-openssl
--enable-ssl --disable-internal-dns --enable-async-io
--enable-storeio=ufs,diskd

OK, I compiled Squid 3.0.STABLE10. So my question is:

Can I directly replace the Debian binary that is running nowadays with the
binary generated by my compilation process (located in $SQUID_SOURCE/src/squid)?
I have to avoid loss of service for my web site.
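For what it's worth, a cautious swap usually looks something like this (paths are assumptions, and a binary built with a different --prefix may expect different config paths; plain ufs/diskd Squid cannot hand over its listening socket, so expect a brief restart gap):

```shell
# Back up the running Debian binary, then stage the new build:
cp /usr/sbin/squid /usr/sbin/squid.debian-backup
cp $SQUID_SOURCE/src/squid /usr/sbin/squid.new

squid -k shutdown                # let the old instance finish cleanly
mv /usr/sbin/squid.new /usr/sbin/squid
squid -f /etc/squid/squid.conf   # start the new binary
squid -k check                   # verify it is running and answering
```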

Thanks a lot!

Lluís




LLUÍS RIBES
Laviniainteractiva
www.laviniainteractiva.com
T (34) 93 272 34 10
Pujades 81
Barcelona 08005

skype: lluisribesportillo







[squid-users] is possible to configure squid to use a socks5 server?

2008-11-20 Thread SA Alfonso Baqueiro
Is it possible to configure squid to use a SOCKS5 server to connect to
the Internet?

How do I do this?

Any help appreciated.


Re: [squid-users] Squid default log rotation period

2008-11-20 Thread kaustav_deybiswas



Henrik Nordstrom-5 wrote:
> 
> On tor, 2008-11-20 at 07:05 -0800, kaustav_deybiswas wrote:
> 
>> I have recently installed Squid and I found that Squid is automatically
>> rotating its logs every week.
> 
> No it doesn't. But maybe you have a logrotate script doing the log
> rotation?
> 
> Regards
> Henrik
> 
>  
> 

Henrik,

I haven't written any scripts, for sure. My crontab is empty, but squid still
rotates its logs every 7 days. I haven't even configured anything relating
to log rotation in squid.conf.

I am using squid-2.6.STABLE16-4.fc7. How do I figure out what is going
wrong?
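On Fedora the squid RPM typically ships a weekly logrotate policy that runs even with an empty user crontab; a couple of hedged commands to confirm:

```shell
# The package-provided rotation policy, if present:
cat /etc/logrotate.d/squid
# logrotate itself is normally driven from the system cron.daily:
ls /etc/cron.daily/logrotate
```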

Thanks & Regards,
Kaustav
-- 
View this message in context: 
http://www.nabble.com/Squid-default-log-rotation-period-tp20603100p20608012.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] disable-internal-dns not working on 2.6.18

2008-11-20 Thread Chris Robertson

Joseph Jamieson wrote:

Hello again.

I followed that document to the letter here, and squid doesn't want to allow 
any traffic to my cache sites:

-
ERROR
The requested URL could not be retrieved

While trying to retrieve the URL: http://my.site.com/

The following error was encountered:

* Access Denied.

  Access control configuration prevents your request from being allowed at 
this time. Please contact your service provider if you feel this is incorrect.

Your cache administrator is webmaster.
Generated Thu, 20 Nov 2008 13:27:20 GMT by proxy.site.com (squid/2.6.STABLE18)
---

This is what I've added to the squid.conf:

http_port 80 accel defaultsite=my.site.com vhost

cache_peer 192.168.5.15 parent 80 0 no-query originserver name=moon
acl sites_moon dstdomain my.site.com
cache_peer_access moon allow sites_moon

cache_peer 192.168.5.12 parent 80 0 no-query originserver name=triton
acl sites_triton dstdomain terminal.site.com
cache_peer_access triton allow sites_triton

cache_peer 192.168.5.14 parent 80 0 no-query originserver name=titan
acl sites_titan dstdomain files.site.com
cache_peer_access titan allow sites_titan


That's all the guide told me to do, so I'm not sure what to do next.   Gosh, I 
wish this wasn't so difficult.

Joe
  


You missed a bit:

And finally you need to set up access controls to allow access to your 
site without pushing other web requests to your web server.

acl our_sites dstdomain your.main.website
***http_access allow our_sites***
cache_peer_access myAccel allow our_sites
cache_peer_access myAccel deny all


Chris


Re: [squid-users] squid rewrite feature ... help

2008-11-20 Thread Chris Robertson

chudy fernandez wrote:

#  TAG: rewrite
I've tried
rewrite %301:http://www.google-analytics.com/__utm.gif utm
but it doesn't work.
What's missing?
  


Wow.  That's a vague question.

What version of Squid are you using (apparently this directive is only 
available in the Squid 2 development release)?

How is the ACL utm defined (preferably in context)?
What is your testing methodology?
Finally, what leads you to believe it "doesn't work"?

Chris


Re: [squid-users] Large ACLs and TCP_OUTGOING_ADDRESS

2008-11-20 Thread Nyamul Hassan

Where could I find the "theoretical limits" published by Adrian for 2.7?

Regards
HASSAN



- Original Message - 
From: "Amos Jeffries" <[EMAIL PROTECTED]>

To: "Nyamul Hassan" <[EMAIL PROTECTED]>
Cc: "Squid Users" 
Sent: Tuesday, November 18, 2008 05:31
Subject: Re: [squid-users] Large ACLs and TCP_OUTGOING_ADDRESS



Thank you very much.
Those stats look much better than the low-peak ones, though still not very 
close to the theoretical limits Adrian published for 2.7.


Some very marginal increases may be gained from re-ordering your 
http_access lines that check for WindowsUpdate. Doing the src check before 
the dstdomain check (left-to-right) will save a few cycles per request.

so:  http_access Allow windowsupdate ispros
becomes: http_access Allow ispros windowsupdate

cache_store_log can be set to 'none' to avoid spending time logging debug 
info you generally don't need.


You may want to experiment with the collapsed_forwarding feature. It's 
designed to reduce server-side network lags, so it should increase the 
internal speeds, but it depends on higher hit ratios for best effect, which 
at >40% you have.


That's all I can see right now that might provide any improvement at all.

Amos

Nyamul Hassan wrote:
Thank you Amos for your valuable input on this.  Please find attached a 
snapshot of peak hour traffic.


I'm also attaching the following graphs:

1.  Cache Hit Rate
2.  Client Request Rate
3. CPU IOWait
4.  Service Timers

I'm also attaching a copy of my cache configuration.  Looking at it, can 
you suggest me if I can get any better performance than it is?  I think 
the IOWait is way too high, and I am using regular commodity SATA HDDs.


Any input would be greatly appreciated.

Regards
HASSAN





- Original Message - From: "Amos Jeffries" <[EMAIL PROTECTED]>
To: "Nyamul Hassan" <[EMAIL PROTECTED]>
Cc: "Squid Users" 
Sent: Monday, November 17, 2008 07:01
Subject: Re: [squid-users] Large ACLs and TCP_OUTGOING_ADDRESS



Hi,

I run squid in an ISP scenario.  We have got two identically configured
squid caches being load balanced among 4,000 users over a 50 Mbps link.
The
system runs quite well, although not without the occasional hiccups. But
there is a complaint from users about not being able to access some websites
because of the same external IP.  For this, we configured the squid.conf to
have
ACLs for different user blocks of /24 and have them mapped through
different
external IPs on each of these boxes.

However, not all /24 blocks have the same number of users, and I also 
have
lots of real IPs still lying unused.  I thought about creating 
different
ACLs for every 5 or 8 users, and then map them to different external 
IPs.

But, having them distributed in 8 IPs in each group would mean at least
500
separate ACLs and their corresponding TCP_OUTGOING_ADDRESS directives.

My question is, will this affect the performance of squid?  Can squid
handle
this?


Depends on the ACL type. Squid should be able to handle many easily. Of
the ACL types you need: src is the fastest, next best is dstdomain, then dst.
So for a marginal boost when combining on one line, put them in that order.

Just look for shortcuts as you go.
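As a sketch of the mapping described in the question (the addresses and ACL
names below are illustrative placeholders, not taken from the thread):

```
# squid.conf sketch: map client /24 blocks to distinct outgoing addresses.
acl block_a src 10.1.1.0/24
acl block_b src 10.1.2.0/24

tcp_outgoing_address 203.0.113.10 block_a
tcp_outgoing_address 203.0.113.11 block_b
```

Since src ACLs are the cheapest to evaluate, several hundred such pairs
should remain manageable.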



My servers are each running on Core 2 Duo 2.33 GHz, 8 GB of RAM, 5 HDDs
(1x80GB IDE for OS, 4x160GB SATA for cache), total 256GB Cache Store 
(64GB

on each HDD).  One of the server's stats are (taken at a very low user
count
time):


Thank you. We are trying to collect rough capacity info for Squid 
whenever

the opportunity comes up. Are you able to provide such stats around peak
load for our wiki?
The info we collect can be seen at
http://wiki.squid-cache.org/KnowledgeBase/Benchmarks

Amos




Cache Manager menu

Squid Object Cache: Version 2.7.STABLE4

Connection information for squid:
Number of clients accessing cache:  2133
Number of HTTP requests received:  6213380
Number of ICP messages received:  1441542
Number of ICP messages sent:  1441550
Number of queued ICP replies:  0
Request failure ratio:  0.00
Average HTTP requests per minute since start:  11488.3
Average ICP messages per minute since start:  5330.7
Select loop called: 78705022 times, 0.412 ms avg
Cache information for squid:
Request Hit Ratios:  5min: 41.7%, 60min: 43.8%
Byte Hit Ratios:  5min: 17.5%, 60min: 16.9%
Request Memory Hit Ratios:  5min: 16.2%, 60min: 14.4%
Request Disk Hit Ratios:  5min: 44.2%, 60min: 43.6%
Storage Swap size:  241613712 KB
Storage Mem size:  4194392 KB
Mean Object Size:  35.25 KB
Requests given to unlinkd:  0
Median Service Times (seconds):  5 min   60 min
HTTP Requests (All):   0.55240  0.55240
Cache Misses:          0.72387  0.68577
Cache Hits:            0.02899  0.02451
Near Hits:             0.64968  0.64968
Not-Modified Replies:  0.0      0.0
DNS Lookups:           0.0      0.0
ICP Queries:           0.00033  0.00035
Resource usage for squid

Re: [squid-users] Very slow and high usage CPU on "Rebuilding storage"

2008-11-20 Thread Amos Jeffries
> --- src/store_dir.c.orig2008-11-20 20:34:13.0 +0300
> +++ src/store_dir.c 2008-11-20 20:33:22.0 +0300

Please ask code specific questions and discussion in squid-dev mailing list.

This patch is specific to your kernel build. Others may not be able to use
the same block size, so the patch won't port.

Is the problem that squid does this regularly, or just that it does it at
all?

The former may be a bug we can fix, the latter is not.

Amos



Re: [squid-users] Squid default log rotation period

2008-11-20 Thread Amos Jeffries
>
>
>
> Henrik Nordstrom-5 wrote:
>>
>> On tor, 2008-11-20 at 07:05 -0800, kaustav_deybiswas wrote:
>>
>>> I have recently installed Squid and I found that Squid is automatically
>>> rotating its logs every week.
>>
>> No it doesn't. But maybe you have a logrotate script doing the log
>> rotation?
>>
>> Regards
>> Henrik
>>
>>
>>
>
> Henrik,
>
> I havent written any scripts, for sure. My crontab is empty, but squid
> still
> rotates logs after every 7 days. I havent even configured anything
> relating
> to log rotation in squid.conf.
>
> I am using squid-2.6.STABLE16-4.fc7. How do I figure out what is going
> wrong?
>

As Henrik said, Squid needs to be manually set to do any rotation.

Pre-packaged bundles of squid often come with all the control scripts that
do this type of thing.
Check for /etc/logrotate.d/squid or crontab entries calling 'squid -k
rotate'.
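
For reference, such a packaged config often looks roughly like this (a
sketch; paths, schedule and options vary by distro):

```
# /etc/logrotate.d/squid (illustrative)
/var/log/squid/*.log {
    weekly
    rotate 5
    compress
    missingok
    postrotate
        /usr/sbin/squid -k rotate
    endscript
}
```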

Amos


> Thanks & Regards,
> Kaustav
> --
> View this message in context:
> http://www.nabble.com/Squid-default-log-rotation-period-tp20603100p20608012.html
> Sent from the Squid - Users mailing list archive at Nabble.com.
>
>




Re: [squid-users] Squid default log rotation period

2008-11-20 Thread kaustav_deybiswas

Thanks a lot!

My crontab was empty, as I said, but I found /etc/logrotate.d/squid containing
the rotation scripts. I will modify them to suit my needs.

Thanks again,
Kaustav


Amos Jeffries-2 wrote:
> 
>>
>>
>>
>> Henrik Nordstrom-5 wrote:
>>>
>>> On tor, 2008-11-20 at 07:05 -0800, kaustav_deybiswas wrote:
>>>
 I have recently installed Squid and I found that Squid is automatically
 rotating its logs every week.
>>>
>>> No it doesn't. But maybe you have a logrotate script doing the log
>>> rotation?
>>>
>>> Regards
>>> Henrik
>>>
>>>
>>>
>>
>> Henrik,
>>
>> I havent written any scripts, for sure. My crontab is empty, but squid
>> still
>> rotates logs after every 7 days. I havent even configured anything
>> relating
>> to log rotation in squid.conf.
>>
>> I am using squid-2.6.STABLE16-4.fc7. How do I figure out what is going
>> wrong?
>>
> 
> As Henrik said, Squid needs to be manually set to do any rotation.
> 
> Pre-packaged bundles of squid often come with all the control scripts that
> do this type of thing.
> Check for /etc/logrotate.d/squid or crontab entries calling 'squid -k
> rotate'.
> 
> Amos
> 
> 
>> Thanks & Regards,
>> Kaustav
>> --
>> View this message in context:
>> http://www.nabble.com/Squid-default-log-rotation-period-tp20603100p20608012.html
>> Sent from the Squid - Users mailing list archive at Nabble.com.
>>
>>
> 
> 
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Squid-default-log-rotation-period-tp20603100p20610183.html
Sent from the Squid - Users mailing list archive at Nabble.com.



RE: [squid-users] disable-internal-dns not working on 2.6.18

2008-11-20 Thread Joseph Jamieson
Hi Chris,

I actually DID put in the http_access line in the config file, but I entered 
it wrong.   Oops... but it's working now!

This thing is way too cool.   I wish squid were a LITTLE easier to configure 
but you can do friggin' anything with this product.   I can't stress enough how 
cool I think squid is.   I've been using it for years in all different ways and 
although I run into speed bumps configuring things sometimes, I'm always happy 
with the result.

Go Squid! Thanks for the help, guys.

-Joe


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Thursday, November 20, 2008 2:15 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] disable-internal-dns not working on 2.6.18

Joseph Jamieson wrote:
> Hello again.
>
> I followed that document to the letter here, and squid doesn't want to allow 
> any traffic to my cache sites:
>
> -
> ERROR
> The requested URL could not be retrieved
>
> While trying to retrieve the URL: http://my.site.com/
>
> The following error was encountered:
>
> * Access Denied.
>
>   Access control configuration prevents your request from being allowed 
> at this time. Please contact your service provider if you feel this is 
> incorrect.
>
> Your cache administrator is webmaster.
> Generated Thu, 20 Nov 2008 13:27:20 GMT by proxy.site.com (squid/2.6.STABLE18)
> ---
>
> This is what I've added to the squid.conf:
>
> http_port 80 accel defaultsite=my.site.com vhost
>
> cache_peer 192.168.5.15 parent 80 0 no-query originserver name=moon
> acl sites_moon dstdomain my.site.com
> cache_peer_access moon allow sites_moon
>
> cache_peer 192.168.5.12 parent 80 0 no-query originserver name=triton
> acl sites_triton dstdomain terminal.site.com
> cache_peer_access triton allow sites_triton
>
> cache_peer 192.168.5.14 parent 80 0 no-query originserver name=titan
> acl sites_titan dstdomain files.site.com
> cache_peer_access titan allow sites_titan
>
>
> That's all the guide told me to do, so I'm not sure what to do next.   Gosh, 
> I wish this wasn't so difficult.
>
> Joe
>

You missed a bit:

> And finally you need to set up access controls to allow access to your
> site without pushing other web requests to your web server.
> acl our_sites dstdomain your.main.website
> ***http_access allow our_sites***
> cache_peer_access myAccel allow our_sites
> cache_peer_access myAccel deny all

Chris






Re: [squid-users] squid rewrite feature ... help

2008-11-20 Thread Amos Jeffries
> chudy fernandez wrote:
>> #  TAG: rewrite
>> i've tried
>> rewrite %301:http://www.google-analytics.com/__utm.gif utm
>> doesn't work.
>> whats missing?
>>
>
> Wow.  That's a vague question.
>
> What version of Squid are you using (apparently this directive is only
> available in the Squid 2 development release)?
> How is the ACL utm defined (preferably in context)?
> What is your testing methodology?
> Finally, what leads you to believe it "doesn't work"?
>
> Chris
>

http://www.squid-cache.org/Doc/config/rewrite/
And yes, it's only available in squid-2.HEAD (2.8 alpha) code.

The % only refers to the sub-codes _inside_ the URL for replacement.

As an example.

  rewrite 301:http://%rh/__utm.gif utm

redirects requests matching the utm ACL to __utm.gif at the same domain the
client requested. So many domains can match this and each be individually
redirected to its own __utm.gif file.
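
A fuller sketch including an ACL definition the rule could reference (the
url_regex pattern here is an assumption, not from the thread):

```
# squid.conf sketch for squid-2.HEAD
acl utm url_regex ^http://www\.google-analytics\.com/__utm\.gif
rewrite 301:http://%rh/__utm.gif utm
```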

Amos




Re: [squid-users] is possible to configure squid to use a socks5 server?

2008-11-20 Thread Amos Jeffries
> Is it possible to configure squid to use a SOCKS5 server to connect to
> the Internet?
>
> how do I do this?
>
> any help apreciated.
>

Not possible to configure. There are apparently patches around somewhere
that replace the low-level code in squid to use SOCKS4 instead of native
TCP. I don't know about SOCKS5.

If anyone wants to submit a patch which allows Squid-3.HEAD to use SOCKS
we are favorable towards merging it as a build-time option.

Amos




RE: [squid-users] disable-internal-dns not working on 2.6.18

2008-11-20 Thread Amos Jeffries
> Hi Chris,
>
> I actually DID put in the http_access line in the config file but I
> entered it in wrong.   Oops..  but- It's working now!
>
> This thing is way too cool.   I wish squid were a LITTLE easier to
> configure but you can do friggin' anything with this product.

That's coming 'real soon now'; there are a few of us working on config
upgrades. We've made great strides in Squid-3.1 and later Squid-2, but
still have a way to go.
If you have any specific hangups or ideas drop a message here or squid-dev
and I'll add it to the todo list.

>   I can't
> stress enough how cool I think squid is.   I've been using it for years in
> all different ways and although I run into speed bumps configuring things
> sometimes, I'm always happy with the result.
>
> Go Squid!Thanks for the help guys.
>
> -Joe
>

 :) Thank you. It's always great to see someone happy with our work.

Amos

>
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Sent: Thursday, November 20, 2008 2:15 PM
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] disable-internal-dns not working on 2.6.18
>
> Joseph Jamieson wrote:
>> Hello again.
>>
>> I followed that document to the letter here, and squid doesn't want to
>> allow any traffic to my cache sites:
>>
>> -
>> ERROR
>> The requested URL could not be retrieved
>>
>> While trying to retrieve the URL: http://my.site.com/
>>
>> The following error was encountered:
>>
>> * Access Denied.
>>
>>   Access control configuration prevents your request from being
>> allowed at this time. Please contact your service provider if you
>> feel this is incorrect.
>>
>> Your cache administrator is webmaster.
>> Generated Thu, 20 Nov 2008 13:27:20 GMT by proxy.site.com
>> (squid/2.6.STABLE18)
>> ---
>>
>> This is what I've added to the squid.conf:
>>
>> http_port 80 accel defaultsite=my.site.com vhost
>>
>> cache_peer 192.168.5.15 parent 80 0 no-query originserver name=moon
>> acl sites_moon dstdomain my.site.com
>> cache_peer_access moon allow sites_moon
>>
>> cache_peer 192.168.5.12 parent 80 0 no-query originserver name=triton
>> acl sites_triton dstdomain terminal.site.com
>> cache_peer_access triton allow sites_triton
>>
>> cache_peer 192.168.5.14 parent 80 0 no-query originserver name=titan
>> acl sites_titan dstdomain files.site.com
>> cache_peer_access titan allow sites_titan
>>
>>
>> That's all the guide told me to do, so I'm not sure what to do next.
>> Gosh, I wish this wasn't so difficult.
>>
>> Joe
>>
>
> You missed a bit:
>
>> And finally you need to set up access controls to allow access to your
>> site without pushing other web requests to your web server.
>> acl our_sites dstdomain your.main.website
>> ***http_access allow our_sites***
>> cache_peer_access myAccel allow our_sites
>> cache_peer_access myAccel deny all
>
> Chris
>
>
>
>
>




Re: [squid-users] Change squid binary in flight

2008-11-20 Thread Kinkie
On Thu, Nov 20, 2008 at 7:10 PM, Lluis Ribes <[EMAIL PROTECTED]> wrote:
> Dear Squid Folks,
>
> I have a Squid 3.0Stable1 running in a server. This version was installed
> with apt-get Debian package utility. So, it has worked fine until now, where
> I have file descriptor problems.
>
> I saw that my installation has 1024 files as max_filedescriptor, I think not
> much. I want to change it, but the parameter max_filedescriptor in
> squid.conf doesn't work (I receive an error message about unknown
> parameter).

Are you sure? It may be a runtime limitation; please check that
there's a 'ulimit -n 8192' line in the squid startup script (replace
8192 with your desired limit).
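
To check what limit a shell or the running process actually has, something
like this can help (a sketch; the pid-file path is an assumption and varies
by distro):

```shell
# Show the current shell's open-file limit
ulimit -n

# On Linux, inspect the limit a running Squid actually inherited
# (pid-file path is an assumption)
if [ -r /var/run/squid.pid ]; then
    grep 'Max open files' "/proc/$(cat /var/run/squid.pid)/limits"
fi

# Raise the limit for processes started from this shell,
# e.g. near the top of the Squid init script:
ulimit -n 8192 2>/dev/null || true
```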

> So, I think that the only way is to recompile with the file descriptor flag:
>
> ./configure --with-filedescriptors=8192 --prefix=/opt/squid --with-openssl
> --enable-ssl --disable-internal-dns --enable-async-io
> --enable-storeio=ufs,diskd
>
> Ok, I compiled Squid 3.0Stable10. So my question is:
>
> Could I replace directly the binary that it was generated by my compilation
> process and located in $SQUID_SOURCE/src/squid with my debian binary version
> that it's running nowadays? I have to avoid lost of service of my web.

The Debian package may have different configure options; if you miss
some configure option, your configuration file may be incompatible
with your new binary. You may want to run 'squid -v' to check that your
new configure options are compatible with the previous ones.

You may also want to keep your old binary around, to be able to roll
back in case of problems.


-- 
/kinkie


Re: [squid-users] is possible to configure squid to use a socks5 server?

2008-11-20 Thread SA Alfonso Baqueiro
> Not possible to configure. There are apparently patches around somewhere
> to replace the low-level code in squid to use SOCKS4 instead of native
> TCP.
> I don't know about SOCKS5.
>
> If anyone wants to submit a patch which allows Squid-3.HEAD to use SOCKS
> we are favorable towards merging it as a build-time option.

Ok, no native support. Thanks for the answer.

Is it possible to use Squid with tsocks?


Re: [squid-users] Large ACLs and TCP_OUTGOING_ADDRESS

2008-11-20 Thread Amos Jeffries
> Where could I find the "theoretical limits" published by Adrian for 2.7?
>
> Regards
> HASSAN
>

Somewhere in squid-dev over late 2007 to early 2008 he pushed out a graph
comparing cacheboy, Squid-2.7 and Squid-2.HEAD.

All I can find right now is this thread:
  http://www.squid-cache.org/mail-archive/squid-dev/200701/0077.html
  http://www.squid-cache.org/mail-archive/squid-dev/200701/0083.html

And some old graphs on his cacheboy site:
  http://www.cacheboy.net/polygraph/cacheboy_1.4.pre3_test2/one-page.html
It looks like he has squeezed out another 50 RPS since the early reports.

One indicates squid is capable of ~500 RPS on regular home hardware. And
the other that a very old version was capable of >3500 RPS on high-end
hardware in 2006.

Amos

>
>
> - Original Message -
> From: "Amos Jeffries" <[EMAIL PROTECTED]>
> To: "Nyamul Hassan" <[EMAIL PROTECTED]>
> Cc: "Squid Users" 
> Sent: Tuesday, November 18, 2008 05:31
> Subject: Re: [squid-users] Large ACLs and TCP_OUTGOING_ADDRESS
>
>
>> Thank you very much.
>> Those stats look much better than the low peak ones. Though still not
>> Very
>> close to the theoretical limits Adrian published for 2.7.
>>
>> Some very marginal increases may be gained from re-ordering your
>> http_access lines that check for WindowsUpdate. Doing the src check
>> before
>> the dstdomain check (left-to-right) will save a few cycles per request.
>> so:  http_access Allow windowsupdate ispros
>> becomes: http_access Allow ispros windowsupdate
>>
>> cache_store_log can be set to 'none' for less time logging debug info
>> you
>> generally don't need.
>>
>> You may want to experiment with the collapsed_forwarding feature. It's
>> designed to reduce server-side network lags so should increase the
>> internal speeds but depends on higher hit ratios for best effect, which
>> at
>>  >40% you have.
>>
>> That's all I can see right now that might provide any improvement at
>> all.
>>
>> Amos
>>
>> Nyamul Hassan wrote:
>>> Thank you Amos for your valuable input on this.  Please find attached a
>>> snapshot of peak hour traffic.
>>>
>>> I'm also attaching the following graphs:
>>>
>>> 1.  Cache Hit Rate
>>> 2.  Client Request Rate
>>> 3. CPU IOWait
>>> 4.  Service Timers
>>>
>>> I'm also attaching a copy of my cache configuration.  Looking at it,
>>> can
>>> you suggest me if I can get any better performance than it is?  I think
>>> the IOWait is way too high, and I am using regular commodity SATA HDDs.
>>>
>>> Any input would be greatly appreciated.
>>>
>>> Regards
>>> HASSAN
>>>
>>>
>>>
>>>
>>>
>>> - Original Message - From: "Amos Jeffries"
>>> <[EMAIL PROTECTED]>
>>> To: "Nyamul Hassan" <[EMAIL PROTECTED]>
>>> Cc: "Squid Users" 
>>> Sent: Monday, November 17, 2008 07:01
>>> Subject: Re: [squid-users] Large ACLs and TCP_OUTGOING_ADDRESS
>>>
>>>
> Hi,
>
> I run squid in an ISP scenario.  We have got two identically
> configured
> squid caches being load balanced among 4,000 users over a 50 Mbps
> link.
> The
> system runs quite well, although not without the occassional hiccups.
> But,
> there is a complain from users about not being able to access some
> websites
> because of same external IP.  For this, we configured the squid.conf
> to
> have
> ACLs for different user blocks of /24 and have them mapped through
> different
> external IPs on each of these boxes.
>
> However, not all /24 blocks have the same number of users, and I also
> have
> lots of real IPs still lying unused.  I thought about creating
> different
> ACLs for every 5 or 8 users, and then map them to different external
> IPs.
> But, having them distributed in 8 IPs in each group would mean at
> least
> 500
> separate ACLs and their corresponding TCP_OUTGOING_ADDRESS
> directives.
>
> My question is, will this affect the performance of squid?  Can squid
> handle
> this?

 Depends on the ACL type. Squid should be able to handle many easily. Of
 the ACL types you need: src is the fastest, next best is dstdomain, then
 dst. So for a marginal boost when combining on one line, put them in
 that order.

 Just look for shortcuts as you go.

>
> My servers are each running on Core 2 Duo 2.33 GHz, 8 GB of RAM, 5
> HDDs
> (1x80GB IDE for OS, 4x160GB SATA for cache), total 256GB Cache Store
> (64GB
> on each HDD).  One of the server's stats are (taken at a very low
> user
> count
> time):

 Thank you. We are trying to collect rough capacity info for Squid
 whenever
 the opportunity comes up. Are you able to provide such stats around
 peak
 load for our wiki?
 The info we collect can be seen at
 http://wiki.squid-cache.org/KnowledgeBase/Benchmarks

 Amos



>>> Cache Manager menu
>>>
>>> Squid Object Cache: Version 2.7.STABLE4
>>>
>>> Connection informatio

Re: [squid-users] Change squid binary in flight

2008-11-20 Thread Amos Jeffries
> On Thu, Nov 20, 2008 at 7:10 PM, Lluis Ribes <[EMAIL PROTECTED]> wrote:
>> Dear Squid Folks,
>>
>> I have a Squid 3.0Stable1 running in a server. This version was
>> installed
>> with apt-get Debian package utility. So, it has worked fine until now,
>> where
>> I have file descriptor problems.
>>
>> I saw that my installation has 1024 files as max_filedescriptor, I think
>> not
>> much. I want to change it, but the parameter max_filedescriptor in
>> squid.conf doesn't work (I receive an error message about unknown
>> parameter).
>
> Are you sure? It may be a runtime limitation; please check that
> there's a 'ulimit -n 8192' line in the squid startup script (replace
> 8192 with your desired limit).
>
>> So, I think thah the only way is recompile with file_descriptor flag:
>>
>> ./configure --with-filedescriptors=8192 --prefix=/opt/squid
>> --with-openssl
>> --enable-ssl --disable-internal-dns --enable-async-io
>> --enable-storeio=ufs,diskd
>>
>> Ok, I compiled Squid 3.0Stable10. So my question is:
>>
>> Could I replace directly the binary that it was generated by my
>> compilation
>> process and located in $SQUID_SOURCE/src/squid with my debian binary
>> version
>> that it's running nowadays? I have to avoid lost of service of my web.
>
> the debian package may have different configure options; if you miss
> some configuration option your configuration file may be incompatible
> with your new binary.
> you may want to run 'squid -v' to check that your new configure
> options are compatible with the previous ones.
>
> You may also want to keep your old binary around, to be able to roll
> back in case of problems.
>

Indeed. There is at least one patch needed to make squid log correctly in
Debian. IIRC the packages are built with 4096 fd by default, maybe stable1
missed out for some reason.

Please try the latest package (STABLE8-1 in Debian) before a custom build
on production environments.

With the same Debian build options and patch, "make install" places all
binaries in the correct locations for an /etc/init.d restart to work.

Amos




Re: [squid-users] Change squid binary in flight

2008-11-20 Thread Amos Jeffries
> On Thu, Nov 20, 2008 at 7:10 PM, Lluis Ribes <[EMAIL PROTECTED]> wrote:
>> Dear Squid Folks,
>>
>> I have a Squid 3.0Stable1 running in a server. This version was
>> installed
>> with apt-get Debian package utility. So, it has worked fine until now,
>> where
>> I have file descriptor problems.
>>
>> I saw that my installation has 1024 files as max_filedescriptor, I think
>> not
>> much. I want to change it, but the parameter max_filedescriptor in
>> squid.conf doesn't work (I receive an error message about unknown
>> parameter).
>
> Are you sure? It may be a runtime limitation; please check that
> there's a 'ulimit -n 8192' line in the squid startup script (replace
> 8192 with your desired limit).
>
>> So, I think thah the only way is recompile with file_descriptor flag:
>>
>> ./configure --with-filedescriptors=8192 --prefix=/opt/squid
>> --with-openssl
>> --enable-ssl --disable-internal-dns --enable-async-io
>> --enable-storeio=ufs,diskd
>>
>> Ok, I compiled Squid 3.0Stable10. So my question is:
>>
>> Could I replace directly the binary that it was generated by my
>> compilation
>> process and located in $SQUID_SOURCE/src/squid with my debian binary
>> version
>> that it's running nowadays? I have to avoid lost of service of my web.
>
> the debian package may have different configure options; if you miss
> some configuration option your configuration file may be incompatible
> with your new binary.
> you may want to run 'squid -v' to check that your new configure
> options are compatible with the previous ones.
>
> You may also want to keep your old binary around, to be able to roll
> back in case of problems.
>

Indeed. There is at least one patch needed to make squid log correctly in
Debian. http://wiki.squid-cache.org/SquidFaq/CompilingSquid (Debian
section)

IIRC the packages are built with 4096 fd by default, maybe stable1 missed
out for some reason. stable8 is available for Debian, please try that
before going to a custom build.

If you must, use the exact same configure options as your packaged build
plus the logging patch; "make install" then places all binaries in the
correct locations for an /etc/init.d restart to work.

Amos




Re: [squid-users] Large ACLs and TCP_OUTGOING_ADDRESS

2008-11-20 Thread Nyamul Hassan

Thanks Amos.  The links were very insightful.

However, the 2500 req/sec that ShuXin Zheng mentioned (and the 3500 req/sec 
later achieved) was in a reverse-proxy scenario.  Is that also the expected 
limit for a regular forward proxy?


I am also using regular commodity 4 x SATA 3.0 Gbps HDDs, compared to the 
SCSI drives used by ShuXin.  Given the speeds SATA can achieve these days, 
is there any rule of thumb for comparing them?


Regards
HASSAN



- Original Message - 
From: "Amos Jeffries" <[EMAIL PROTECTED]>

To: "Nyamul Hassan" <[EMAIL PROTECTED]>
Cc: "Amos Jeffries" <[EMAIL PROTECTED]>; "Squid Users" 


Sent: Friday, November 21, 2008 08:56
Subject: Re: [squid-users] Large ACLs and TCP_OUTGOING_ADDRESS



Where could I find the "theoretical limits" published by Adrian for 2.7?

Regards
HASSAN



Somewhere in squid-dev over the late 2007- early 2008 he pushed a graph
out comparing cacheboy and Squid-2.7 and Squid-2.HEAD.

All I can find right now is this thread:
 http://www.squid-cache.org/mail-archive/squid-dev/200701/0077.html
 http://www.squid-cache.org/mail-archive/squid-dev/200701/0083.html

And some old graphs on his cacheboy site:
 http://www.cacheboy.net/polygraph/cacheboy_1.4.pre3_test2/one-page.html
It looks like he has squeezed out another 50 RPS since the early reports.

One indicates squid is capable of ~500 RPS on regular home hardware. And
the other that a very old version was capable of >3500 RPS on high-end
hardware in 2006.

Amos





[squid-users] IP Forwarder

2008-11-20 Thread ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░
I use HAVP,
and my clients go direct (transparent to the HAVP port - same machine).
at havp log
20/11/2008 23:23:30 192.168.222.222 GET 200
http://mail.google.com/mail/images/2/5/pebbles/opacity.png 424+169 OK


at squid log
1227241410.457530 127.0.0.1 TCP_MISS/200 563 GET
http://mail.google.com/mail/rc? - DIRECT/66.249.89.18 image/png

The problem is:
I have multiple user-group rules already configured in squid.conf.
How do I make my squid work again with HAVP capabilities?

my rc.local
echo "1" > /proc/sys/net/ipv4/ip_forward
#/etc/init.d/networking restart
/sbin/iptables --flush
/sbin/iptables --table nat --flush
/sbin/iptables --delete-chain
/sbin/iptables --table nat --delete-chain
/sbin/iptables -F -t nat
/sbin/iptables --table nat --append POSTROUTING --out-interface eth1
-j MASQUERADE
/sbin/iptables --append FORWARD --in-interface  eth1 -j ACCEPT
/sbin/iptables -t nat -A PREROUTING -i eth0 -p tcp -s
192.168.0.0/255.255.0.0 --dport 80 -j DNAT --to 127.0.0.1:2210
--
my squid.conf
http_port 2210 transparent
icp_port 3130
snmp_port 0
cache_peer 127.0.0.1 sibling 8080 0 no-query no-digest default
cache_peer_access 127.0.0.1 allow all

my havp conf
# Default:
TRANSPARENT true
# Default: NONE
PARENTPROXY 127.0.0.1
PARENTPORT 2210
# Default:
FORWARDED_IP true
# Default:
X_FORWARDED_FOR true
#
# Port HAVP is listening on.
#
# Default:
# PORT 8080
===

help me pls


-- 
-=-=-=-=
http://amyhost.com
Hot News !!! : Due to the large number of Domain registration requests,
the balance stock has now been updated to the current exchange rate: Rp.
85,000 for non-resellers | Rp. 82,000 for resellers

Want your own PREMIUM SMS service?
Contact me ASAP. Get MAXIMUM revenue share with no traffic requirements...


[squid-users] Re: IP Forwarder

2008-11-20 Thread ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░
CORRECTION
iptables rule :

/sbin/iptables --flush
/sbin/iptables --table nat --flush
/sbin/iptables --delete-chain
/sbin/iptables --table nat --delete-chain
/sbin/iptables -F -t nat
/sbin/iptables --table nat --append POSTROUTING --out-interface eth1
-j MASQUERADE
/sbin/iptables --append FORWARD --in-interface  eth1 -j ACCEPT
/sbin/iptables -t nat -A PREROUTING -i eth0 -p tcp -s
192.168.0.0/255.255.0.0 --dport 80 -j DNAT --to 192.168.222.100:8080
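
When the interception target runs on the proxy box itself, REDIRECT is
commonly used instead of a DNAT to 127.0.0.1 (which forwarded packets
generally cannot reach); a sketch, assuming the same interface and HAVP
port as above:

```
/sbin/iptables -t nat -A PREROUTING -i eth0 -p tcp \
  -s 192.168.0.0/255.255.0.0 --dport 80 -j REDIRECT --to-ports 8080
```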


On Fri, Nov 21, 2008 at 11:27 AM, ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░
<[EMAIL PROTECTED]> wrote:
> i use HAVP
> and my client direct ( trnasparant to HAVP PORT - same machine )
> at havp log
> 20/11/2008 23:23:30 192.168.222.222 GET 200
> http://mail.google.com/mail/images/2/5/pebbles/opacity.png 424+169 OK
>
>
> at squid log
> 1227241410.457530 127.0.0.1 TCP_MISS/200 563 GET
> http://mail.google.com/mail/rc? - DIRECT/66.249.89.18 image/png
>
> the problem is 
> i have multiple rule user groups that already configured at squid.conf
> how to make my squid work again with HAVP capatibilities
>
> my rc.local
> echo "1" > /proc/sys/net/ipv4/ip_forward
> #/etc/init.d/networking restart
> /sbin/iptables --flush
> /sbin/iptables --table nat --flush
> /sbin/iptables --delete-chain
> /sbin/iptables --table nat --delete-chain
> /sbin/iptables -F -t nat
> /sbin/iptables --table nat --append POSTROUTING --out-interface eth1
> -j MASQUERADE
> /sbin/iptables --append FORWARD --in-interface  eth1 -j ACCEPT
> /sbin/iptables -t nat -A PREROUTING -i eth0 -p tcp -s
> 192.168.0.0/255.255.0.0 --dport 80 -j DNAT --to 127.0.0.1:2210
> --
> my squid.conf
> http_port 2210 transparent
> icp_port 3130
> snmp_port 0
> cache_peer 127.0.0.1 sibling 8080 0 no-query no-digest default
> cache_peer_access 127.0.0.1 allow all
> 
> my havp conf
> # Default:
> TRANSPARENT true
> # Default: NONE
> PARENTPROXY 127.0.0.1
> PARENTPORT 2210
> # Default:
> FORWARDED_IP true
> # Default:
> X_FORWARDED_FOR true
> #
> # Port HAVP is listening on.
> #
> # Default:
> # PORT 8080
> ===
>
> help me pls
>
>





[squid-users] Low LRU Reference Age & HDD Capacity

2008-11-20 Thread Nyamul Hassan

Hi again!

I have 2 identically configured cache servers running as siblings to each
other, together serving ~4,500 clients.  The configuration of each is: Core 2
Duo 2.33 GHz, 8 GB RAM, 1 x 160 GB IDE (OS), 4 x 160 GB SATA 3.0 Gbps
(cache_dir).


My LRU Reference Age (on the Store Directory page) is only between 3.00 and
3.41 days.  If I understood it correctly, it means most of my cache contents
are replaced in less than 3.41 days.  Does anybody have any idea if this is
a low value or not?
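
As a back-of-envelope check on numbers like these, the LRU reference age of a full cache is roughly the time it takes cache misses to write one full cache's worth of objects. A minimal sketch, assuming a steady fill rate; the 8 Mbit/s miss-storage figure below is illustrative, not taken from the post:

```python
# Rough LRU reference age estimate: with LRU replacement, an object
# survives about as long as it takes misses to refill the whole cache.

def lru_age_days(cache_gb, miss_bytes_per_sec):
    """Days for cache misses to write one full cache's worth of data."""
    cache_bytes = cache_gb * 1024**3
    return cache_bytes / (miss_bytes_per_sec * 86400)

# 4 x 65 GB cache_dirs filled by ~8 Mbit/s of stored misses:
print(round(lru_age_days(4 * 65, 8_000_000 / 8), 1))  # -> 3.2 days
```

If that estimate lands near the observed 3.0-3.41 days, the low reference age is simply a consequence of cache size versus fill rate, not a misconfiguration.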


I'm using only 65 GB on each HDD as cache_dir, which is roughly 40% of the
drive capacity.  I read somewhere on the mailing list that we can safely go
up to 80% of drive capacity.  But when I use even 80 GB, my IOWait goes way
up.  :(  Does that mean I need faster HDDs?


My performance is reported in the benchmark:
http://wiki.squid-cache.org/KnowledgeBase/Benchmarks#head-99a7a6b698d2e97de2bdd4385cb423cd7c788bf8.
We are missing the "Byte Hit Ratio" there, which stands between 15 and 20%
during peak hours.
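
For completeness, the byte hit ratio is just the share of bytes delivered to clients that came out of the cache rather than from origin servers. A trivial sketch with made-up counter values:

```python
# Byte hit ratio: cache-served bytes as a percentage of all bytes
# delivered to clients. The counter values are illustrative only.

def byte_hit_ratio(hit_bytes, total_bytes):
    return 100.0 * hit_bytes / total_bytes

# e.g. 150 GB served from cache out of 1000 GB delivered in a period:
print(f"{byte_hit_ratio(150, 1000):.1f}%")  # -> 15.0%
```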


The config file can be found here: 
http://116.193.170.11/squid/squid_2008-11-21-1000_config.html. Some 
highlights of the config are:


cache_mem 4294967296 bytes
maximum_object_size_in_memory 65536 bytes
memory_replacement_policy lru
cache_replacement_policy lru
cache_dir aufs /cachestore/cache1 65536 16 256
cache_dir aufs /cachestore/cache2 65536 16 256
cache_dir aufs /cachestore/cache3 65536 16 256
cache_dir aufs /cachestore/cache4 65536 16 256
minimum_object_size 0 bytes
maximum_object_size 1073741824 bytes
cache_swap_low 90
cache_swap_high 95
update_headers on
access_log /var/log/squid/access.log squid
cache_log /var/log/squid/cache.log
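
The cache_swap_low/high values are percentages of each cache_dir's configured size: replacement starts at the low watermark and becomes more aggressive as usage approaches the high one. A small sketch converting the watermarks above into byte thresholds for one 65536 MB cache_dir:

```python
# Convert cache_swap_low/high percentages into byte thresholds for a
# single cache_dir, using the 65536 MB size from the config above.

def swap_watermarks(cache_dir_mb, low_pct, high_pct):
    size_bytes = cache_dir_mb * 1024**2
    return size_bytes * low_pct // 100, size_bytes * high_pct // 100

low, high = swap_watermarks(65536, 90, 95)
print(low // 1024**2, high // 1024**2)  # -> 58982 62259 (MB)
```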