[squid-users] Squid to get around Android proxy authentication

2012-04-30 Thread Crawford, Ben
Good Day,

I am running squid 2.7 (although switching to squid 3 is likely to
happen soon) on our local school internal proxy (Ubuntu) that is
behind a larger network proxy (which I don't have control over).

We have started allowing students to access our wireless network as the
proliferation of smart phones, tablets and laptops has been steadily
increasing.

The problem is Android does not play nice with proxies that require
authentication.  I had an idea for a way around this that would still tie
things to the individual logins.  The solution I have been looking at
is to bind either the http_port or the MAC address (through an arp ACL) to a
specific cache peer.  Here is what I was thinking:

Either:
http_port 123 name=student1_port
cache_peer 10.x.x.x parent 3128 0 no-query login=user:my_pass name=student1_peer
cache_peer_access student1_peer allow student1_port

Or:
cache_peer 10.x.x.x parent 3128 0 no-query login=user:my_pass name=student1_peer
acl student1_mac arp 01:01:01:01:01:01
cache_peer_access student1_peer allow student1_mac
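
For the first option, if the http_port name can't be referenced directly
in cache_peer_access like that, I assume it needs wrapping in a
myportname ACL first. An untested sketch, with placeholder port, address
and credentials:

http_port 3129 name=student1_port
acl student1_portname myportname student1_port
cache_peer 10.x.x.x parent 3128 0 no-query login=user:my_pass name=student1_peer
cache_peer_access student1_peer allow student1_portname
never_direct allow student1_portname

(The never_direct line is so that matching requests can only go via the peer.)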

I was hoping that one of these approaches would let me point devices at the
local proxy and avoid having to provide credentials for the upstream proxy,
which requires authentication (Basic auth, which I continue to rail
against).  However, no such luck just yet.

I am still relatively new to squid, and searches along with trial and
error have also been unsuccessful.

Any suggestions would be greatly appreciated.

Cheers,
Ben


[squid-users] squid3.1.15 + publish.ovi.com

2012-04-30 Thread Gerson Barreiros
I'm using squid 3.1.15 (Amos's PPA) on Ubuntu 10.04.

We can't open the 'register' link (
https://publish.ovi.com/register/country_and_account_type ) located
at https://publish.ovi.com/login

When we click 'register', the login page just gets refreshed.

Any ideas?


Re: [squid-users] commBind: Cannot bind socket error

2012-04-30 Thread Amos Jeffries

On 1/05/2012 1:36 a.m., Nick Howitt wrote:

Hi,
I am new to squid and I am trying to run it on my ClearOS 5.2 gateway 
where it is supplied as a pre-configured package. However, whenever I 
try to start it I lose all internet access. I would like to run it in 
transparent mode which is a menu option I have for it.


My cache.log reads:
2012/04/25 12:51:06| Starting Squid Cache version 2.6.STABLE21 for 
i686-redhat-linux-gnu...




2012/04/25 12:51:06| Accepting proxy HTTP connections at 0.0.0.0, port 
3128, FD 13.


So squid is configured to listen on a wildcard port (*:3128) which binds 
to every IP address the box has using a single open+listen operation. 
This is successful.


Then Squid is *also* instructed to bind particular IP:port combinations ...

2012/04/25 12:51:06| commBind: Cannot bind socket FD 14 to 
192.168.3.1:3128: (98) Address already in use


... oops,  *:3128 is already open ...

2012/04/25 12:51:06| commBind: Cannot bind socket FD 14 to 
192.168.2.1:3128: (98) Address already in use


... oops, *:3128 is already open ...

2012/04/25 12:51:06| commBind: Cannot bind socket FD 14 to 
127.0.0.1:3128: (98) Address already in use


... oops, *:3128 is already open ...

At this point I lose internet access, and it does not change when I 
switch to transparent mode. I am not aware of anything else running 
on port 3128, and netstat -an -t | grep 3128 shows nothing.


You configured Squid to open port 3128 four times. Only the first 
attempt succeeds, the others clash with it.


Squid is operating with the wildcard port open for all traffic. BUT, 
intercepted traffic cannot be received by the regular forward-proxy port 
3128. Intercepted requests arriving at any IP on port 3128 are rejected as 
malformed client->proxy requests (correctly so, because they are 
client->origin format requests).





If it helps at all, this is my squid.conf:

acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.0/8
acl webconfig_lan src 192.168.2.0/24 192.168.3.0/24  192.168.10.0/24
acl webconfig_to_lan dst 192.168.2.0/24 192.168.3.0/24  192.168.10.0/24
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80        # http
acl Safe_ports port 21        # ftp
acl Safe_ports port 443       # https
acl Safe_ports port 70        # gopher
acl Safe_ports port 210       # wais
acl Safe_ports port 1025-65535    # unregistered ports
acl Safe_ports port 280       # http-mgmt
acl Safe_ports port 488       # gss-http
acl Safe_ports port 591       # filemaker
acl Safe_ports port 777       # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access allow webconfig_to_lan


The above "allow webconfig_to_lan" rule opens your proxy to 4 out of the 
5 most common proxy attacks

http://wiki.squid-cache.org/SquidFaq/SecurityPitfalls

Oops.



http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports


Move your global allow rule down to here below the basic security 
protections.
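
In other words, roughly this order (a sketch keeping the ACL names from 
the config above):

http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow webconfig_to_lan
http_access allow localhost
http_access allow webconfig_lan
http_access deny all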



And consider carefully why you need it in the first place. There are no 
accel mode ports configured. For an interception proxy you should be 
able to depend on the src type ACL to operate correctly; if not, you have 
configured the interception rules wrongly.




http_access allow localhost
http_access allow webconfig_lan
http_access deny all
icp_access allow all
http_port 3128
hierarchy_stoplist cgi-bin ?
access_log /var/log/squid/access.log squid
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
coredump_dir /var/spool/squid
error_directory /etc/squid/errors
follow_x_forwarded_for allow localhost
http_port 192.168.3.1:3128 transparent
http_port 192.168.2.1:3128 transparent
http_port 127.0.0.1:3128 transparent

Can anyone help me, please?


Please follow the advice in  
http://wiki.squid-cache.org/ConfigExamples/Intercept/LinuxDnat#iptables_configuration


Additionally, why do you have three interception ports? And why is 
127.0.0.1 involved?
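
For reference, a minimal non-clashing layout looks roughly like this 
(a sketch only: 3129 and eth1 are example values, and the iptables rule 
belongs on the gateway doing the interception):

# squid.conf
http_port 3128                # forward-proxy port
http_port 3129 transparent    # interception-only port

# redirect LAN web traffic to the interception port
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-port 3129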


Amos


Re: [squid-users] http to squid to https

2012-04-30 Thread Squid Tiz

On Apr 29, 2012, at 10:36 PM, Amos Jeffries wrote:

> On 28/04/2012 10:37 a.m., Squid Tiz wrote:
>> I am kinda new to squid.  Been looking over the documentation and I just 
>> wanted a sanity check on what I am trying to do.
>> 
>> I have a web client that hits my squid server.  The squid connects to an 
>> apache server via ssl.
>> 
>> Here are the lines of interest from my squid.conf for version 3.1.8
>> 
>> http_port 80 accel defaultsite=123.123.123.123
>> cache_peer 123.123.123.123 parent 443 0 no-query originserver ssl 
>> sslflags=DONT_VERIFY_PEER name=apache1
>> 
>> The good news is, that works just as I hoped.  I get a connection.
>> 
>> But I am questioning the DONT_VERIFY_PEER.  Don't I want to verify the peer?
> 
> Ideally yes. It is better security. But up to you whether you need it or not.
> It means having available to OpenSSL on the squid box (possibly via 
> squid.conf settings) the CA certificate which signed the peer's certificate, 
> so that verification will not fail.
> 
>> 
>> I simply hacked up a self signed cert on the apache server.  Installed 
>> mod_ssl and restarted apache and everything started to work on 443.
>> 
>> On the command line for the squid server I can curl the apache box with:
>> 
>> curl --cacert  _the_signed_cert_from_the_apache_node_ https://apache.server
>> 
>> Is there a way with sslcert and sslkey to setup a keypair that will verify?
> 
> They are for configuring the *client* certificate and key sent by Squid to 
> Apache. For when Apache is doing the verification of its clients.
> 
> Squid has an sslcafile= option which does the same as the curl --cacert option. 
> For validating the Apache certificate(s).
> 
>>   Do I need a signed cert?
> 
> Yes, TLS requires signing. Your self-signing CA will do however, so long as 
> both ends of the connection are in agreement on the CA trust.
> 
>> 
>> I tried to add the cert and key to the cache_peer line in the config.  Squid 
>> did restart.  But no connection.  Why would curl work but not squid?
>> 
> see above.
> 
> Amos

Amos,

Thanks for the reply.  

I was just curious to see if I could get this to fly.  The goal is to attach to 
the squid server via http and have squid verify and attach to the SSL server 
using a self-signed cert.  This seems to work.  Squid starts OK and my logs are 
clean.  No validation errors.

Comments appreciated.


Create the CA stuff on the apache server:

Key
openssl genrsa -des3 -out ca.key 4096
CRT
openssl req -new -x509 -days 3650 -key ca.key -out ca.crt

Create a server cert:

Key
openssl genrsa -des3 -out server.key 4096
CSR
openssl req -new -key server.key -out server.csr
CRT
openssl x509 -req -days 3650 -in server.csr -CA ca.crt -CAkey ca.key 
-set_serial 01 -out server.crt

Then go ahead and install these certs on the server.  Test the server on port 
443/SSL etc.

Then create a client cert:

Key
openssl genrsa -des3 -out client.key 2048
CSR
openssl req -new -key client.key -out client.csr
CRT
openssl ca -in client.csr -cert ca.crt -keyfile ca.key -out client.crt

Touch up the key - don't want to enter the password on start-up.

openssl rsa -in client.key -out client.key.insecure
mv client.key client.key.secure
mv client.key.insecure client.key

Then take the ca.crt, the client.key and the client.crt and deploy them on the 
squid server.

Update the /etc/hosts file:

ip-address cn-name-of-apache-server

Then the squid.conf:

http_port 8080 accel defaultsite=cn-name-of-apache-server
cache_peer cn-name-of-apache-server parent 443 0 no-query originserver ssl \
sslcafile=/path/ca.crt sslcert=/path/client.crt sslkey=/path/client.key 
name=yum1
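
(A quick sanity check from the squid box before involving Squid itself, 
assuming the openssl CLI is available; s_client should end with "Verify 
return code: 0 (ok)" if the CA trust is right:

openssl verify -CAfile ca.crt server.crt
openssl s_client -connect cn-name-of-apache-server:443 -CAfile ca.crt -cert client.crt -key client.key
)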


-- 
Regs
-Dean



Re: [squid-users] Fwd: Tproxy Squid 3.1

2012-04-30 Thread Amos Jeffries

On 01.05.2012 08:42, Daniel Echizen wrote:

Hi,
I've been facing a weird problem with tproxy for a few weeks. Everything
works fine except for clients behind a TP-Link router (mostly TP-Link
WR541G models, plus another one I don't remember). If I remove the
iptables mangle redirect rule, those clients have traffic; with it
enabled they don't. I don't speak English very well, so I hope someone
can understand and help me. This is a server with 1000+ clients, and
I'm getting very frustrated with this problem.

my config:

ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100

/sbin/iptables -v -t mangle -N DIVERT
/sbin/iptables -v -t mangle -A DIVERT -j MARK --set-mark 1
/sbin/iptables -v -t mangle -A DIVERT -j ACCEPT
/sbin/iptables -v -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
/sbin/iptables -v -t mangle -A PREROUTING -p tcp --dport 80 \
                  -j TPROXY --tproxy-mark 0x1/0x1 --on-port 5128 2>&1

/usr/local/sbin/ebtables -t broute -A BROUTING -i eth5 -p ipv4
--ip-proto tcp --ip-dport 80 -j redirect --redirect-target DROP
/usr/local/sbin/ebtables -t broute -A BROUTING -i eth3 -p ipv4
--ip-proto tcp --ip-sport 80 -j redirect --redirect-target DROP

cd /proc/sys/net/bridge/
for i in *
do
echo 0 > $i
done
unset i

echo 0 > /proc/sys/net/ipv4/conf/lo/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/all/rp_filter
echo 1 > /proc/sys/net/ipv4/ip_forward


I have 2 interfaces in a bridge, and as I said, everything works fine
except with these TP-Link routers. I also enabled logging in the iptables
mangle table, and I can see traffic from the client's router, but the
traffic never reaches Squid; nothing shows up in access.log.
I use a MikroTik as the PPPoE server. My network is:

router <-> squidbox <-> mikrotik <-> clients


With Squid inline on a bridge like this there should be *no* Squid-related 
configuration outside the Squid box.


Is the tplink being used as "router" or "squidbox" in that diagram?

What kernel and iptables version is the squidbox running? Some of the older 
2.6.3x kernels have bridge+tproxy problems.
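
For example, the output of:

uname -r
iptables -V

from the squidbox would tell us both.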



Amos


Re: [squid-users] Duplicate If-None-Match headers

2012-04-30 Thread Eliezer Croitoru

On 30/04/2012 14:32, Andy Taylor wrote:

Hi,

I'm having a number of problems with Squid at the moment with duplicate
Etags in the headers. I'm using Squid as an accelerator to forward
traffic to Apache, which serves up a Drupal installation.

After roughly 3 days, a number of pages on the site start to fail with
400 Bad Request errors; it starts with just a few and then slowly
spreads to more pages. I did a tcpdump of the requests coming from Squid
to Apache, and Apache is spitting out a 400 error because of the header
size. Hundreds of etags are appearing in the If-None-Match headers
field, which hits Apache's header size limit, causing the error. The
only way I've found to 'fix' this so far is to either:

1. Flush Squid cache entirely
2. Purge the affected pages

But then after a few days the problem comes back again. I've been using
Squid as an accelerator to Drupal installations for years and this
hasn't happened before. I'm using the following version of Squid:

Squid Cache: Version 2.6.STABLE21

which is the latest version available in the CentOS 5 repositories. The
only difference between this installation of Squid/Apache/Drupal and
others which have worked fine in the past is the version of Drupal -
Drupal 7. Supposedly Drupal 7 has significantly altered cache handling,
but I can't work out why this would cause this problem with Squid.

The only thing I can think of at the moment is something to do with
Squid's cache rotation (specifically the LRU functionality), so that
when Squid rotates its cache, something ends up corrupted or malformed.

Any help or suggestions would be much appreciated!

Thanks,

Andy Taylor
I suppose there have been many changes since Squid 2.6. For me, Squid 
3.1.19 and 3.2.0.16 work fine with Drupal, so as a starter I suggest you 
try compiling a more recent, supported version of Squid.


Eliezer
--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] A Web site view problem , diffrences through 2 squid

2012-04-30 Thread Eliezer Croitoru

On 30/04/2012 10:24, a bv wrote:

Hi,

There are 2 Squids running behind 2 different firewalls and 2 different
internet connections (same ISP, different network ranges). Users
report that they get problems viewing a particular site.

When I look at that site through the 2 proxies and with different browsers
I get different results (and they keep changing). In particular I get some
errors about the web site's scripts from IE, and when I switch the proxy
in IE I see different results. Sometimes I get a 400 error, and after I
clear the browser's cache and request the site again it comes back. One
firewall has an IPS running and one does not, but I couldn't find anything
in the IPS logs either. The web site's owners haven't answered my
questions yet. What do you recommend to analyse and fix the issue? Other
sites are viewed fine through both of them.


Regards

Did you try disabling caching for this site?
If different browsers show different data, it can be because the 
site serves browser-specific templates.


Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Transparent proxy and IP address rotation

2012-04-30 Thread Eliezer Croitoru

On 30/04/2012 23:44, Kirk Hoganson wrote:

I would like to configure our squid proxy (Version 3.0.STABLE19 on Linux
Ubuntu 10.04) to use a pool of addresses for outgoing connections. I
set up squid as a transparent proxy using "http_port 3128 transparent" in
squid.conf, and then set up iptables rules to provide source-NAT
address rotation across the multiple interfaces the proxy has available.

The connections failed when attempting to source-NAT on the proxy. Would
this work if I were able to use tproxy instead of transparent on the
proxy server? Or is there another solution within squid that would allow
it to rotate through all available interfaces?

Thanks,
Kirk
If you just need a couple of outgoing addresses, and not the clients' own 
IP addresses, then intercept is fine (not tproxy).

This kind of load balancing should be done using the OS routing system.
A pool of addresses can be tricky, because it might be done with 2 or 200 
IP addresses.

I have written a good sample for a "multihoming" option that is like 
this and just needs to be tweaked a bit.

Have a look at:
http://www.squid-cache.org/mail-archive/squid-dev/201204/0019.html

I do remember that something could also be done using iptables, but I 
don't remember how it should be done.

What did you try to do with iptables?

I also found this nice iptables method sample:
http://www.pmoghadam.com/homepage/HTML/Round-robin-load-balancing-NAT.html
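
For example, something along these lines; an untested sketch with 
placeholder addresses, using the iptables "statistic" match. NAT is 
decided on the first packet of each connection, so every connection 
keeps a single source address:

iptables -t nat -A POSTROUTING -o eth0 -p tcp -m statistic --mode nth --every 3 --packet 0 -j SNAT --to-source 198.51.100.1
iptables -t nat -A POSTROUTING -o eth0 -p tcp -m statistic --mode nth --every 2 --packet 0 -j SNAT --to-source 198.51.100.2
iptables -t nat -A POSTROUTING -o eth0 -p tcp -j SNAT --to-source 198.51.100.3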

Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


[squid-users] Transparent proxy and IP address rotation

2012-04-30 Thread Kirk Hoganson
I would like to configure our squid proxy (Version 3.0.STABLE19 on Linux 
Ubuntu 10.04) to use a pool of addresses for outgoing connections. I 
set up squid as a transparent proxy using "http_port 3128 transparent" in 
squid.conf, and then set up iptables rules to provide source-NAT 
address rotation across the multiple interfaces the proxy has available.

The connections failed when attempting to source-NAT on the proxy. 
Would this work if I were able to use tproxy instead of transparent on 
the proxy server? Or is there another solution within squid that would 
allow it to rotate through all available interfaces?


Thanks,
Kirk


[squid-users] Fwd: Tproxy Squid 3.1

2012-04-30 Thread Daniel Echizen
Hi,
I've been facing a weird problem with tproxy for a few weeks. Everything
works fine except for clients behind a TP-Link router (mostly TP-Link
WR541G models, plus another one I don't remember). If I remove the
iptables mangle redirect rule, those clients have traffic; with it
enabled they don't. I don't speak English very well, so I hope someone
can understand and help me. This is a server with 1000+ clients, and
I'm getting very frustrated with this problem.

my config:

ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100

/sbin/iptables -v -t mangle -N DIVERT
/sbin/iptables -v -t mangle -A DIVERT -j MARK --set-mark 1
/sbin/iptables -v -t mangle -A DIVERT -j ACCEPT
/sbin/iptables -v -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
/sbin/iptables -v -t mangle -A PREROUTING -p tcp --dport 80 \
                  -j TPROXY --tproxy-mark 0x1/0x1 --on-port 5128 2>&1

/usr/local/sbin/ebtables -t broute -A BROUTING -i eth5 -p ipv4
--ip-proto tcp --ip-dport 80 -j redirect --redirect-target DROP
/usr/local/sbin/ebtables -t broute -A BROUTING -i eth3 -p ipv4
--ip-proto tcp --ip-sport 80 -j redirect --redirect-target DROP

cd /proc/sys/net/bridge/
for i in *
do
echo 0 > $i
done
unset i

echo 0 > /proc/sys/net/ipv4/conf/lo/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/all/rp_filter
echo 1 > /proc/sys/net/ipv4/ip_forward


I have 2 interfaces in a bridge, and as I said, everything works fine
except with these TP-Link routers. I also enabled logging in the iptables
mangle table, and I can see traffic from the client's router, but the
traffic never reaches Squid; nothing shows up in access.log.
I use a MikroTik as the PPPoE server. My network is:

router <-> squidbox <-> mikrotik <-> clients

Hope someone can help!


Re: [squid-users] slow internet browsing.

2012-04-30 Thread Muhammad Yousuf Khan
Thanks Eliezer Croitoru, this has been helpful stuff.

I'll let you know if I find any difficulty in the deployment procedure.

Thanks

On Mon, Apr 30, 2012 at 3:27 AM, Eliezer Croitoru  wrote:
> On 29/04/2012 08:49, Muhammad Yousuf Khan wrote:
>>
>> It seems that things are doing well without the huge domain list, so now
>> my next goal is squidGuard.
>>
>> The problem with squidGuard was that I tried configuring it, following
>> many online manuals, but it never activated, so I just started using a
>> domain list. However, if things don't work I'll update the status.
>>
>> Thanks you all for your kind help.
>>
>> Thanks
>>
>> On Fri, Apr 27, 2012 at 1:09 PM, Muhammad Yousuf Khan
>>  wrote:
>
> 
> I have used squidGuard from source and it seems to work very well.
> It took me a while to understand and configure, but it works perfectly.
> have a look at:
> http://www.visolve.com/squid/whitepapers/redirector.php#Configuring_Squid_for_squidGuard
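> On the Squid side the hookup itself is just a couple of lines, something
> like this (paths and the number of children depend on your install):
>
> url_rewrite_program /usr/local/bin/squidGuard -c /usr/local/squidGuard/squidGuard.conf
> url_rewrite_children 8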
>
>
>
>
> --
> Eliezer Croitoru
> https://www1.ngtech.co.il
> IT consulting for Nonprofit organizations
> eliezer  ngtech.co.il


Re: [squid-users] Local Client Access

2012-04-30 Thread Hasanen AL-Bana
Set "forwarded_for off" in squid.conf. Squid adds the client's address to
the X-Forwarded-For request header by default, and www.dnsstuff.com is
reporting that header's value back to you.

On Mon, Apr 30, 2012 at 5:50 PM, Roman Gelfand  wrote:
> My squid server is behind a NATed firewall.  When accessing the site
> www.dnsstuff.com, it reports my ip address as the local address of the
> client.
>
> For instance,
>
> 1. squid server ip is 192.168.1.10
> 2. client accessing the www.dnsstuff.com site via squid server is 
> 192.168.1.101.
>
> www.dnsstuff.com reports my ip as 192.168.1.101 instead of the wan ip.
>
> I am using squid 3.19
>
> Thanks for your help
>
> On Mon, Apr 30, 2012 at 9:03 AM, Amos Jeffries  wrote:
>> On 30/04/2012 11:56 p.m., Roman Gelfand wrote:
>>>
>>> My client access configuration is as follows.
>>>
>>> always_direct allow all
>>> http_access allow all
>>>
>>> # Squid normally listens to port 3128
>>> http_port 3128 ssl-bump generate-host-certificates=on
>>> dynamic_cert_mem_cache_size=4MB cert=/usr/local/squid/etc/host.pem
>>>
>>> url_rewrite_children 64
>>>
>>> url_rewrite_program /usr/local/bin/squidGuard -c
>>> /usr/local/squidGuard/squidGuard.conf
>>>
>>>
>>> It appears that, when sending outgoing requests, the http header is from
>>> the original host.  I'm guessing this is why it is called a transparent
>>> proxy.
>>
>>
>> There is nothing of transparent proxying in this config.
>> * You have ssl-bump decryption of CONNECT requests.
>> * You have a re-writer/redirector altering the traffic URLs.
>>
>> Transparent means the requests are not altered.
>>
>>
>>>   It seems that this causes routing problems.  Could you tell me
>>> where I am going wrong here?
>>
>>
>> Could you please explain the problem?
>>  And also give an indication of what Squid version you are talking about
>> please.
>>
>> Amos


Re: [squid-users] Local Client Access

2012-04-30 Thread Roman Gelfand
My squid server is behind a NATed firewall.  When accessing the site
www.dnsstuff.com, it reports my ip address as the local address of the
client.

For instance,

1. squid server ip is 192.168.1.10
2. client accessing the www.dnsstuff.com site via squid server is 192.168.1.101.

www.dnsstuff.com reports my ip as 192.168.1.101 instead of the wan ip.

I am using squid 3.19

Thanks for your help

On Mon, Apr 30, 2012 at 9:03 AM, Amos Jeffries  wrote:
> On 30/04/2012 11:56 p.m., Roman Gelfand wrote:
>>
>> My client access configuration is as follows.
>>
>> always_direct allow all
>> http_access allow all
>>
>> # Squid normally listens to port 3128
>> http_port 3128 ssl-bump generate-host-certificates=on
>> dynamic_cert_mem_cache_size=4MB cert=/usr/local/squid/etc/host.pem
>>
>> url_rewrite_children 64
>>
>> url_rewrite_program /usr/local/bin/squidGuard -c
>> /usr/local/squidGuard/squidGuard.conf
>>
>>
>> It appears that, when sending outgoing requests, the http header is from
>> the original host.  I'm guessing this is why it is called a transparent
>> proxy.
>
>
> There is nothing of transparent proxying in this config.
> * You have ssl-bump decryption of CONNECT requests.
> * You have a re-writer/redirector altering the traffic URLs.
>
> Transparent means the requests are not altered.
>
>
>>   It seems that this causes routing problems.  Could you tell me
>> where I am going wrong here?
>
>
> Could you please explain the problem?
>  And also give an indication of what Squid version you are talking about
> please.
>
> Amos


Re: [squid-users] A Web site view problem , diffrences through 2 squid

2012-04-30 Thread Amos Jeffries

On 30/04/2012 7:24 p.m., a bv wrote:

Hi,

There are 2 Squids running behind 2 different firewalls and 2 different
internet connections (same ISP, different network ranges). Users
report that they get problems viewing a particular site.

When I look at that site through the 2 proxies and with different browsers
I get different results (and they keep changing). In particular I get some
errors about the web site's scripts from IE, and when I switch the proxy
in IE I see different results. Sometimes I get a 400 error, and after I
clear the browser's cache and request the site again it comes back. One
firewall has an IPS running and one does not, but I couldn't find anything
in the IPS logs either. The web site's owners haven't answered my
questions yet. What do you recommend to analyse and fix the issue? Other
sites are viewed fine through both of them.


The tool at redbot.org does cacheability and some behaviour analysis for 
any given URL.


Amos


[squid-users] commBind: Cannot bind socket error

2012-04-30 Thread Nick Howitt

Hi,
I am new to squid and I am trying to run it on my ClearOS 5.2 gateway 
where it is supplied as a pre-configured package. However, whenever I 
try to start it I lose all internet access. I would like to run it in 
transparent mode which is a menu option I have for it.


My cache.log reads:
2012/04/25 12:51:06| Starting Squid Cache version 2.6.STABLE21 for 
i686-redhat-linux-gnu...

2012/04/25 12:51:06| Process ID 24435
2012/04/25 12:51:06| With 1024 file descriptors available
2012/04/25 12:51:06| Using epoll for the IO loop
2012/04/25 12:51:06| DNS Socket created at 0.0.0.0, port 50915, FD 6
2012/04/25 12:51:06| Adding domain howitts.lan from /etc/resolv.conf
2012/04/25 12:51:06| Adding domain howitts.lan from /etc/resolv.conf
2012/04/25 12:51:06| Adding nameserver 127.0.0.1 from /etc/resolv.conf
2012/04/25 12:51:06| Adding nameserver 208.67.222.222 from /etc/resolv.conf
2012/04/25 12:51:06| Adding nameserver 208.67.220.220 from /etc/resolv.conf
2012/04/25 12:51:06| Adding nameserver 194.168.4.100 from /etc/resolv.conf
2012/04/25 12:51:06| Adding nameserver 194.168.8.100 from /etc/resolv.conf
2012/04/25 12:51:06| User-Agent logging is disabled.
2012/04/25 12:51:06| Referer logging is disabled.
2012/04/25 12:51:06| Unlinkd pipe opened on FD 11
2012/04/25 12:51:06| Swap maxSize 102400 + 8192 KB, estimated 0 objects
2012/04/25 12:51:06| Target number of buckets: 425
2012/04/25 12:51:06| Using 8192 Store buckets
2012/04/25 12:51:06| Max Mem  size: 8192 KB
2012/04/25 12:51:06| Max Swap size: 102400 KB
2012/04/25 12:51:06| Local cache digest enabled; rebuild/rewrite every 
3600/3600 sec

2012/04/25 12:51:06| Rebuilding storage in /var/spool/squid (CLEAN)
2012/04/25 12:51:06| Using Least Load store dir selection
2012/04/25 12:51:06| Set Current Directory to /var/spool/squid
2012/04/25 12:51:06| Loaded Icons.
2012/04/25 12:51:06| Accepting proxy HTTP connections at 0.0.0.0, port 
3128, FD 13.
2012/04/25 12:51:06| commBind: Cannot bind socket FD 14 to 
192.168.3.1:3128: (98) Address already in use
2012/04/25 12:51:06| commBind: Cannot bind socket FD 14 to 
192.168.2.1:3128: (98) Address already in use
2012/04/25 12:51:06| commBind: Cannot bind socket FD 14 to 
127.0.0.1:3128: (98) Address already in use

2012/04/25 12:51:06| Accepting ICP messages at 0.0.0.0, port 3130, FD 14.
2012/04/25 12:51:06| WCCP Disabled.
2012/04/25 12:51:06| Ready to serve requests.
2012/04/25 12:51:06| Done reading /var/spool/squid swaplog (0 entries)
2012/04/25 12:51:06| Finished rebuilding storage from disk.
2012/04/25 12:51:06| 0 Entries scanned
2012/04/25 12:51:06| 0 Invalid entries.
2012/04/25 12:51:06| 0 With invalid flags.
2012/04/25 12:51:06| 0 Objects loaded.
2012/04/25 12:51:06| 0 Objects expired.
2012/04/25 12:51:06| 0 Objects cancelled.
2012/04/25 12:51:06| 0 Duplicate URLs purged.
2012/04/25 12:51:06| 0 Swapfile clashes avoided.
2012/04/25 12:51:06|   Took 0.3 seconds (   0.0 objects/sec).
2012/04/25 12:51:06| Beginning Validation Procedure
2012/04/25 12:51:06|   Completed Validation Procedure
2012/04/25 12:51:06|   Validated 0 Entries
2012/04/25 12:51:06|   store_swap_size = 0k
2012/04/25 12:51:07| storeLateRelease: released 0 objects

At this point I lose internet access, and it does not change when I 
switch to transparent mode. I am not aware of anything else running 
on port 3128, and netstat -an -t | grep 3128 shows nothing.


If it helps at all, this is my squid.conf:

acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.0/8
acl webconfig_lan src 192.168.2.0/24 192.168.3.0/24  192.168.10.0/24
acl webconfig_to_lan dst 192.168.2.0/24 192.168.3.0/24  192.168.10.0/24
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80        # http
acl Safe_ports port 21        # ftp
acl Safe_ports port 443       # https
acl Safe_ports port 70        # gopher
acl Safe_ports port 210       # wais
acl Safe_ports port 1025-65535    # unregistered ports
acl Safe_ports port 280       # http-mgmt
acl Safe_ports port 488       # gss-http
acl Safe_ports port 591       # filemaker
acl Safe_ports port 777       # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access allow webconfig_to_lan
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow webconfig_lan
http_access deny all
icp_access allow all
http_port 3128
hierarchy_stoplist cgi-bin ?
access_log /var/log/squid/access.log squid
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
coredump_dir /var/spool/squid
error_directory /etc/squid/errors
follow_x_forwarded_for allow localhost
http_port 192.168.3.1:3128 transparent
http_port 192.168.2.1:3128 transparent
http_port 127.0.0.1:3128 transparent

Can anyone help me, please?

Re: [squid-users] Squid Reconfigure ICAP Settings

2012-04-30 Thread Amos Jeffries

On 30/04/2012 8:37 p.m., Justin Lawler wrote:

Hi,

Will squid reconfigure ICAP settings if a 'squid -k reconfigure' is triggered?


Squid re-loads the whole config file, rotates the logs, and restarts all 
helper processes when reconfigure is triggered.




We want to know whether we can update ICAP ACL settings on the fly without 
restarting squid.


Look up the X-Next-Services feature of the ICAP protocol. I think you will 
find it far better for deciding what service(s) to run a request through 
than dynamically changing the Squid configuration file.
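
As a rough sketch of the Squid side (Squid 3.1-style adaptation syntax; 
the service name and URL here are placeholders, and routing=on is what 
permits the ICAP server to steer requests via X-Next-Services):

icap_enable on
icap_service svc_req reqmod_precache routing=on icap://icap.example.com:1344/reqmod
adaptation_access svc_req allow all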


Amos


Re: [squid-users] Local Client Access

2012-04-30 Thread Amos Jeffries

On 30/04/2012 11:56 p.m., Roman Gelfand wrote:

My client access configuration is as follows.

always_direct allow all
http_access allow all

# Squid normally listens to port 3128
http_port 3128 ssl-bump generate-host-certificates=on
dynamic_cert_mem_cache_size=4MB cert=/usr/local/squid/etc/host.pem

url_rewrite_children 64

url_rewrite_program /usr/local/bin/squidGuard -c
/usr/local/squidGuard/squidGuard.conf


It appears that, when sending outgoing requests, the http header is from
the original host.  I'm guessing this is why it is called a transparent
proxy.


There is nothing of transparent proxying in this config.
* You have ssl-bump decryption of CONNECT requests.
* You have a re-writer/redirector altering the traffic URLs.

Transparent means the requests are not altered.


   It seems that this causes routing problems.  Could you tell me
where I am going wrong here?


Could you please explain the problem?
 And also give an indication of what Squid version you are talking 
about please.


Amos


[squid-users] Local Client Access

2012-04-30 Thread Roman Gelfand
My client access configuration is as follows.

always_direct allow all
http_access allow all

# Squid normally listens to port 3128
http_port 3128 ssl-bump generate-host-certificates=on
dynamic_cert_mem_cache_size=4MB cert=/usr/local/squid/etc/host.pem

url_rewrite_children 64

url_rewrite_program /usr/local/bin/squidGuard -c
/usr/local/squidGuard/squidGuard.conf


It appears that, when sending outgoing requests, the http header is from
the original host.  I'm guessing this is why it is called a transparent
proxy.  It seems that this causes routing problems.  Could you tell me
where I am going wrong here?

Thanks in advance


Re: [squid-users] Duplicate If-None-Match headers

2012-04-30 Thread Robert Collins
On Mon, Apr 30, 2012 at 11:32 PM, Andy Taylor  wrote:
> Hi,
>
> I'm having a number of problems with Squid at the moment with duplicate
> Etags in the headers. I'm using Squid as an accelerator to forward traffic
> to Apache, which serves up a Drupal installation.
>
> After roughly 3 days, a number of pages on the site start to fail with 400
> Bad Request errors; it starts with just a few and then slowly spreads to
> more pages. I did a tcpdump of the requests coming from Squid to Apache, and
> Apache is spitting out a 400 error because of the header size. Hundreds of
> etags are appearing in the If-None-Match headers field, which hits Apache's
> header size limit, causing the error. The only way I've found to 'fix' this
> so far is to either:

So, this is probably poor behaviour out of Drupal. Squid believes that
there are hundreds of different versions of that page, all equally
likely to be validated and used as a response by the backend. We
probably want a cap on the number of variants we support, or at least
a knob to set it.

I'd look at your backend behaviour though - even with a knob, you're
still wasting a lot of processing.
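
(For instance, if the variant explosion is driven by backend-generated
ETags and you just need to stop the bleeding, one blunt workaround is to
stop emitting ETags from Apache entirely; an untested sketch, assuming
mod_headers is loaded:

FileETag None
Header unset ETag

That trades ETag validation for Last-Modified validation, so treat it as
a diagnostic aid rather than a fix.)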

-Rob


[squid-users] Duplicate If-None-Match headers

2012-04-30 Thread Andy Taylor

Hi,

I'm having a number of problems with Squid at the moment with duplicate 
Etags in the headers. I'm using Squid as an accelerator to forward 
traffic to Apache, which serves up a Drupal installation.


After roughly 3 days, a number of pages on the site start to fail with 
400 Bad Request errors; it starts with just a few and then slowly 
spreads to more pages. I did a tcpdump of the requests coming from Squid 
to Apache, and Apache is spitting out a 400 error because of the header 
size. Hundreds of etags are appearing in the If-None-Match headers 
field, which hits Apache's header size limit, causing the error. The 
only way I've found to 'fix' this so far is to either:


1. Flush Squid cache entirely
2. Purge the affected pages

But then after a few days the problem comes back again. I've been using 
Squid as an accelerator to Drupal installations for years and this 
hasn't happened before. I'm using the following version of Squid:


Squid Cache: Version 2.6.STABLE21

which is the latest version available in the CentOS 5 repositories. The 
only difference between this installation of Squid/Apache/Drupal and 
others which have worked fine in the past is the version of Drupal - 
Drupal 7. Supposedly Drupal 7 has significantly altered cache handling, 
but I can't work out why this would cause this problem with Squid.


The only thing I can think of at the moment is something to do with 
Squid's cache rotation (specifically the LRU functionality), so that 
when Squid rotates its cache, something ends up corrupted or malformed.


Any help or suggestions would be much appreciated!

Thanks,

Andy Taylor


[squid-users] Squid Reconfigure ICAP Settings

2012-04-30 Thread Justin Lawler
Hi,

Will squid reconfigure ICAP settings if a 'squid -k reconfigure' is triggered?

We want to know whether we can update ICAP ACL settings on the fly without 
restarting squid.

Thanks and regards,
Justin



[squid-users] A Web site view problem , diffrences through 2 squid

2012-04-30 Thread a bv
Hi,

There are 2 Squids running behind 2 different firewalls and 2 different
internet connections (same ISP, different network ranges). Users
report that they get problems viewing a particular site.

When I look at that site through the 2 proxies and with different browsers
I get different results (and they keep changing). In particular I get some
errors about the web site's scripts from IE, and when I switch the proxy
in IE I see different results. Sometimes I get a 400 error, and after I
clear the browser's cache and request the site again it comes back. One
firewall has an IPS running and one does not, but I couldn't find anything
in the IPS logs either. The web site's owners haven't answered my
questions yet. What do you recommend to analyse and fix the issue? Other
sites are viewed fine through both of them.


Regards


RE: SPAM: Re: [squid-users] Cache that will not grow in size

2012-04-30 Thread Mark Engels
Thanks for the reply Eliezer.

I've had a read through the more readable explanation and, well, it definitely 
was more readable. I think I'll need to re-read the percentage field a few more 
times before I grasp it completely (rather busy here). However, I've taken your 
advice on board and modified the values to read 8640 90% 43800. I also added 
ipa and dmg file extensions to the original flv pattern, as they were up there 
among our users' more frequent requests. Originally I had copied the values 
from a blog somewhere on the net, so I wasn't aware there was a maximum value.

Tracking our users' web history per site is rather tricky, as we have 1300 users 
using approximately 700 GB per month and there are constantly changing web 
filters, so what shows as a high-use site may actually be blocked by the time I 
go to change any refresh patterns. That said, I get fairly good reports from 
our ISA server, which sits one step up.

Would you have any other suggestions?


-Original Message-
From: Eliezer Croitoru [mailto:elie...@ngtech.co.il]
Sent: Friday, 27 April 2012 4:13 PM
To: squid-users@squid-cache.org
Subject: SPAM: Re: [squid-users] Cache that will not grow in size

On 27/04/2012 08:37, Mark Engels wrote:
> Hello everyone,
>
> I've been working on a squid cache appliance for a few weeks now (on and off) 
> and things appear to be working. However, I seem to have an issue where the 
> cache simply refuses to grow. My first attempt had the cache stall at 
> 2.03 GB, and with this latest build I'm stalling at 803 MB.
>
> I haven't a clue where to go or what to look at to determine what could be 
> wrong, and I'm hoping you could be of assistance ☺ Also, any tips for 
> better performance or improved caching would be greatly appreciated. 
> (Yes, I have googled, and I think I've applied what I could, but it's a 
> little over my head a few weeks in with no deep linux experience.)
>
>
> Some facts:
>
> I've been determining the cache size with the following command:
> du -hs /var/spool/squid
> Squid is version 3.1.10, running on a CentOS 6.2 machine.
> CentOS is running in a Hyper-V virtual machine with integration services 
> installed; the VM has 4 GB RAM and a 60 GB HDD allocated.
> Squid is acting as a cache/error-page-handler box only. There is the 
> main proxy sitting one step downstream, with squid set up in a "T" 
> network (the main cache can skip squid and go direct to the net if 
> squid falls over on me; a Hyper-V issue).
>
>
> Config file:
>
> acl downstream src 192.168.1.2/32
> http_access allow downstream
>
> cache_mgr protectedem...@moc.sa.edu.au
>
> <  all the standard acl rules here>
>
> http_access allow localnet
> http_access allow localhost
> http_access deny all
>
> # Squid normally listens to port 3128
> http_port 8080
>
> # We recommend you to use at least the following line.
> hierarchy_stoplist cgi-bin ?
>
> # Uncomment and adjust the following to add a disk cache directory.
> cache_dir ufs /var/spool/squid 3 16 256
>
> # Leave coredumps in the first cache dir
> coredump_dir /var/spool/squid
>
> # Change maximum object size
> maximum_object_size 4 GB
>
> # Define max cache_mem
> cache_mem 512 MB
>
> #Lousy attempt at youtube caching
> quick_abort_min -1 KB
> acl youtube dstdomain .youtube.com
> cache allow youtube
>
From the refresh patterns below, it seems you might not quite understand the 
meaning of the pattern syntax and options.
The first thing I suggest is to look at:
http://www.squid-cache.org/Doc/config/refresh_pattern/
A more "readable" place is:
http://etutorials.org/Server+Administration/Squid.+The+definitive+guide/Chapter+7.+Disk+Cache+Basics/7.7+refresh_pattern/

Try to read it once or twice so you will know how to benefit from it.
Also try to read some info about caching here:
http://www.mnot.net/cache_docs/
And a tool that will help you analyze pages for cacheability is redbot:
http://redbot.org/

There is a maximum time that an object can stay in a cache server; it is a 
cache server, not a hosting service.
The max is 365 days (a total of 525600 minutes), if I remember right, so it is 
useless to use "99" as a max time for object freshness.
If you want to cache youtube videos you lack a little bit of knowledge about 
it yet, so just start with basic caching tweaks.
You should also check your users' browsing habits in order to gain maximum 
cache efficiency; until you have solid caching goals you won't need to shoot 
so hard.
One very good tool for analyzing your users' habits is "sarg".
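
So, for example, a capped version of the flv pattern (with the ipa and dmg 
extensions you mentioned) would look something like this; an illustration 
only, 525600 minutes being the 365-day ceiling:

refresh_pattern -i \.(flv|ipa|dmg)$ 10080 90% 525600 override-expire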

If you need some help I can assist you with it.
Just as an example, this site: http://djmaza.com/ is heaven for a cache proxy 
server, but until you analyze it you won't know what to do with it.
You can see in this link:
http://redbot.org/?descend=True&uri=http://djmaza.com/
how the page is built.

Regards,
Eliezer

> # Add any of your own refresh_pattern entries above these.
> refresh_pattern -i \.flv$ 10080 90% 99 ignore-no-cache
> override-expire i