[squid-users] time error squid

2012-03-31 Thread Jose R. Cristo Almaguer
Hello, I use Squid 3.1.19 and everything works fine, but I have a problem with
the time: the Squid error page shows a time that is neither the system time nor
the BIOS time. Any ideas?


Greetings, joSE;






Re: [squid-users] Opening a specific port

2012-03-31 Thread Amos Jeffries

On 30/03/2012 10:10 p.m., a bv wrote:

Hi,

I want to give a user access to a specific port on a specific destination
through Squid. After adding that port to the Safe_ports section and doing a
reconfigure, the user still gets denied for that destination and port.
The user has a general http_access allow rule.


Conclusion: you have a rule denying that user access to that port.

If you want assistance debugging your access controls, you will need to
post them so the helpers can see them and tell you where the problem(s) are.


Amos


Re: [squid-users] Allowing linked sites - NTLM and un-authenticated users

2012-03-31 Thread Amos Jeffries

On 30/03/2012 11:45 p.m., Jasper Van Der Westhuizen wrote:

Hi everyone

I've been struggling to get a very specific setup going.

Some background:  Our users are split into Internet users and Non-Internet 
users. Everyone in a specific AD group is allowed to have full internet access. I have two SQUID 
proxies with squidGuard load balanced with NTLM authentication to handle the group authentication. 
All traffic also then gets sent to a cache peer.

This is basically what I need:
1. All users (internet and non-internet) must be able to access sites in
/etc/squid/lists/whitelist.txt
2. If a user wants to access any external site that is not in the whitelist
then he must be authenticated. Obviously a non-internet user can try until he
is blue in the face; it won't work.

These two scenarios are working 100%, except for one irritating bit. Most of
the whitelisted sites have linked sites like Facebook, Twitter or YouTube in
them that load icons, graphics, ads etc. This causes an auth prompt for
non-internet users. I can see the requests in the logs being DENIED.

The only way I could think of to get rid of these errors was to implement an
http_access deny !whitelist after the allow. This works great for
non-internet users and it blocks all the linked sites without asking them to
authenticate, but obviously it breaks access to all other sites for
authenticated users (access denied for all sites).


You can use the all hack and two login lines:

http_access allow whitelist
# allow authed users, but don't challenge if auth is missing
http_access allow authed all
# block access to some sites unless already logged in
http_access deny blacklist
http_access deny !authed


The authed users may still have problems logging in if the first site
they visit is one of the blacklisted ones. But if they visit another
page first they can log in and get there.
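
For completeness, a minimal sketch of the ACL definitions those rules assume.
The whitelist path is the one from your mail; the blacklist file and the NTLM
helper line are illustrative and will differ in your setup:

# the sites everyone may reach (path from your setup)
acl whitelist dstdomain "/etc/squid/lists/whitelist.txt"
# hypothetical list of the linked sites to silence for unauthenticated users
acl blacklist dstdomain "/etc/squid/lists/blacklist.txt"
# any user who passed the NTLM group authentication
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
acl authed proxy_auth REQUIRED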



Amos


Re: [squid-users] Squid Reverse Proxy (accel) always contacting the server

2012-03-31 Thread Amos Jeffries

On 30/03/2012 12:47 p.m., Daniele Segato wrote:

Hi,

This is what I want to obtain:

Environment:
* everything on the same machine (Debian GNU/Linux)
* server running on tomcat, port 8080
* squid running on port 280
* client can be anywhere, but for now it's on the localhost machine too

I want to set up an HTTP cache in front of my tomcat server to reduce the load
on it.


And I expect to obtain a result like this:

First request
1. 9:00 AM (today): the client sends GET http://localhost:280/myservice
2. squid receives the request, nothing in cache, contacts my server
3. tomcat replies with a 200, the body and some headers:
Cache-Control: public, max-age=3600
Last-Modified: //8:00 AM//
4. squid stores that result in cache; it should be valid until 10:00
AM (today) = 9:00 AM (time of the request) + 3600 seconds (max-age)

5. the client receives the response

Second request:
1. 9:05 AM (today): the client sends GET http://localhost:280/myservice with header

If-Modified-Since: //8:00 AM//
2. squid receives the request, sees 9:05 AM < 10:00 AM --> cache hit, 304
3. the client receives the 304 response

Third request (after 10:00 AM)
1. 10:05 AM (today): the client sends GET http://localhost:280/myservice with header

If-Modified-Since: //8:00 AM//
2. squid receives the request, sees 10:05 AM > 10:00 AM --> time to check
whether the server has a new version, forwards the If-Modified-Since request
to the server
3. suppose the resource has not changed: tomcat replies with a 304 Not
Modified, again with headers:

Cache-Control: public, max-age=3600
Last-Modified: //8:00 AM//
4. squid updates the cached entry to be valid until 11:05 AM
(today) = 10:05 AM (time of the request) + 3600 seconds (max-age)

5. the client receives the response: 304 Not Modified



Instead squid is ALWAYS requesting the resource from the server:
$ curl -v -H 'If-Modified-Since: Thu, 29 Mar 2012 22:14:20 GMT' 
'http://localhost:280/alfresco/service/catalog/products'


* About to connect() to localhost port 280 (#0)
*   Trying 127.0.0.1...
* connected
* Connected to localhost (127.0.0.1) port 280 (#0)

> GET /alfresco/service/catalog/products HTTP/1.1
> User-Agent: curl/7.24.0 (x86_64-pc-linux-gnu) libcurl/7.24.0 OpenSSL/1.0.0h zlib/1.2.6 libidn/1.24 libssh2/1.2.8 librtmp/2.3
> Host: localhost:280
> Accept: */*
> If-Modified-Since: Thu, 29 Mar 2012 22:14:20 GMT
>
* additional stuff not fine transfer.c:1037: 0 0
* HTTP 1.0, assume close after body
< HTTP/1.0 304 Not Modified
< Date: Thu, 29 Mar 2012 23:27:57 GMT
< Cache-Control: public, max-age=3600
< Last-Modified: Thu, 29 Mar 2012 22:14:20 GMT




   max-age

  The max-age response directive indicates that the response is to
  be considered stale after its age is greater than the specified
  number of seconds.



The logic goes like this:

  Object modified ... 22:14:20
  Valid   +3600
   == fresh until 23:14:20
  Current time: 23:27:57

   23:14:20 < 23:27:57 == currently stale. must revalidate.

The Expires header can be used to set an absolute time for invalidation.
max-age is relative to age.
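
If it helps to see that arithmetic spelled out, here is the same check done
with GNU date (times copied from the exchange above; purely illustrative):

  last_modified=$(date -u -d 'Thu, 29 Mar 2012 22:14:20 GMT' +%s)
  now=$(date -u -d 'Thu, 29 Mar 2012 23:27:57 GMT' +%s)
  age=$(( now - last_modified ))   # 4417 seconds since the object last changed
  max_age=3600
  [ "$age" -gt "$max_age" ] && echo "stale: must revalidate" || echo "still fresh"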


Amos


Re: [squid-users] limiting connections

2012-03-31 Thread Amos Jeffries

On 31/03/2012 3:07 a.m., Carlos Manuel Trepeu Pupo wrote:


Now I have the following question:
The possible answers to return are 'OK' or 'ERR'; if I treat them as a
Boolean answer, OK -> TRUE and ERR -> FALSE. Is this right?


Equivalent, yes. Specifically it means success / failure or match / 
non-match on the ACL.



So, if I deny my acl:
http_access deny external_helper_acl

it works like this (with the http_access below):
If it returns OK -> it is denied
If it returns ERR -> it is not denied

Is that right??? Thanks again for the help!!!


Correct.

Amos



Re: [squid-users] Kernel module uses in squid

2012-03-31 Thread Amos Jeffries

On 31/03/2012 12:41 a.m., parashuram lamani wrote:

Hello all,
Does Squid make use of any kernel module in its implementation?
We are trying to write a native CoAP protocol implementation in Squid;
do we need to take care of such kernel-level programming?


Parashuram
Systems Engineer
Accord Software & Systems Pvt. Ltd


Squid makes use of whatever is coded for use. Most of Squid uses the
kernel socket API, just like any other network software.


Amos


Re: [squid-users] time error squid

2012-03-31 Thread Amos Jeffries

On 31/03/2012 8:05 p.m., Jose R. Cristo Almaguer wrote:

Hello, I use Squid 3.1.19 and everything works fine, but I have a problem with
the time: the Squid error page shows a time that is neither the system time nor
the BIOS time. Any ideas?


The error pages are supposed to be UTC. Unless you changed them to be 
local timezone or something strange.


Amos


RE: [squid-users] time error squid

2012-03-31 Thread Jose R. Cristo Almaguer
How do I do that? At first I figured it was taking the time from hwclock,
but then I changed that time and it stays the same. All the logs have the
wrong time and I don't know how to set the correct time. Greetings, joSE;

-----Original Message-----
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: Saturday, March 31, 2012 3:22
To: squid-users@squid-cache.org
Subject: Re: [squid-users] time error squid

On 31/03/2012 8:05 p.m., Jose R. Cristo Almaguer wrote:
Hello, I use Squid 3.1.19 and everything works fine, but I have a problem with
the time: the Squid error page shows a time that is neither the system time nor
the BIOS time. Any ideas?

The error pages are supposed to be UTC. Unless you changed them to be 
local timezone or something strange.

Amos




Re: [squid-users] Squid Reverse Proxy (accel) always contacting the server

2012-03-31 Thread Daniele Segato

On 03/31/2012 10:13 AM, Amos Jeffries wrote:

On 30/03/2012 12:47 p.m., Daniele Segato wrote:

Instead squid is ALWAYS requiring the resource to the server:
$ curl -v -H 'If-Modified-Since: Thu, 29 Mar 2012 22:14:20 GMT'
'http://localhost:280/alfresco/service/catalog/products'

* About to connect() to localhost port 280 (#0)
* Trying 127.0.0.1...
* connected
* Connected to localhost (127.0.0.1) port 280 (#0)

> GET /alfresco/service/catalog/products HTTP/1.1
> User-Agent: curl/7.24.0 (x86_64-pc-linux-gnu) libcurl/7.24.0 OpenSSL/1.0.0h zlib/1.2.6 libidn/1.24 libssh2/1.2.8 librtmp/2.3
> Host: localhost:280
> Accept: */*
> If-Modified-Since: Thu, 29 Mar 2012 22:14:20 GMT
>
* additional stuff not fine transfer.c:1037: 0 0
* HTTP 1.0, assume close after body
< HTTP/1.0 304 Not Modified
< Date: Thu, 29 Mar 2012 23:27:57 GMT
< Cache-Control: public, max-age=3600
< Last-Modified: Thu, 29 Mar 2012 22:14:20 GMT




max-age

The max-age response directive indicates that the response is to
be considered stale after its age is greater than the specified
number of seconds.



The logic goes like this:

Object modified ... 22:14:20
Valid +3600
== fresh until 23:14:20
Current time: 23:27:57

23:14:20 < 23:27:57 == currently stale. must revalidate.

The Expires header can be used to set an absolute time for invalidation.
max-age is relative to age.



Hi Amos,

My content was last modified at 22:14:20.
But I made two successive requests, one at 23:27:00 and one at 23:27:20.

The first one (23:27:00) was a cache miss;
the second is what you see above.

You are saying that max-age is added to the last-modified date,
but that doesn't make much sense to me.

If the server (parent cache) is returning the content at 23:27:00 saying
max-age 3600, I would expect that 3600 to start from then.



Anyway, I thought about this before, and I also tried modifying the
content and then immediately making two requests to squid.


This time, suppose:

  Object modified ... 00:00:00
  Valid   +3600
   == fresh until 01:00:00
  Current time: 00:05:00

   01:00:00 > 00:05:00 == currently fresh. shouldn't bother the server.

Instead, what's actually happening is that squid is making a request to my
server, headers only, but it's still making it.


To compute the Last-Modified date my server has to do all the work of
collecting the data, looping over each data element, extracting its
last-modified date, and then computing the most recent one. It then builds
a model that is rendered; the output is pretty short anyway since it's gzipped text.


So the expensive part for my server is collecting the data, and it has
to do that whether you issue a GET or a HEAD request.


I would like squid to revalidate with my server every minute or so (even
every 10 seconds is fine), but it shouldn't revalidate on every single
request it receives.


I hope I made my point.

I wanted to give you an example but now squid is always giving me a TCP_MISS

# squid3 -k debug && curl -v 'http://localhost:280/alfresco/service/catalog/products'; squid3 -k debug

* About to connect() to localhost port 280 (#0)
*   Trying 127.0.0.1...
* connected
* Connected to localhost (127.0.0.1) port 280 (#0)
> GET /alfresco/service/catalog/products HTTP/1.1
> User-Agent: curl/7.24.0 (x86_64-pc-linux-gnu) libcurl/7.24.0 OpenSSL/1.0.0h zlib/1.2.6 libidn/1.24 libssh2/1.2.8 librtmp/2.3
> Host: localhost:280
> Accept: */*
>
* additional stuff not fine transfer.c:1037: 0 0
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< Date: Sat, 31 Mar 2012 14:53:51 GMT
< Content-Language: en_US
< Cache-Control: public, max-age=3600
< Last-Modified: Sat, 31 Mar 2012 14:03:55 +
< Vary: Accept, Accept-Language
< Content-Type: application/json;charset=UTF-8
< Content-Length: 1668
< Server: Jetty(6.1.21)
< X-Cache: MISS from localhost
< X-Cache-Lookup: MISS from localhost:280
< Via: 1.0 localhost (squid/3.1.19)
* HTTP/1.0 connection set to keep alive!
< Connection: keep-alive



in the debug log I see:


2012/03/31 16:53:51.696| getDefaultParent: returning localhost
2012/03/31 16:53:51.696| peerAddFwdServer: adding localhost DEFAULT_PARENT
2012/03/31 16:53:51.696| peerSelectCallback: 
http://localhost/alfresco/service/catalog/products
2012/03/31 16:53:51.696| fwdStartComplete: 
http://localhost/alfresco/service/catalog/products
2012/03/31 16:53:51.696| fwdConnectStart: 
http://localhost/alfresco/service/catalog/products
2012/03/31 16:53:51.696| 
PconnPool::key(flexformAccel,8080,localhost,[::]is 
{flexformAccel:8080/localhost}
2012/03/31 16:53:51.696| PconnPool::pop: found 
myfAccel:8080/localhost(to use)


[...]

2012/03/31 16:53:52.159| mem_hdr::write: [249,251) object end 249
2012/03/31 16:53:52.159| storeSwapOut: 
http://localhost/alfresco/service/catalog/products

2012/03/31 16:53:52.159| storeSwapOut: store_status = STORE_PENDING
2012/03/31 16:53:52.159| store_swapout.cc(190) swapOut: storeSwapOut:
mem->inmem_lo = 0
2012/03/31 16:53:52.159| store_swapout.cc(191) swapOut: storeSwapOut:
mem->endOffset() = 251
2012/03/31 16:53:52.159| 

Re: [squid-users] Squid Reverse Proxy (accel) always contacting the server

2012-03-31 Thread Daniele Segato

On 03/31/2012 05:01 PM, Daniele Segato wrote:

On 03/31/2012 10:13 AM, Amos Jeffries wrote:

max-age

The max-age response directive indicates that the response is to
be considered stale after its age is greater than the specified
number of seconds.



The logic goes like this:

Object modified ... 22:14:20
Valid +3600
== fresh until 23:14:20
Current time: 23:27:57

23:14:20 < 23:27:57 == currently stale. must revalidate.

The Expires header can be used to set an absolute time for invalidation.
max-age is relative to age.


Ok I think I now understood you...



you are saying that max-age is added to last modified date
but that doesn't make much sense to me.

If the server (parent cache) is returning the content at 23:27:00 saying
max-age 3600 I would expect that 3600 start from now.




anyway, I thought about this before and I also tried to modify the
content, then immediately giving two request to squid.


apparently this was caused by a mistake I did with the server (see below)



this time, suppose:

Object modified ... 00:00:00
Valid +3600
== fresh until 01:00:00
Current time: 00:05:00

01:00:00 > 00:05:00 == currently fresh. shouldn't bother the server.

instead what's actually happening is that squid is doing a request to my
server, only header, but it's still doing it.

My server, to compute the Last-Modified date has to do all the job of
collecting the data, looping to each data element and extract, for each,
the last modified date, then compute the last one.. it build a model
that is then rendered: it's pretty short anyway since it's gzipped text.

So the big work of my server is to collect the data, and my server have
to do it both if you do a GET both if you do an HEAD request.

I would like squid to revalidate with my server every, say 1 minute,
even 10 seconds is ok.. but it shouldn't revalidate every single request
it is receiving.

I hope I made my point.



This question still stands :)




I wanted to give you an example but now squid is always giving me a
TCP_MISS


this was my mistake, the Last-Modified date format was wrong from server :)

please ignore the debug output and everything beyond this point in my previous
email...


Now it's giving cache hits in ram!


I think I can summarize my question in these two questions:
1) Can I make squid3 revalidate the cache with my server every, say, 1
minute (at most) but otherwise use its cache without bothering the
server (not even for headers)? How?


Not calling the server for a full hour is, I think, a bit too much:
the content can change in the meantime and I don't want the user to
wait an hour for it.


On the other hand, once that hour has passed I don't want every single
request to make squid contact my server to check whether the last-modified
date has changed.




2) What is the best way to debug why squid3 decides to keep a cache
entry or to contact the server? Looking at the huge debug log is not
very simple; maybe some log option to filter it down to just the cache
decision information would help.




Thanks and sorry for the previous message


[squid-users] Re: Allowing linked sites - NTLM and un-authenticated users

2012-03-31 Thread sichent

Hi Jasper,

Why not enable authentication for everyone (possibly single sign-on),
group the users into groups, and use an external ACL to separate access rights?


In that case no one will get authentication popups, and blocked
sites will be clearly indicated with a "Cache access denied" message.
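
A rough sketch of what that could look like in squid.conf (assuming the Samba
wbinfo group helper shipped with Squid; the helper path and the AD group name
are illustrative):

external_acl_type ad_group %LOGIN /usr/lib/squid3/wbinfo_group.pl
acl internet_users external ad_group InternetUsers
http_access allow internet_users
http_access deny all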


Sorry if I miss something :)
Best regards,
sich



[squid-users] Startup error with client request buffer

2012-03-31 Thread Guillaume Hilt

Hello,

I'm running Squid 3.1.14 (the latest available version) on Ubuntu 11.10 AMD64.
When I try to run it, it fails with this error:
2012 Mar 31 17:30:44 rendez-vous Client request buffer of 524288 bytes 
cannot hold a request with 1048576 bytes of headers. Change 
client_request_buffer_max or request_header_max_size limits.
FATAL: Client request buffer of 524288 bytes cannot hold a request with 
1048576 bytes of headers. Change client_request_buffer_max or 
request_header_max_size limits.

Squid Cache (Version 3.1.14): Terminated abnormally.
CPU Usage: 0.012 seconds = 0.012 user + 0.000 sys
Maximum Resident Size: 16032 KB
Page faults with physical i/o: 0

client_request_buffer_max_size and request_header_max_size are set to 
2048kB.


Here's my conf :

auth_param basic program /usr/lib/squid3/squid_db_auth --user squid 
--password X --plaintext --persist

auth_param basic children 5
auth_param basic realm Squid
auth_param basic credentialsttl 2 hours
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl FTP proto FTP
acl SSL_ports port 443 21 20
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl Safe_ports port 901         # SWAT
acl purge method PURGE
acl CONNECT method CONNECT
acl My_ports port 80 21 6667
acl db-auth proxy_auth REQUIRED
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow FTP
http_access allow purge localhost
http_access deny purge
http_access deny CONNECT !SSL_ports
http_access allow db-auth
http_access allow localhost
http_access deny all
http_reply_access allow all
icp_access allow localnet
icp_access deny all
htcp_access allow localnet
htcp_access deny all
http_port XX.XX.XX.XX:23
http_port XX.XX.XX.XX:80
hierarchy_stoplist cgi-bin ?
maximum_object_size_in_memory 1 KB
maximum_object_size 1 KB
log_ip_on_direct off
coredump_dir /var/spool/squid3
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320
request_header_max_size 2048 KB
reply_header_max_size 2048 KB
client_request_buffer_max_size 2048 KB
request_header_access Allow allow all
request_header_access Authorization allow all
request_header_access WWW-Authenticate allow all
request_header_access Proxy-Authorization allow all
request_header_access Proxy-Authenticate allow all
request_header_access Cache-Control allow all
request_header_access Content-Encoding allow all
request_header_access Content-Length allow all
request_header_access Content-Type allow all
request_header_access Date allow all
request_header_access Expires allow all
request_header_access Host allow all
request_header_access If-Modified-Since allow all
request_header_access Last-Modified allow all
request_header_access Location allow all
request_header_access Pragma allow all
request_header_access Accept allow all
request_header_access Accept-Charset allow all
request_header_access Accept-Encoding allow all
request_header_access Accept-Language allow all
request_header_access Content-Language allow all
request_header_access Mime-Version allow all
request_header_access Retry-After allow all
request_header_access Title allow all
request_header_access Connection allow all
request_header_access Proxy-Connection allow all
request_header_access Cookie allow all
request_header_access Set-Cookie allow all
request_header_access User-Agent allow all
request_header_access All deny all
httpd_suppress_version_string on
always_direct allow FTP
forwarded_for delete
client_db off
cache_access_log /dev/null
cache_store_log /dev/null

Any idea ?

Regards,

--
  Guillaume Hilt



Re: [squid-users] limiting connections

2012-03-31 Thread Carlos Manuel Trepeu Pupo
On Sat, Mar 31, 2012 at 4:18 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 31/03/2012 3:07 a.m., Carlos Manuel Trepeu Pupo wrote:


 Now I have the following question:
The possible answers to return are 'OK' or 'ERR'; if I treat them as a
Boolean answer, OK -> TRUE and ERR -> FALSE. Is this right?


 Equivalent, yes. Specifically it means success / failure or match /
 non-match on the ACL.


 So, if I deny my acl:
 http_access deny external_helper_acl

it works like this (with the http_access below):
If it returns OK -> it is denied
If it returns ERR -> it is not denied

Is that right??? Thanks again for the help!!!


 Correct.

OK, following the idea of this thread that's what I have:

#!/bin/bash
while read line; do
# - This is for debugging (testing, I saw that it does not always save to
# the file; maybe it does not always pass through this ACL)
echo "$line" >> /home/carlos/guarda

result=`squidclient -h 10.11.10.18 mgr:active_requests | grep -c "$line"`

  if [ "$result" -eq 1 ]
then
echo 'OK'
echo 'OK' >> /home/carlos/guarda
  else
echo 'ERR'
echo 'ERR' >> /home/carlos/guarda
  fi
done

In the squid.conf this is the configuration:

acl test src 10.11.10.12/32
acl test src 10.11.10.11/32

acl extensions url_regex /etc/squid3/extensions
# extensions contains:
\.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|mpg|wma|ogg|wmv|asx|asf|deb|rpm|exe|zip|tar|tgz|rar|ppt|doc|tiff|pdf)$
external_acl_type one_conn %URI /home/carlos/contain
acl limit external one_conn

http_access allow localhost
http_access deny extensions !limit
deny_info ERR_LIMIT limit
http_access allow test


I start to download from:
10.11.10.12 ->
http://ch.releases.ubuntu.com//oneiric/ubuntu-11.10-desktop-i386.iso
then start from:
10.11.10.11 ->
http://ch.releases.ubuntu.com//oneiric/ubuntu-11.10-desktop-i386.iso

And it lets me download. What am I missing???


# -

http_access deny all




 Amos



Re: [squid-users] Squid Reverse Proxy (accel) always contacting the server

2012-03-31 Thread Amos Jeffries

On 1/04/2012 3:53 a.m., Daniele Segato wrote:

On 03/31/2012 05:01 PM, Daniele Segato wrote:

On 03/31/2012 10:13 AM, Amos Jeffries wrote:

max-age

The max-age response directive indicates that the response is to
be considered stale after its age is greater than the specified
number of seconds.



The logic goes like this:

Object modified ... 22:14:20
Valid +3600
== fresh until 23:14:20
Current time: 23:27:57

23:14:20 < 23:27:57 == currently stale. must revalidate.

The Expires header can be used to set an absolute time for invalidation.
max-age is relative to age.


Ok I think I now understood you...



you are saying that max-age is added to last modified date
but that doesn't make much sense to me.

If the server (parent cache) is returning the content at 23:27:00 saying
max-age 3600 I would expect that 3600 start from now.




anyway, I thought about this before and I also tried to modify the
content, then immediately giving two request to squid.


apparently this was caused by a mistake I did with the server (see below)



this time, suppose:

Object modified ... 00:00:00
Valid +3600
== fresh until 01:00:00
Current time: 00:05:00

01:00:00 > 00:05:00 == currently fresh. shouldn't bother the server.

instead what's actually happening is that squid is doing a request to my
server, only header, but it's still doing it.

My server, to compute the Last-Modified date has to do all the job of
collecting the data, looping to each data element and extract, for each,
the last modified date, then compute the last one.. it build a model
that is then rendered: it's pretty short anyway since it's gzipped text.

So the big work of my server is to collect the data, and my server have
to do it both if you do a GET both if you do an HEAD request.

I would like squid to revalidate with my server every, say 1 minute,
even 10 seconds is ok.. but it shouldn't revalidate every single request
it is receiving.

I hope I made my point.



this question is still in place :)



Revalidation is more of a threshold which gets set on each object. Under
the threshold no validation takes place; above it every request gets
validated. BUT ... a 304 response revalidating the object can change the
threshold by sending new timestamp and caching headers.






I wanted to give you an example but now squid is always giving me a
TCP_MISS


this was my mistake, the Last-Modified date format was wrong from 
server :)


please ignore the debug and everything behind this point in my 
previous email...


Now it's giving cache hits in ram!


I think I can summarize my question in this two questions:
1) can I make squid3 update the cache with my server every, say, 1 
minute (at most) but use it's cache otherwise without bothering the 
server (not even for headers)? how?


Avoiding to call the server for 1 hour, I think, it's a bit too much: 
the content can change in the meanwhile and I don't want the user to 
wait 1 hour for it.


On the other part I don't want every single request after that hour is 
pass to see squid contacting my server to check if the last modified 
date is changed.




You have the two options of max-age or Expires. The thing to remember is
to move the value / threshold forward to the next point where you
want revalidation to take place.


With max-age: a value N which you generate dynamically by calculating the
current age of the object when responding and adding 60.


With Expires: you simply emit a timestamp of now() + 60 seconds on each
response.
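
As a rough illustration of the Expires variant (GNU date syntax; the real
thing would be done inside the Tomcat service when it writes its response
headers, so everything here is only a sketch):

  expires=$(date -u -d '+60 seconds' '+%a, %d %b %Y %H:%M:%S GMT')
  printf 'Cache-Control: public, max-age=60\r\nExpires: %s\r\n' "$expires"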


Other useful things to know:
  Generating an ETag label for each unique output helps caches detect
unique versions without timestamp calculations. The easy ways to do this
are to make the ETag an MD5 hash of the body object, or a hash of the
Last-Modified timestamp string if the body is too expensive to compute an
MD5 for, or some other property of the resource which is guaranteed to
change any time the body changes and not otherwise.
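
For example (a sketch only; the file name is made up):

  # ETag from a hash of the body, or of the Last-Modified string when the
  # body is too costly to hash
  etag=$(md5sum response_body.json | cut -d' ' -f1)
  printf 'ETag: "%s"\r\n' "$etag"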


  Cache-Control:stale-while-revalidate tells caches to revalidate, but 
not to block the client response waiting for that validation to finish. 
Clients will get the old object until a new one or 304 is received back.
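
So a response combining the two ideas could carry something like the header
below (values purely illustrative; check that your Squid version actually
acts on stale-while-revalidate):

Cache-Control: public, max-age=60, stale-while-revalidate=30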





2) which is the best way to debug why squid3 is deciding to keep a 
cache entry, contact the server or not? looking at the huge debug log 
is not very simple maybe some log option to filter it with the cache 
decisions informations only would help


debug_options 22,3
... or maybe 22,5 if there is not enough at level 3.
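
For example, in squid.conf:

debug_options ALL,1 22,3

then watch cache.log for the refresh/staleness decisions.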

Amos


Re: [squid-users] Startup error with client request buffer

2012-03-31 Thread Amos Jeffries

On 1/04/2012 5:27 a.m., Guillaume Hilt wrote:

Hello,

I'm running Squid 3.1.14 (last available version) on Ubuntu 11.10 AMD64.
When i'm trying to run it, it fail with this error :
2012 Mar 31 17:30:44 rendez-vous Client request buffer of 524288 bytes 
cannot hold a request with 1048576 bytes of headers. Change 
client_request_buffer_max or request_header_max_size limits.
FATAL: Client request buffer of 524288 bytes cannot hold a request 
with 1048576 bytes of headers. Change client_request_buffer_max or 
request_header_max_size limits.

Squid Cache (Version 3.1.14): Terminated abnormally.
CPU Usage: 0.012 seconds = 0.012 user + 0.000 sys
Maximum Resident Size: 16032 KB
Page faults with physical i/o: 0

client_request_buffer_max_size and request_header_max_size are set to 
2048kB.


You need to be able to store all the headers, plus the request and
URL details, plus the HTTP framing bytes in the buffer at once. Having
the header size alone only just fit into the buffer is not enough space.


The other main question is why do you expect to have HTTP requests with 
2MB of *headers* arriving?


Unless you have a very good reason I recommend leaving them unset 
(default values). Squid has the highest limits around out of any of the 
HTTP middleware, there is very little chance that requests reaching or 
passing the Squid default limits will be able to pass over the Internet 
reliably even if you raise them for your proxy.
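
In other words, simply drop the overrides from squid.conf so the built-in
defaults apply:

# remove (or comment out) these three lines:
# request_header_max_size 2048 KB
# reply_header_max_size 2048 KB
# client_request_buffer_max_size 2048 KB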




Here's my conf :

auth_param basic program /usr/lib/squid3/squid_db_auth --user squid 
--password X --plaintext --persist

auth_param basic children 5
auth_param basic realm Squid
auth_param basic credentialsttl 2 hours
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
acl localnet src 10.0.0.0/8# RFC1918 possible internal network
acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
acl localnet src 192.168.0.0/16# RFC1918 possible internal 
network

acl FTP proto FTP
acl SSL_ports port 443 21 20
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443# https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210# wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280# http-mgmt
acl Safe_ports port 488# gss-http
acl Safe_ports port 591# filemaker
acl Safe_ports port 777# multiling http
acl Safe_ports port 901# SWAT
acl purge method PURGE
acl CONNECT method CONNECT
acl My_ports port 80 21 6667
acl db-auth proxy_auth REQUIRED
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow FTP


Um, unlimited FTP access for anyone on the Internet?  see below.


http_access allow purge localhost
http_access deny purge


You have HTCP half-enabled. If you finish that by opening a htcp_port 
and setting htcp_clr_access allow localhost you can probably drop 
PURGE support.



http_access deny CONNECT !SSL_ports


... you already blocked CONNECT !SSL_ports above.


http_access allow db-auth
http_access allow localhost



Allow access to logged-in users or any traffic arriving from localhost.
Notice how there is no mention of a particular protocol, such as FTP
versus HTTP versus HTTPS.
That means the rules above are already permitting your users access to
ftp://... without needing that dangerous http_access allow FTP line.
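
If you do want FTP gatewaying for your own users only, a hypothetical
tightening would be to tie it to your auth ACL instead of allowing it
outright:

# replaces the bare "http_access allow FTP"
http_access allow FTP db-auth
http_access deny FTP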



http_access deny all
http_reply_access allow all
icp_access allow localnet
icp_access deny all
htcp_access allow localnet
htcp_access deny all
http_port XX.XX.XX.XX:23
http_port XX.XX.XX.XX:80
hierarchy_stoplist cgi-bin ?
maximum_object_size_in_memory 1 KB
maximum_object_size 1 KB
log_ip_on_direct off
coredump_dir /var/spool/squid3
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320
request_header_max_size 2048 KB
reply_header_max_size 2048 KB
client_request_buffer_max_size 2048 KB
request_header_access Allow allow all
request_header_access Authorization allow all
request_header_access WWW-Authenticate allow all
request_header_access Proxy-Authorization allow all
request_header_access Proxy-Authenticate allow all
request_header_access Cache-Control allow all
request_header_access Content-Encoding allow all
request_header_access Content-Length allow all
request_header_access Content-Type allow all
request_header_access Date allow all
request_header_access Expires allow all
request_header_access Host allow all
request_header_access If-Modified-Since allow all
request_header_access Last-Modified allow all
request_header_access Location allow all
request_header_access Pragma allow all
request_header_access Accept allow all
request_header_access Accept-Charset allow 

Re: [squid-users] limiting connections

2012-03-31 Thread Amos Jeffries

On 1/04/2012 7:58 a.m., Carlos Manuel Trepeu Pupo wrote:

On Sat, Mar 31, 2012 at 4:18 AM, Amos Jeffries <squ...@treenet.co.nz> wrote:

On 31/03/2012 3:07 a.m., Carlos Manuel Trepeu Pupo wrote:


Now I have the following question:
The possible answers to return are 'OK' or 'ERR'; if I treat them as a
Boolean answer, OK -> TRUE and ERR -> FALSE. Is this right?


Equivalent, yes. Specifically it means success / failure or match /
non-match on the ACL.



So, if I deny my acl:
http_access deny external_helper_acl

it works like this (with the http_access below):
If it returns OK -> it is denied
If it returns ERR -> it is not denied

Is that right??? Thanks again for the help!!!


Correct.

OK, following the idea of this thread that's what I have:

#!/bin/bash
while read line; do
  # - This is for debugging (testing, I saw that it does not always save to
  # the file; maybe it does not always pass through this ACL)
  echo "$line" >> /home/carlos/guarda

  result=`squidclient -h 10.11.10.18 mgr:active_requests | grep -c "$line"`

    if [ "$result" -eq 1 ]
  then
  echo 'OK'
  echo 'OK' >> /home/carlos/guarda
    else
  echo 'ERR'
  echo 'ERR' >> /home/carlos/guarda
    fi
done

In the squid.conf this is the configuration:

acl test src 10.11.10.12/32
acl test src 10.11.10.11/32

acl extensions url_regex /etc/squid3/extensions
# extensions contains:
\.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|mpg|wma|ogg|wmv|asx|asf|deb|rpm|exe|zip|tar|tgz|rar|ppt|doc|tiff|pdf)$
external_acl_type one_conn %URI /home/carlos/contain
acl limit external one_conn

http_access allow localhost
http_access deny extensions !limit
deny_info ERR_LIMIT limit
http_access allow test


I start to download from:
10.11.10.12 ->
http://ch.releases.ubuntu.com//oneiric/ubuntu-11.10-desktop-i386.iso
then start from:
10.11.10.11 ->
http://ch.releases.ubuntu.com//oneiric/ubuntu-11.10-desktop-i386.iso

And it lets me download. What am I missing???


You must set ttl=0 negative_ttl=0 grace=0 as options on your
external_acl_type directive, to disable the caching optimizations on the
helper results.
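
i.e. something along the lines of:

external_acl_type one_conn ttl=0 negative_ttl=0 grace=0 %URI /home/carlos/contain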


Amos


Re: [squid-users] time error squid

2012-03-31 Thread Amos Jeffries

On 1/04/2012 1:31 a.m., Jose R. Cristo Almaguer wrote:

How do I do that? At first I figured it was taking the time from hwclock,
but then I changed that time and it stays the same. All the logs have the
wrong time and I don't know how to set the correct time. Greetings, joSE;


Squid uses the system time() and strftime() API calls to locate UTC/GMT 
time details. If the values presented there are wrong your system kernel 
is broken somehow.
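
A quick way to check what Squid will actually see (hwclock needs root):

date            # local time as the kernel reports it
date -u         # UTC, which Squid uses for error pages
hwclock --show  # hardware clock, for comparison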


NP: the times logged by Squid are the *completion* time for each request,
not the start time.



-----Original Message-----
From: Amos Jeffries

On 31/03/2012 8:05 p.m., Jose R. Cristo Almaguer wrote:

Hello, I use Squid 3.1.19 and everything works fine, but I have a problem with
the time: the Squid error page shows a time that is neither the system time nor
the BIOS time. Any ideas?

The error pages are supposed to be UTC. Unless you changed them to be
local timezone or something strange.

Amos





Re: [squid-users] External users -- firewall -- proxy -- Internet = :-(

2012-03-31 Thread Amos Jeffries

On 30/03/2012 9:06 a.m., pr0xyguy wrote:

Hi guys, hope you can help me here,

Setup:

Intranet = inside firewall
userA --> 8080:dansguardian --> 3128: http_port transparent --> Internet


This is broken. DG sending traffic to Squid port 3128 is an explicit
client (DG) configuration, *not* interception. The upcoming Squid releases,
which validate the received NAT data, will reject this traffic.
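
The usual arrangement looks roughly like this (addresses taken from your
config; the second port number is made up, the point is which port carries
the intercept flag):

http_port 127.0.0.1:3128                # plain proxy port that DansGuardian forwards to
http_port 10.0.10.100:3127 intercept    # only genuinely NAT-intercepted traffic arrives here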




Extranet = outside firewall (mobile / remote users)
userB --> SOHO router --> corp firewall:80 --> 8080:dansguardian --> 3128:
http_port transparent --> Internet


Same again.


userB --> SOHO router --> corp firewall:443 --> 3129: https_port transparent
ssl-bump cert=... key=... --> Internet


Is the firewall doing port forwarding? (same problem as mentioned for 
DG).  NAT ('forwarding') *must* be done on the Squid box where Squid can 
grab the kernel NAT records from.


Or is it doing proper policy routing? (with NAT on the Squid box for the 
intercept)




The Issue:

userB request Google which we convert from HTTPS to HTTP using DNS trickery
(setup by Google for schools/corps ie. explicit.google.com=0.0.0.0 to
prevent encrypted searches).  So far, so good.


Sort of. I have found Google systems sometimes still automatically 
redirect HTTP to HTTPS anyway when they have decided that TLS is 
mandatory for that service.


If you are going to use SSL intercept trickery anyway, I think use that 
instead of adding the two types of trickery together.


NOTE: If you have control over userB DNS lookups why are you not simply 
setting their DNS WPAD records and using a PAC file?




   However HTTPS coming from
userB (outside our firewall) is not CONNECT, but straight SSL.  Thus the
ssl-bump setup which is working, with invalid cert warnings which is ok for
us, but the google/calendar site gets stuck in a loop.  From my access.log:

*173.162.48.224* TCP_MISS/302 890 GET http://www.google.com/calendar? -
DIRECT/216.239.32.20 text/html
*173.162.48.224 *TCP_MISS/302 1198 GET
http://www.google.com/calendar/render? - DIRECT/216.239.32.20 text/html
*10.0.10.171 *TCP_MISS/302 845 GET http://www.google.com/calendar/render? -
DIRECT/216.239.32.20 text/html
*173.162.48.224 *TCP_MISS/302 717 GET http://www.google.com/calendar/render?
- DIRECT/216.239.32.20 text/html
*173.162.48.224* TCP_MISS/302 717 GET http://www.google.com/calendar/render?
- DIRECT/216.239.32.20 text/html
*173.162.48.224* TCP_MISS/302 717 GET http://www.google.com/calendar/render?
- DIRECT/216.239.32.20 text/html


...and then the session times out with the agent usually returning a "page
isn't redirecting properly" warning.  IE will try forever of course, and
eventually crash the system.


This looks like calendar is one of the systems they have not rolled that 
DNS trickery support into properly.




squid.conf:

acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
access_log /var/log/access.log
acl localnet src all


So the entire Internet is part of your LAN? wow.



http_access allow localnet


Then you bypass all security for that huge LAN. Ouch.


acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
cache_effective_user squid3
http_access allow localnet
http_access deny all
ssl_bump allow all
http_port 10.0.10.100:3128 intercept
https_port 10.0.10.100:3129 intercept cert=/www.sample.com.pem
key=/www.sample.com.pem


This is a fixed certificate, the same one for every domain on the
Internet. To intercept SSL you *need* the dynamic cert generation
feature in Squid-3.2. And you also need the external users to trust your
local certificate generator's signing CA.
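
For reference, the Squid-3.2 style setup looks roughly like this (a sketch
only; the ssl_crtd path, certificate database path and CA file are
illustrative, and the CA must be imported on the clients):

sslcrtd_program /usr/lib/squid/ssl_crtd -s /var/lib/ssl_db -M 4MB
https_port 10.0.10.100:3129 intercept ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/etc/squid/bumpCA.pem
ssl_bump allow all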




hierarchy_stoplist cgi-bin ?
coredump_dir /var/cache
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320
dns_nameservers 8.8.8.8


Thanks guys!!! Right now I'm shooting in the dark, trying this and trying that.
I have a ton of work in this setup; if we can't resolve this I must find
another solution for our external users.

Scott






Re: [squid-users] reply body max size: crash or not?

2012-03-31 Thread Amos Jeffries

On 30/03/2012 7:02 a.m., Tianyin Xu wrote:

Hi, Amos,

Thanks a lot for the response!!
The thing I'm still not clear on is that it still works when I set the
limit to 1 byte, which is obviously less than the size of any error
message. So it means any small setting of this directive won't cause
infinite loops/crashes but will only limit the response objects. Am I
right?


It was fixed a while back, but there are still older versions being
distributed with some operating systems. For example, LTS releases
with 5-10 year support cycles. Being able to use a 1 byte limit is a good
sign that your Squid is not affected. The point of the documentation
warning is to make you careful, and to check when going small.




This is the fundamental difference for me. If it may fall into an
infinite loop, the admin should be really cautious and conservative.
Otherwise, it doesn't matter too much (at most it rejects something) and
can be set aggressively.


It is reasonable to remain conservative regardless of this bug and to
test your version before changing these limits. The effects of limiting
responses are clearly visible to the end users, even if this bug is fixed
in your Squid.


Amos


Re: [squid-users] Delay fetching web pages

2012-03-31 Thread Amos Jeffries

On 28/03/2012 6:41 p.m., Colin Coe wrote:

Hi all

I'm running squid 3.1.10 on a RHEL6.2 box.  When I point my clients at
it, the clients experience about a 2 minute 20 second delay between
sending the request to squid and the request being fulfilled.


Might be client problems with the Expect:100-continue HTTP/1.1-only
feature being sent to HTTP/1.0 software?
 That is still happening fairly frequently. We offer admins the choice
of ignore_expect_100, which changes the HTTP/1.0 proxy from sending 417
(which tells the client it can retry immediately without Expect:) to
ignoring it and waiting for the full request to arrive (to which the
client has no choice but to wait for some timeout and then send the POST data anyway).


Guess which one can result in several minutes delay?
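
If that turns out to be the cause, the squid.conf workaround is simply:

ignore_expect_100 on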

If you want to be sure you will need to grab a packet trace of the HTTP
headers between the client and the proxy.


Amos