RE: [squid-users] Re: Problems checking if an object is in cache or not

2013-12-05 Thread Donoso Gabilondo, Daniel
On 2013-12-05 06:07, RW wrote:
 On Wed, 4 Dec 2013 15:45:42 +
 Donoso Gabilondo, Daniel wrote:
 
 
 I saw on the internet that I should use squidclient with the -t option to 
 check if an object is cached. Why do I need to enable TRACE on the server to 
 check if an object is cached in the client (Squid)?
 
 I think it  would be -t 0 to get the trace from squid, but I don't see 
 how that would tell you whether the object is cached.
 
 
  I need to ask Squid
 directly if an object is cached or not, without server 
 intervention. How can I do it?
 
 The headers in a GET response will tell you if it's served from cache 
 i.e. just leave out the -t.

Yes. Also add the header:
 Cache-Control: only-if-cached

That will make Squid produce an error instead of fetching a new copy from the 
server. It will not prevent revalidation the cache needs to do in order to be 
able to respond about certain types of cached object.
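One way to script that check (a sketch; assumes squidclient from the Squid package and a proxy on localhost:3128 — adjust host and port to your setup):

```shell
#!/bin/sh
# classify_response: read an HTTP response on stdin and print HIT if the
# status line is 200 (answered from cache), MISS otherwise.  With
# "Cache-Control: only-if-cached" Squid returns 504 Gateway Timeout
# instead of contacting the origin when the object is not cached.
classify_response() {
  if head -n 1 | grep -q ' 200 '; then
    echo HIT
  else
    echo MISS
  fi
}

# Live usage (requires a reachable proxy; not exercised here):
#   squidclient -h localhost -p 3128 \
#       -H 'Cache-Control: only-if-cached\n' \
#       "http://192.168.230.10/myvideos/VEA_ESP.mov" | classify_response
```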

The big question, though, is why you need to do this at all. What is the 
use-case this fetch/test is part of?

Some customers have very slow networks, and they have some big resources 
(videos). So, at night, when they don't use the network for anything else, 
our server application tells the client application that it must download the 
resources for the next day. The goal is to guarantee that resources are cached 
in the client before they must be shown, and to prevent playback cuts due to 
network delay.

Our client application must check whether each resource is cached and inform 
the server application, and not only after downloading: whenever a user wants, 
the server application can send the client application a command to check 
whether a resource is cached, just as a check. The resources are only 
downloaded at night.

How can I do it?

I tried to use squidclient with the HEAD command, but it always answers MISS 
the first time even when the objects are cached. The second time it always 
answers TCP_HIT, even for resources that are not cached.
Amos


[squid-users] Problems checking if an object is in cache or not

2013-12-04 Thread Donoso Gabilondo, Daniel
I need to check if an object is in the squid cache or not. I use squid 3.2.0.12 on 
Fedora 16.

I saw that squidclient should do this, but it says that objects are MISS and I 
don't know why, because they are cached. (I checked it with Wireshark)

I tried executing this command:

squidclient -h localhost -p 3128 -t 1 
"http://192.168.230.10/myvideos/VEA_ESP.mov"

and this is the result:

HTTP/1.1 405 Method Not Allowed
Server: Apache-Coyote/1.1
Allow: POST, GET, DELETE, OPTIONS, PUT, HEAD
Content-Length: 0
Date: Wed, 04 Dec 2013 10:40:25 GMT
X-Cache: MISS from pc02
X-Cache-Lookup: MISS from pc02:3128
Via: 1.1 pc02 (squid/3.2.0.12)
Connection: close

Why is it giving the Method Not Allowed error?
Why is it answering that objects are MISS when they are cached?

Here is my squid.conf file content:
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

http_access allow manager localhost
http_access allow localhost
http_access allow all
http_port 3128
hierarchy_stoplist cgi-bin ?

cache_dir ufs /hd/SQUID 7000 16 256

coredump_dir /var/spool/squid

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:   1440    20% 10080
refresh_pattern ^gopher:    1440    0%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   0%  0

cache_mem 128 MB
maximum_object_size 4194304 KB
range_offset_limit -1
access_log none
acl Purge method PURGE
acl Get method GET
http_access allow all Purge
http_access allow all Get



RE: [squid-users] Re: Problems checking if an object is in cache or not

2013-12-04 Thread Donoso Gabilondo, Daniel
You are right. 

The TRACE method was disabled in the server (Tomcat). I enabled it in 
squid.conf and in the server (Tomcat).
Now the TRACE method works, but the X-Cache and X-Cache-Lookup values are 
MISS in both cases.
In the access.log file I can see that when the resource is requested a TCP_HIT 
entry appears, not TCP_MISS. I captured traffic with Wireshark and the client 
doesn't request the file.

I saw on the internet that I should use squidclient with the -t option to check 
if an object is cached.
Why do I need to enable TRACE on the server to check if an object is cached in 
the client (Squid)?
I need to ask Squid directly if an object is cached or not, without server 
intervention. How can I do it?


 
-Original Message-
From: RW [mailto:rwmailli...@googlemail.com] 
Sent: Wednesday, 4 December 2013 13:57
To: squid-users@squid-cache.org
Subject: [squid-users] Re: Problems checking if an object is in cache or not

On Wed, 4 Dec 2013 11:02:40 +
Donoso Gabilondo, Daniel wrote:

 I need to check if an object is in the squid cache or not. I use squid
 3.2.0.12 on Fedora 16.
 
 I saw that squidclient should do this but it said that objects are
 MISS and I don't know why because they are cached. (I checked it
 with Wireshark)
 
 I tried executing this command:
 
 squidclient -h localhost -p 3128 -t 1
 "http://192.168.230.10/myvideos/VEA_ESP.mov"
 
 and this is the result:
 
 HTTP/1.1 405 Method Not Allowed
 Server: Apache-Coyote/1.1
 Allow: POST, GET, DELETE, OPTIONS, PUT, HEAD
 Content-Length: 0
 Date: Wed, 04 Dec 2013 10:40:25 GMT
 X-Cache: MISS from pc02
 X-Cache-Lookup: MISS from pc02:3128
 Via: 1.1 pc02 (squid/3.2.0.12)
 Connection: close
 
 Why is giving the Method not allowed error?

Presumably you aren't allowed to use the TRACE method; try it without
the -t option.

 Why is answering that objects are MISS when they are cached?

It's the error message that's a MISS, not the object. 




RE: [squid-users] Re: Compile Squid and make error

2013-12-04 Thread Donoso Gabilondo, Daniel
Are you compiling in 32-bit mode?

Can you try compiling again with the -march=i586 flag?
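On 32-bit targets older than i486, GCC does not emit the __sync_* atomic builtins that Squid's IPC code needs, which matches the link errors below. One way that flag might be passed (a sketch; the configure options are the ones from the original mail, the flags are illustrative):

```
# Rebuild from a clean tree so the flag reaches every object file
make distclean
CFLAGS="-march=i586" CXXFLAGS="-march=i586" ./configure \
  --prefix=/usr \
  --includedir=/usr/include \
  --datadir=/usr/share \
  --bindir=/usr/sbin \
  --libexecdir=/usr/lib/squid \
  --localstatedir=/var \
  --sysconfdir=/etc/squid
make
```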


-Original Message-
From: Gianluigi Ruggeri [mailto:gianluig...@gmail.com] 
Sent: Wednesday, 4 December 2013 15:46
To: vikkymoorthy
CC: squid-users@squid-cache.org
Subject: Re: [squid-users] Re: Compile Squid and make error

Hi,
I followed the squid wiki for CentOS:

I ran these commands:

# You will need the usual build chain
yum install -y perl gcc autoconf automake make sudo wget

# and some extra packages
yum install libxml2-devel libcap-devel

# to bootstrap and build from bzr you also need:
yum install libtool-ltdl-devel

./configure command with these options:

  --prefix=/usr
  --includedir=/usr/include
  --datadir=/usr/share
  --bindir=/usr/sbin
  --libexecdir=/usr/lib/squid
  --localstatedir=/var
  --sysconfdir=/etc/squid


And during the make step I get this:

libIpcIo.a(IpcIoFile.o): In function `Ipc::Atomic::WordT<int>::get() const':
/home/Gianluigi/squid-3.2.12/src/../src/ipc/AtomicWord.h:47: undefined reference to `__sync_fetch_and_add_4'
libIpcIo.a(IpcIoFile.o): In function `Ipc::Atomic::WordT<int>::operator+=(int)':
/home/Gianluigi/squid-3.2.12/src/../src/ipc/AtomicWord.h:31: undefined reference to `__sync_add_and_fetch_4'
libIpcIo.a(IpcIoFile.o): In function `Ipc::Atomic::WordT<int>::get() const':
/home/Gianluigi/squid-3.2.12/src/../src/ipc/AtomicWord.h:47: undefined reference to `__sync_fetch_and_add_4'
/home/Gianluigi/squid-3.2.12/src/../src/ipc/AtomicWord.h:47: undefined reference to `__sync_fetch_and_add_4'
/home/Gianluigi/squid-3.2.12/src/../src/ipc/AtomicWord.h:47: undefined reference to `__sync_fetch_and_add_4'
/home/Gianluigi/squid-3.2.12/src/../src/ipc/AtomicWord.h:47: undefined reference to `__sync_fetch_and_add_4'
/home/Gianluigi/squid-3.2.12/src/../src/ipc/AtomicWord.h:47: undefined reference to `__sync_fetch_and_add_4'
libIpcIo.a(IpcIoFile.o):/home/Gianluigi/squid-3.2.12/src/../src/ipc/AtomicWord.h:47: more undefined references to `__sync_fetch_and_add_4' follow
libIpcIo.a(IpcIoFile.o): In function `Ipc::Atomic::WordT<int>::operator+=(int)':
/home/Gianluigi/squid-3.2.12/src/../src/ipc/AtomicWord.h:31: undefined reference to `__sync_add_and_fetch_4'
libIpcIo.a(IpcIoFile.o): In function `Ipc::Atomic::WordT<int>::get() const':
/home/Gianluigi/squid-3.2.12/src/../src/ipc/AtomicWord.h:47: undefined reference to `__sync_fetch_and_add_4'
libIpcIo.a(IpcIoFile.o): In function `Ipc::Atomic::WordT<int>::swap_if(int, int)':
/home/Gianluigi/squid-3.2.12/src/../src/ipc/AtomicWord.h:38: undefined reference to `__sync_bool_compare_and_swap_4'
libIpcIo.a(IpcIoFile.o): In function `Ipc::Atomic::WordT<int>::get() const':
/home/Gianluigi/squid-3.2.12/src/../src/ipc/AtomicWord.h:47: undefined reference to `__sync_fetch_and_add_4'
/home/Gianluigi/squid-3.2.12/src/../src/ipc/AtomicWord.h:47: undefined reference to `__sync_fetch_and_add_4'
libIpcIo.a(IpcIoFile.o): In function `Ipc::Atomic::WordT<int>::swap_if(int, int)':
/home/Gianluigi/squid-3.2.12/src/../src/ipc/AtomicWord.h:38: undefined reference to `__sync_bool_compare_and_swap_4'
libIpcIo.a(IpcIoFile.o): In function `Ipc::Atomic::WordT<int>::get() const':
/home/Gianluigi/squid-3.2.12/src/../src/ipc/AtomicWord.h:47: undefined reference to `__sync_fetch_and_add_4'
libIpcIo.a(IpcIoFile.o): In function `Ipc::Atomic::WordT<int>::swap_if(int, int)':
/home/Gianluigi/squid-3.2.12/src/../src/ipc/AtomicWord.h:38: undefined reference to `__sync_bool_compare_and_swap_4'
libIpcIo.a(IpcIoFile.o): In function `Ipc::Atomic::WordT<int>::operator-=(int)':
/home/Gianluigi/squid-3.2.12/src/../src/ipc/AtomicWord.h:32: undefined reference to `__sync_sub_and_fetch_4'
libIpcIo.a(IpcIoFile.o): In function `Ipc::Atomic::WordT<int>::get() const':
/home/Gianluigi/squid-3.2.12/src/../src/ipc/AtomicWord.h:47: undefined reference to `__sync_fetch_and_add_4'
/home/Gianluigi/squid-3.2.12/src/../src/ipc/AtomicWord.h:47: undefined reference to `__sync_fetch_and_add_4'
/home/Gianluigi/squid-3.2.12/src/../src/ipc/AtomicWord.h:47: undefined reference to `__sync_fetch_and_add_4'
ipc/.libs/libipc.a(Queue.o): In function `Ipc::Atomic::WordT<int>::swap_if(int, int)':
/home/Gianluigi/squid-3.2.12/src/ipc/../../src/ipc/AtomicWord.h:38: undefined reference to `__sync_bool_compare_and_swap_4'
ipc/.libs/libipc.a(Queue.o): In function `Ipc::Atomic::WordT<int>::swap_if(int, int)':
/home/Gianluigi/squid-3.2.12/src/ipc/Queue.cc:256: undefined reference to `__sync_bool_compare_and_swap_4'
ipc/.libs/libipc.a(ReadWriteLock.o): In function `Ipc::Atomic::WordT<int>::operator--(int)':
/home/Gianluigi/squid-3.2.12/src/ipc/../../src/ipc/AtomicWord.h:36: undefined reference to `__sync_fetch_and_sub_4'
ipc/.libs/libipc.a(ReadWriteLock.o): In function `Ipc::Atomic::WordT<int>::operator+=(int)':
/home/Gianluigi/squid-3.2.12/src/ipc/../../src/ipc/AtomicWord.h:31: undefined reference to `__sync_add_and_fetch_4'
ipc/.libs/libipc.a(ReadWriteLock.o): In function `Ipc::Atomic::WordT<int>::get() const':

[squid-users] Squid doesn't cache some objetcs

2009-04-16 Thread Donoso Gabilondo, Daniel
I am resending this mail because I don't know if it arrived properly the first
time (I received it in my spam folder). If it did arrive, sorry for sending it
again.

I'm using a Linux application that uses squid to cache objects. 
If I request the objects with the wget Linux command they are cached
properly, but the application always receives Partial Content.
I saw that the GET headers are different: the application adds a
Range header (is this the problem?) 
 
The application header is this:

GET /Resources/rsc/National/Diapositiva20.JPG HTTP/1.0\r\n
User-Agent: Lavf52.16.0\r\n
Accept: */*\r\n
Range: bytes=0-\r\n
Host: myserver.com:8080\r\n
Authorization: Basic\r\n
Via: 1.1 localhost.localdomain (squid/3.0.STABLE13)\r\n
X-Forwarded-For: 127.0.0.1\r\n
Cache-Control: max-age=0\r\n
Connection: keep-alive\r\n


And the wget header content:

GET /Resources/rsc/National/Diapositiva20.JPG HTTP/1.0\r\n
User-Agent: Wget/1.11.4 (Red Hat modified)\r\n
Accept: */*\r\n
Host: myserver.com:8080\r\n
Via: 1.1 localhost.localdomain (squid/3.0.STABLE13)\r\n
X-Forwarded-For: 127.0.0.1\r\n
Cache-Control: max-age=0\r\n
Connection: keep-alive\r\n

Best regards,
Daniel
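The Range header is the likely cause: Squid of this era does not cache 206 Partial Content responses, so a client that always sends `Range: bytes=0-` never populates the cache. A commonly used squid.conf workaround (a sketch, not from the original thread; directive names are real, values illustrative):

```
# Fetch the entire object even when the client sends a Range header,
# so the (full, cacheable) 200 response can be stored.
range_offset_limit -1

# Keep downloading the full object even if the client aborts early.
quick_abort_min -1 KB
```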





[squid-users] always receiving partial content

2009-04-14 Thread Donoso Gabilondo, Daniel
I'm using a Linux application that uses squid to cache objects. 
If I request the objects with the wget Linux command they are cached
properly, but the application always receives Partial Content.
I saw that the GET headers are different: the application adds a
Range header (is this the problem?) 
 
The application header is this:

GET /Resources/rsc/National/Diapositiva20.JPG HTTP/1.0\r\n
User-Agent: Lavf52.16.0\r\n
Accept: */*\r\n
Range: bytes=0-\r\n
Host: myserver.com:8080\r\n
Authorization: Basic\r\n
Via: 1.1 localhost.localdomain (squid/3.0.STABLE13)\r\n
X-Forwarded-For: 127.0.0.1\r\n
Cache-Control: max-age=0\r\n
Connection: keep-alive\r\n


And the wget header content:

GET /Resources/rsc/National/Diapositiva20.JPG HTTP/1.0\r\n
User-Agent: Wget/1.11.4 (Red Hat modified)\r\n
Accept: */*\r\n
Host: myserver.com:8080\r\n
Via: 1.1 localhost.localdomain (squid/3.0.STABLE13)\r\n
X-Forwarded-For: 127.0.0.1\r\n
Cache-Control: max-age=0\r\n
Connection: keep-alive\r\n

Best regards,
Daniel





RE: [squid-users] squid is asking if a cached object is modified

2009-03-09 Thread Donoso Gabilondo, Daniel
Thank you very much for your help Amos,

 No your squid is configured as a twisted open proxy. See below for fixes...
I seem to remember helping you with .home.nl earlier. That config was a 
bit weird, but there are some entries in your listed config which worry 
me terribly...

You have a very good memory. XD
Now, I changed the configuration file following your comments.

 You may not be doing anything wrong. Squid still suffers from bug #7.
 http://www.squid-cache.org/bugs/show_bug.cgi?id=7
Is there any version of squid with the patch applied? I read the comments and 
downloaded the different versions of squid, but without good results.

Regards,
Daniel

Daniel Donoso 
Aeropuertos - Departamento Tecnología y Desarrollo
Airports - Technology and Development Department
donos...@ikusi.com
www.ikusi.com
 
IKUSI - Ángel Iglesias S.A.
Paseo Miramón, 170 
20009 San Sebastián 
SPAIN
Tel.:+34 943 44 88 00
Fax: +34 943 44 88 20
 
 
 
Information included in this e-mail and attached files is CONFIDENTIAL and only 
for the EXCLUSIVE USE of the receivers. Circulation and/or copy without 
permission is not allowed. If you have received this e-mail and you are not the 
intended recipient, please let us know and erase the message and attached 
files. Thank you.
 
Before printing this e-mail ask yourself if you really need a 
hard copy. We are all responsible for the environment.
 

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Wednesday, 25 February 2009 4:38
To: Donoso Gabilondo, Daniel
CC: squid-users@squid-cache.org
Subject: Re: [squid-users] squid is asking if a cached object is modified

Donoso Gabilondo, Daniel wrote:
 I have two pc's with Fedora core 10 and squid.3.0.STABLE13
 The content of squid.conf file in both pc's is the same. Squid is
 configured as an accelerator.

No your squid is configured as a twisted open proxy. See below for fixes...

 
 Squid stores the objects correctly in both pc's, but in one pc it is
 always asking to the http server if the object is modified, and I don't
 know why. If I stop the http server, then gets the cached object. The
 other pc is always getting the cached object. The http server sends
 always the max-age=86400 value in the header.
 
 What I am doing wrong? My squid.conf file content is this:

You may not be doing anything wrong. Squid still suffers from bug #7.
http://www.squid-cache.org/bugs/show_bug.cgi?id=7

I seem to remember helping you with .home.nl earlier. That config was a 
bit weird, but there are some entries in your listed config which worry 
me terribly...

The global access to permit Purge opens a number of DDoS vectors.

And the use of always_direct allow all as the first always_direct line 
will prevent your otherwise-listed cache_peer link ever being used.

Also, the fact that the cache_peer link settings are listed LAST in the 
config instead of first indicates it's not going to be used even if 
available.
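Putting those points together, a reordered sketch of the relevant directives (using this thread's names; an illustration of the advice, not verbatim from the original message):

```
# Define the origin peer first, before any access/routing rules.
cache_peer myserver.com parent 80 0 no-query originserver
cache_peer_access myserver.com allow all

# Route requests through the peer; only go direct as a last resort.
never_direct allow all

# Do NOT allow PURGE from everywhere; restrict it to localhost.
acl Purge method PURGE
http_access allow localhost Purge
http_access deny Purge
```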

Amos
-- 
Please be using
   Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
   Current Beta Squid 3.1.0.5


[squid-users] squid is asking if a cached object is modified

2009-02-24 Thread Donoso Gabilondo, Daniel

I have two pc's with Fedora core 10 and squid 3.0.STABLE13.
The content of squid.conf file in both pc's is the same. Squid is
configured as an accelerator.  

Squid stores the objects correctly on both pc's, but one pc is
always asking the http server if the object is modified, and I don't
know why. If I stop the http server, it then serves the cached object. The
other pc always serves the cached object. The http server always sends
the max-age=86400 value in the header.

What am I doing wrong? My squid.conf file content is this:


acl manager proto cache_object
acl localnet src 192.168.0.0/16
acl myserver.com src 192.168.0.0/16
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl Purge method PURGE
check_hostnames on
hosts_file /etc/hosts
dns_defnames on
http_access allow all Purge
minimum_expiry_time 120 seconds
http_access allow manager localhost
http_access allow manager
http_access allow !Safe_ports
http_access allow CONNECT !SSL_ports
http_access allow localhost
http_access allow !localnet
http_access allow myserver.com
hierarchy_stoplist cgi-bin ?
access_log /var/log/squid/access.log squid
#acl QUERY urlpath_regex cgi-bin \?
#no_cache deny QUERY
cache allow all
#refresh_pattern ^ftp:  1440    20% 10080
#refresh_pattern ^gopher:   1440    0%  1440
#refresh_pattern .  0   20% 4320
refresh_pattern .   0   0%  0
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
coredump_dir /var/spool/squid
cache_dir ufs /var/spool/squid 700 32 512 
maximum_object_size 8830 KB
cache_mem 120 MB
http_port 3128 accel defaultsite=myserver.com:8080 
cache_peer myserver.com parent 80 0 no-query originserver
forwarded_for on
icp_port 3130
icp_access allow all
acl HOME dstdomain .home.nl
always_direct allow all
never_direct allow HOME
never_direct allow all
cache_peer_access myserver.com allow all
http_access allow HOME
http_access allow all

myserver.com is in the etc/hosts file and in both pc's is the same.

Thank you,

Daniel


[squid-users] Squid asking if cached objects are modified

2009-02-24 Thread Donoso Gabilondo, Daniel

I have two pc's with Fedora core 10 and squid 3.0.STABLE13. The content
of the squid.conf file on both pc's is the same. Squid is configured as an
accelerator.  

Squid stores the objects correctly on both pc's, but one pc is
always asking the http server if the object is modified, and I don't
know why. If I stop the http server, it then serves the cached object. The
other pc always serves the cached object. The http server always sends
the max-age=86400 value in the header.

What am I doing wrong? My squid.conf file content is this:


acl manager proto cache_object
acl localnet src 192.168.0.0/16
acl myserver.com src 192.168.0.0/16
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl Purge method PURGE
check_hostnames on
hosts_file /etc/hosts
dns_defnames on
http_access allow all Purge
minimum_expiry_time 120 seconds
http_access allow manager localhost
http_access allow manager
http_access allow !Safe_ports
http_access allow CONNECT !SSL_ports
http_access allow localhost
http_access allow !localnet
http_access allow myserver.com
hierarchy_stoplist cgi-bin ?
access_log /var/log/squid/access.log squid
#acl QUERY urlpath_regex cgi-bin \?
#no_cache deny QUERY
cache allow all
#refresh_pattern ^ftp:  1440    20% 10080
#refresh_pattern ^gopher:   1440    0%  1440
#refresh_pattern .  0   20% 4320
refresh_pattern .   0   0%  0
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
coredump_dir /var/spool/squid
cache_dir ufs /var/spool/squid 700 32 512
maximum_object_size 8830 KB
cache_mem 120 MB
http_port 3128 accel defaultsite=myserver.com:8080
cache_peer myserver.com parent 80 0 no-query originserver
forwarded_for on
icp_port 3130
icp_access allow all
acl HOME dstdomain .home.nl
always_direct allow all
never_direct allow HOME
never_direct allow all
cache_peer_access myserver.com allow all
http_access allow HOME
http_access allow all

myserver.com is in the etc/hosts file and in both pc's is the same.

Thank you,

Daniel


[squid-users] Servlets Cache

2008-09-29 Thread Donoso Gabilondo, Daniel

Hi,

I've been using squid for a short time and it works fine, but I have a little
problem storing servlet URLs in the cache.

Squid doesn't cache the URL http://localhost:3128/servlet?Name=SQUID, but if
the URL is http://localhost:3128/servlet squid stores it.

I tried to cache the URL http://localhost:3128/servlet? (without
arguments) but squid doesn't store the object.

Is it possible to configure squid to cache this?

Regards,

Daniel


RE: [squid-users] Servlets Cache

2008-09-29 Thread Donoso Gabilondo, Daniel
Thank you very much John. 

Now it works fine.

Regards,
Daniel


-Original Message-
From: John Doe [mailto:[EMAIL PROTECTED]] 
Sent: Monday, 29 September 2008 11:19
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Servlets Cache

 I'm using squid for a short time and it works fine, but I have a
little
 problem storing servlets url in the cache.
 
 Squid doesn't cache http://localhost:3128/servlet?Name=SQUID URL, but
if
 the URL is http://localhost:3128/servlet squid stores it.
 
 I tried to cache the URL http://localhost:3128/servlet? (Without
 arguments) but squid doesn't store the object.
 
 Is it possible to configure squid to cache this?

Do you have this in your conf?

  refresh_pattern -i (/cgi-bin/|\?) 0 0%  0

JD
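That refresh_pattern line (part of the dynamic-URL defaults of the era) forbids caching of anything matching `?`, which explains the behaviour. A sketch of the usual adjustment (directive names are real; the freshness values are illustrative, not from this thread):

```
# Remove (or comment out) the lines that block caching of query URLs:
#   hierarchy_stoplist cgi-bin ?
#   acl QUERY urlpath_regex cgi-bin \?
#   cache deny QUERY
#   refresh_pattern -i (/cgi-bin/|\?) 0 0% 0

# Then give dynamic responses an explicit freshness window, e.g.:
refresh_pattern -i \?   60  20% 1440
refresh_pattern .       0   20% 4320
```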



  



[squid-users] How to force resources expired when squid starts

2008-06-23 Thread Donoso Gabilondo, Daniel
Hello, 
I have a question.

My http server sends the objects with a max-age of 48 hours. This is
perfect for me, because for 48 hours squid doesn't send the server the
request to check if the object is modified.

I saw that it is possible to delete all the cached objects with squidclient,
or, with refresh_pattern, to expire some objects after a given time or
when the resources reach a given % of their age.

Is there any way to manually mark all the cached objects as expired?




RE: [squid-users] How to force resources expired when squid starts

2008-06-23 Thread Donoso Gabilondo, Daniel

 refresh_pattern does same thing as max-age. But applies when no
max-age 
 is given.

I was wrong. Thanks for the explanation.

 No. You can only purge them one by one. Why are you needing this?

Because if a lot of objects are modified during the max-age window, by
marking all the objects expired squid would ask for them again and
fetch only the modified objects.

I know it is possible to decrease the max-age, or set max-age to 0 (always
ask if the object is modified), but this generates a lot of network
traffic when many objects are requested. 

Thanks for your help.

Daniel


-Original Message-
From: Amos Jeffries [mailto:[EMAIL PROTECTED]] 
Sent: Monday, 23 June 2008 14:46
To: Donoso Gabilondo, Daniel
CC: squid-users@squid-cache.org
Subject: Re: [squid-users] How to force resources expired when squid
starts

Donoso Gabilondo, Daniel wrote:
 Hello, 
 I have a question.
 
 My http server sends the objects with max-age of 48 hours. This is
 perfect for me, because squid during 48 hours doesn't send to server
the
 packet to check if the object is modified.
 
 I saw that is possible delete all the cached objects with squidclient,

one by one only.

 or with refresh_pattern put some objects expired when a time elapsed
or
 when the resources are % old.

refresh_pattern does the same thing as max-age, but applies when no max-age 
is given.

 
 Is there any way to put all the cached objects expired manually?
 

No. You can only purge them one by one. Why are you needing this?

Amos
-- 
Please use Squid 2.7.STABLE3 or 3.0.STABLE7
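Since there is no bulk-expire directive, a small wrapper can at least automate the one-by-one purge (a sketch; assumes a file listing the cached URLs, and that squid.conf permits the PURGE method via an `acl Purge method PURGE` rule):

```shell
#!/bin/sh
# purge_cmds: read URLs on stdin and emit one squidclient PURGE command
# per URL.  Pipe the output to sh to actually run the commands against
# a proxy on localhost:3128 (adjust host/port for your setup).
purge_cmds() {
  while IFS= read -r url; do
    [ -n "$url" ] || continue   # skip blank lines
    printf 'squidclient -h localhost -p 3128 -m PURGE %s\n' "$url"
  done
}

# Example: purge_cmds < urls.txt | sh
```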


RE: [squid-users] name resolution problems (/etc/hosts)

2008-06-19 Thread Donoso Gabilondo, Daniel

 What's the error?

 What does your cache_peer line (and cache_peer_access/domain if any)
look  like?

cache_peer_domain 192.168.240.22 myserver.com
cache_peer_access myserver.com allow all

 Also ... what do you mean by '192.168.240.22:8080 as default site';
the
 public domain name of your site? back-end web server? or the
defaultsite=
 option on squid.conf http_port?

Sorry...

I am refering to http_port option. This is the squid.conf file line:

http_port 3128 accel defaultsite=192.168.240.22:8080 

With this, squid works fine, but if I replace 192.168.240.22:8080 with
myserver.com:8080, after restarting squid doesn't work.

I want to use a name (/etc/hosts) instead of the IP, because when the http
server fails I can edit /etc/hosts, change the IP to another
http server, restart squid, and use the new http server until the
problems with the first one are solved.

Is there any better way to do this?





-Original Message-
From: Amos Jeffries [mailto:[EMAIL PROTECTED]] 
Sent: Thursday, 19 June 2008 5:37
To: Henrik Nordstrom
CC: Donoso Gabilondo, Daniel; squid-users@squid-cache.org
Subject: Re: [squid-users] name resolution problems (/etc/hosts)

 On ons, 2008-06-18 at 18:40 +0200, Donoso Gabilondo, Daniel wrote:
 Hello again,

 I Use Squid as a reverse Proxy with 192.168.240.22:8080 as default
site
 and works fine, but when I put a name, and after restarting, doesn't
 work.

 What's the error?

 What does your cache_peer line (and cache_peer_access/domain if any)
 look like?


Also ... what do you mean by '192.168.240.22:8080 as default site';  the
public domain name of your site? back-end web server? or the
defaultsite=
option on squid.conf http_port?

Amos



RE: [squid-users] name resolution problems (/etc/hosts)

2008-06-19 Thread Donoso Gabilondo, Daniel

Thank you very much. Now it works fine!


tor 2008-06-19 klockan 09:21 +0200 skrev Donoso Gabilondo, Daniel:
  What does your cache_peer line (and cache_peer_access/domain if any)
 look  like?
 
 cache_peer_domain 192.168.240.22 myserver.com

This won't match, as you haven't told Squid that your service is myserver.com;
instead you have told it that the service name is the internal
IP... see below.

 cache_peer_access myserver.com allow all

Ok.

 http_port 3128 accel defaultsite=192.168.240.22:8080 

Defaultsite SHOULD be the name:port the browsers connect to, i.e.
mysite.com:8080 if that's what they enter in the location bar.

The domain acl above needs to match this.

 I want to use a name (/etc/hosts) and not the IP, because when the
http
 server fails, I can edit the /etc/hosts and change the IP to other
 http_server, restart the squid and use the new http_server until the
 problems of the first are solved.

The IP of the web server is entered in cache_peer. No need to
edit /etc/hosts, but you can use a server name from /etc/hosts in
cache_peer if you like.

 Is there any better way to do this?

http_port 8080 accel defaultsite=mysite.com:8080

cache_peer ip.of.webserver parent 80 0 no-query originserver

cache_peer_domain ip.of.webserver mysite.com

Regards
Henrik



[squid-users] name resolution problems (/etc/hosts)

2008-06-18 Thread Donoso Gabilondo, Daniel

Hello again,

I use Squid as a reverse proxy with 192.168.240.22:8080 as the default site
and it works fine, but when I put a name instead, after restarting it doesn't
work.

I have configured the name correctly in the /etc/hosts file. 

What am I doing wrong? 

 




[squid-users] Is it possible to configure Squid to ask about a cached resource?

2008-06-16 Thread Donoso Gabilondo, Daniel

I use squid as a reverse proxy with cache. When squid has a resource
cached it doesn't send a request to the http server to check if the resource
is modified.

I want to know if it is possible (and how) to make:

* Squid send a request to the http server asking if the resource is
modified.

* If the http server doesn't answer, squid use the cached object. (Very
important)

* If the http server answers that the resource is modified, squid get
the new resource; if not, use the cached object.

 
 


[squid-users] reverse proxy: More http servers

2008-06-16 Thread Donoso Gabilondo, Daniel
I have squid configured as a reverse proxy with a default site
(192.168.240.22:8080).

I have an application that requests resources from squid, and squid requests
them from the default site (it is not possible to configure a proxy in the app).

Is it possible to define another site, so that if the default site is down
or the resource isn't there, squid tries the other?

I tried modifying the squid.conf file, adding other cache_peer and
cache_peer_access lines, without good results.




RE: [squid-users] Problems Using squid 2.6 as a transparent web cache

2008-06-12 Thread Donoso Gabilondo, Daniel

Hello again,
Thank you very much for your help. 

 I suspect you are trying to do some sort of web mashup involving Squid?
 I've found the best ways to do those is to have squid as the public 
 domain gateway and do the app-linking/routing in the squid config.

I want to use squid to cache all the resources needed by the Linux application 
and only download them again if they are modified.

I have made the changes you indicated.
I am using Firefox for testing, because I can't test with the Linux application 
at this moment. I set squid as the proxy, but it always downloads the resource.

I saw that the store.log file is updated with the requested resources. This is 
the file content:

1213266172.237 RELEASE 00 000F EAEEC8FE1A6E2D8434959FA6301A18A0 200 1213266171 1194446956 -1 video/mpeg 6250477/386763 GET http://192.168.240.158:8080/test/video.mpg
1213266174.770 RELEASE 00 0010 197E8B6BA5687EDF00E293B32088D2E7 200 1213266174 1194446956 -1 video/mpeg 6250477/251763 GET http://192.168.240.158:8080/test/video.mpg

I set maximum_object_size 30 KB because video.mpg is bigger than 8 MB 
(10 MB exactly), but I tried requesting small resources (images) and the 
results are the same.

I read the squid configuration, and by default squid allows everything to be 
cached. 

What am I doing wrong? 

Thank you again for your help.

Daniel

 


-Original Message-
From: Amos Jeffries [mailto:[EMAIL PROTECTED]] 
Sent: Wednesday, 11 June 2008 15:11
To: Donoso Gabilondo, Daniel
CC: squid-users@squid-cache.org
Subject: Re: [squid-users] Problems Using squid 2.6 as a transparent web cache

Donoso Gabilondo, Daniel wrote:
 Hello,
 I have an application in linux that uses http resources (videos,
 images..). These resources are in other machine with a http server
 running (under windows).
 
 The linux application always download the resources. I installed and
 configured squid in the linux machine to cache these resources, but the
 linux application always downloads them from the http server. I don't
 know how can I resolve the problem. I need some help, please.

I suspect you are trying to do some sort of web mashup involving Squid?
I've found the best ways to do those is to have squid as the public 
domain gateway and do the app-linking/routing in the squid config.

Anyway on to your various problems

 
 The linux ip address is: 192.168.240.23 and the windows with http server
 ip is: 192.168.233.158
 
 This is my squid.conf file content:
 
 acl all src 0.0.0.0/0.0.0.0
 acl manager proto cache_object
 acl localhost src 127.0.0.1/255.255.255.255
 acl to_localhost dst 127.0.0.0/8
 acl SSL_ports port 443
 acl Safe_ports port 80  # http
 acl Safe_ports port 21  # ftp
 acl Safe_ports port 443 # https
 acl Safe_ports port 70  # gopher
 acl Safe_ports port 210 # wais
 acl Safe_ports port 1025-65535  # unregistered ports
 acl Safe_ports port 280 # http-mgmt
 acl Safe_ports port 488 # gss-http
 acl Safe_ports port 591 # filemaker
 acl Safe_ports port 777 # multiling http
 acl CONNECT method CONNECT
 http_access allow manager localhost
 http_access deny manager
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 http_access allow localhost
 http_access deny all

So none of the clients are allowed to make requests?
I'd expect to see a control saying the intercepted network has access 
through.
  acl localnet src 192.168.0.0/16
  http_access deny !localnet

and drop the deny all down a bit

 icp_access allow all

allow all with no port configured? looks like you can kill this.

 hierarchy_stoplist cgi-bin ?
 access_log /var/log/squid/access.log squid
 acl QUERY urlpath_regex cgi-bin \?
 cache deny QUERY
 refresh_pattern ^ftp:     1440  20%  10080
 refresh_pattern ^gopher:  1440  0%   1440
 refresh_pattern .         0     20%  4320
 acl apache rep_header Server ^Apache
 broken_vary_encoding allow apache
 coredump_dir /var/spool/squid
 cache_dir ufs /var/spool/squid 700 32 512
 http_port 3128 transparent
 icp_port 0

 cache_peer localhost.home.nl parent 8080 0 default
 acl HOME dstdomain .home.nl

 always_direct allow all
 never_direct allow all

Those lines contradict each other 'everything MUST go direct + nothing 
EVER allowed direct'.

You want just:
   never_direct allow HOME
   never_direct deny all
   cache_peer_access localhost.home.nl allow HOME
   cache_peer_access localhost.home.nl deny all
   http_access allow HOME

  .. the deny I mentioned dropping down goes about here. AFTER the peer 
access config.

 
 
 I executed these commands:
 
 iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to
 192.168.240.23:3128
 iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT
 --to-port 3128
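A note on these rules as written: both match the same packets, and the PREROUTING chain only sees traffic arriving on a network interface. Requests from an application running on the same box as squid go through the OUTPUT chain instead, so they are never intercepted by these rules. A sketch of one common arrangement, assuming squid runs as the system user squid (an assumption here), which also keeps squid's own outgoing requests from looping back into itself:

```
# Let squid's own outgoing HTTP requests pass untouched...
iptables -t nat -A OUTPUT -p tcp --dport 80 -m owner --uid-owner squid -j ACCEPT
# ...redirect locally generated port-80 traffic into squid...
iptables -t nat -A OUTPUT -p tcp --dport 80 -j REDIRECT --to-port 3128
# ...and keep intercepting port-80 traffic arriving from the LAN.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128
```

Since the Linux application and squid are on the same machine (192.168.240.23) in this setup, the OUTPUT rules are the ones that would actually catch the application's requests.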

Okay so far. What about intercepting the requests clients make directly 
to your web app?
  Since the app knows its running

RE: [squid-users] Problems Using squid 2.6 as a transparent web cache

2008-06-12 Thread Donoso Gabilondo, Daniel

Here is the trace of the Firefox request:

GET /test/pepe.mpg HTTP/1.0\r\n
Request Method: GET
Request URI: /test/pepe.mpg
Request Version: HTTP/1.0
Host: 192.168.240.22:8080\r\n
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.12)
Gecko/20080208 Fedora/2.0.0.12-1.fc8 Firefox/2.0.0.12\r\n
Accept:
text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plai
n;q=0.8,image/png,*/*;q=0.5\r\n
Accept-Language: en-us,en;q=0.5\r\n
Accept-Encoding: gzip,deflate\r\n
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\n
Via: 1.1 localhost.localdomain:3128 (squid/2.6.STABLE19)\r\n
X-Forwarded-For: 192.168.240.23\r\n
Cache-Control: max-age=259200\r\n
Connection: keep-alive\r\n
\r\n


The answer of the server:
HTTP/1.1 200 OK\r\n
Request Version: HTTP/1.1
Response Code: 200
Server: Apache-Coyote/1.1\r\n
ETag: W/6250477-1194446956686\r\n
Last-Modified: Wed, 07 Nov 2007 14:49:16 GMT\r\n
Content-Type: video/mpeg\r\n
Content-Length: 6250477
Date: Thu, 12 Jun 2008 12:43:11 GMT\r\n
Connection: keep-alive\r\n
\r\n



Here is the trace of the Linux application's request (but I can't set the
application to use squid as a proxy; that is the problem):

GET /test/pepe.mpg HTTP/1.0\r\n
Request Method: GET
Request URI: /test/pepe.mpg
Request Version: HTTP/1.0
User-Agent: Lavf50.5.0\r\n
Accept: */*\r\n
Host: 192.168.240.22:8080\r\n
Authorization: Basic =\r\n
\r\n

The server's answer is the same as the other one.


 I think that its the requests that app is making, or possibly the 
 headers on the files coming out of the server.

 If you can get a trace of the request and response headers before they

 go into squid it would help a lot.

 Amos
 --
 Please use Squid 2.7.STABLE1 or 3.0.STABLE6
-----Original Message-----
From: Amos Jeffries [mailto:[EMAIL PROTECTED] 
Sent: Thursday, 12 June 2008 14:21
To: Donoso Gabilondo, Daniel
CC: squid-users@squid-cache.org
Subject: Re: [squid-users] Problems Using squid 2.6 as a transparent web cache

Donoso Gabilondo, Daniel wrote:
 Hello again,
 Thank you very much for your help. 
 
 I suspect you are trying to do some sort of web mashup involving
Squid?
 I've found the best ways to do those is to have squid as the public 
 domain gateway and do the app-linking/routing in the squid config.
 
 I want to use squid to cache all the resources needed by the linux
application and only download again if they are modified.
 
 I have made the changes that you have indicated me.
 I am using firefox to make a test, because with the linux application
I can't test at this moment. I put squid as the proxy, but always
download the resource.
 
 I saw that the store.log file is updating with the asked resources.
This is the file content:
 
 1213266172.237 RELEASE 00 000F EAEEC8FE1A6E2D8434959FA6301A18A0 200 1213266171 1194446956-1 video/mpeg 6250477/386763 GET http://192.168.240.158:8080/test/video.mpg
 1213266174.770 RELEASE 00 0010 197E8B6BA5687EDF00E293B32088D2E7 200 1213266174 1194446956-1 video/mpeg 6250477/251763 GET http://192.168.240.158:8080/test/video.mpg
 
 I put maximum_object_size 30 KB because the video.mpg is higher
than 8 MB (10 MB exactly), but I tried to ask small resources (images)
and the results are the same.
 
 I read squid configuration and for default squid allow all to be
catched. 
 
 What am I doing wrong? 
 
 Thank you again for your help.
 
 Daniel
 

I think that it's the requests the app is making, or possibly the 
headers on the files coming out of the server.

If you can get a trace of the request and response headers before they 
go into squid it would help a lot.

Amos
-- 
Please use Squid 2.7.STABLE1 or 3.0.STABLE6


[squid-users] Problems Using squid 2.6 as a transparent web cache

2008-06-11 Thread Donoso Gabilondo, Daniel
Hello,
I have an application on Linux that uses HTTP resources (videos,
images...). These resources are on another machine running an HTTP server
(under Windows).

The Linux application always downloads the resources. I installed and
configured squid on the Linux machine to cache these resources, but the
application still downloads them from the HTTP server every time. I don't
know how to resolve the problem. I need some help, please.

The Linux IP address is 192.168.240.23 and the Windows machine with the
HTTP server is 192.168.233.158.

This is my squid.conf file content:

acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access deny all
icp_access allow all
hierarchy_stoplist cgi-bin ?
access_log /var/log/squid/access.log squid
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
refresh_pattern ^ftp:     1440  20%  10080
refresh_pattern ^gopher:  1440  0%   1440
refresh_pattern .         0     20%  4320
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
coredump_dir /var/spool/squid
cache_dir ufs /var/spool/squid 700 32 512
http_port 3128 transparent
icp_port 0
cache_peer localhost.home.nl parent 8080 0 default
acl HOME dstdomain .home.nl
always_direct allow all
never_direct allow all


I executed these commands:

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to
192.168.240.23:3128
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT
--to-port 3128


The cache.log content is this:

2008/06/11 11:30:52| Starting Squid Cache version 2.6.STABLE19 for
i386-redhat-linux-gnu...
2008/06/11 11:30:52| Process ID 8617
2008/06/11 11:30:52| With 1024 file descriptors available
2008/06/11 11:30:52| Using epoll for the IO loop
2008/06/11 11:30:52| ipcacheAddEntryFromHosts: Bad IP address 'tele1'
2008/06/11 11:30:52| ipcacheAddEntryFromHosts: Bad IP address 'svc1'
2008/06/11 11:30:52| DNS Socket created at 0.0.0.0, port 42897, FD 6
2008/06/11 11:30:52| Adding nameserver 192.168.202.11 from
/etc/resolv.conf
2008/06/11 11:30:52| Adding nameserver 192.168.202.13 from
/etc/resolv.conf
2008/06/11 11:30:52| User-Agent logging is disabled.
2008/06/11 11:30:52| Referer logging is disabled.
2008/06/11 11:30:52| Unlinkd pipe opened on FD 11
2008/06/11 11:30:52| Swap maxSize 716800 KB, estimated 55138 objects
2008/06/11 11:30:52| Target number of buckets: 2756
2008/06/11 11:30:52| Using 8192 Store buckets
2008/06/11 11:30:52| Max Mem  size: 8192 KB
2008/06/11 11:30:52| Max Swap size: 716800 KB
2008/06/11 11:30:52| Local cache digest enabled; rebuild/rewrite every
3600/3600 sec
2008/06/11 11:30:52| Rebuilding storage in /var/spool/squid (CLEAN)
2008/06/11 11:30:52| Using Least Load store dir selection
2008/06/11 11:30:52| Set Current Directory to /var/spool/squid
2008/06/11 11:30:52| Loaded Icons.
2008/06/11 11:30:53| Accepting transparently proxied HTTP connections at
0.0.0.0, port 3128, FD 13.
2008/06/11 11:30:53| WCCP Disabled.
2008/06/11 11:30:53| Ready to serve requests.
2008/06/11 11:30:53| Configuring Parent localhost.home.nl/8080/0
2008/06/11 11:30:53| Done reading /var/spool/squid swaplog (0 entries)
2008/06/11 11:30:53| Finished rebuilding storage from disk.
2008/06/11 11:30:53| 0 Entries scanned
2008/06/11 11:30:53| 0 Invalid entries.
2008/06/11 11:30:53| 0 With invalid flags.
2008/06/11 11:30:53| 0 Objects loaded.
2008/06/11 11:30:53| 0 Objects expired.
2008/06/11 11:30:53| 0 Objects cancelled.
2008/06/11 11:30:53| 0 Duplicate URLs purged.
2008/06/11 11:30:53| 0 Swapfile clashes avoided.
2008/06/11 11:30:53|   Took 0.3 seconds (   0.0 objects/sec).
2008/06/11 11:30:53| Beginning Validation Procedure
2008/06/11 11:30:53|   Completed Validation Procedure
2008/06/11 11:30:53|   Validated 0 Entries
2008/06/11 11:30:53|   store_swap_size = 0k
2008/06/11 11:30:53| storeLateRelease: released 0 objects