RE: [squid-users] binary install for SOLARIS

2008-08-20 Thread Van Camp Jan
Thanks Amos and Henrik for the reply,


Can you tell me where I can find/download a binary version of 3.0.STABLE8 for
Solaris, please?

Could you please give a URL?

Greetings,
Jan

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, August 19, 2008 11:52 PM
To: Van Camp Jan
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] binary install for SOLARIS

On Mon 2008-08-18 at 14:55 +0200, Van Camp Jan wrote:
> Hello,
> 
> my team would like to download a binary version for solaris 9 of squid
> 3.1 .

Squid-3.1 hasn't been released yet, so it's very unlikely you'll find
binaries of Squid-3.1 for any platform..

Current Squid release is 3.0.STABLE8.

Note: CoolStack seems to include Squid-3.0 (exact version not known),
but that's for Solaris 10..

Regards
Henrik



Re: [squid-users] external_acl program...

2008-08-20 Thread John Doe
> 
> Sounds like you are having problems with output buffering.  Try adding...
> 
> fflush(stdout);
> 
> ...after your fprintf statement.
> 
> Chris

Arg... so simple.
Been a while since I wrote some C code.

Thx a lot!
JD
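
For anyone following along, a minimal sketch of such an external_acl helper loop in C;
the matching logic is a placeholder, and a real helper would parse the line according
to the format tokens it is configured with:

#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[8192];

    /* Squid writes one request per line; the helper must answer one line per request. */
    while (fgets(line, sizeof line, stdin) != NULL) {
        if (strstr(line, "blocked") != NULL)   /* placeholder decision */
            fprintf(stdout, "ERR\n");
        else
            fprintf(stdout, "OK\n");

        fflush(stdout);  /* without this, stdio buffers the answer and Squid never sees it */
    }
    return 0;
}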


  



Re: [squid-users] external_acl children...

2008-08-20 Thread John Doe
> 2008/08/19 17:54:35| WARNING: All filter processes are busy.
> 2008/08/19 17:54:35| WARNING: up to 1 pending requests queued
> 2008/08/19 17:54:35| aclMatchExternal: 'filter' queue overload. Request 
> rejected '/path/to/image2.gif'.
> ...

quick question: does that mean I need as many children as the max number of 
'requests/s' (at a given time)...?
For example, if I have up to 100 concurrent requests, do I need 100 children to
prevent rejected requests?
It says "1 pending requests queued", and immediatly after, "queue overload".
What is the queue capacity?
In my test, I only had 5 images...

Thx,
JD


  



Re: [squid-users] external_acl children...

2008-08-20 Thread Amos Jeffries

John Doe wrote:

2008/08/19 17:54:35| WARNING: All filter processes are busy.
2008/08/19 17:54:35| WARNING: up to 1 pending requests queued
2008/08/19 17:54:35| aclMatchExternal: 'filter' queue overload. Request 
rejected '/path/to/image2.gif'.

...


quick question: does that mean I need as many children as the max number of 
'requests/s' (at a given time)...?
For example, if I have up to 100 concurrent requests, do I need 100 children to
prevent rejected requests?
It says "1 pending requests queued", and immediatly after, "queue overload".
What is the queue capacity?
In my test, I only had 5 images...



Sort of, you need one helper 'slot' for each concurrent request.

You can increase the number of 'slots' available by increasing the 
number of children/helpers and the number of concurrency=N each can handle.


at concurrency=1 you need 100 children for 100 requests,
at concurrency=2 you need 50 children for 100 requests
etc.

This is mitigated further by caching the helper results for TTL=N time. 
But worst-case is still 1 slot per request until the ACL result 
mini-cache finds a duplicate.
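
To put the arithmetic above into configuration terms, a sketch only: 'filter' is the
helper name from the log lines earlier in the thread, the helper path and the numbers
are placeholders, and concurrency=N requires a helper that speaks the channel-ID
protocol:

external_acl_type filter ttl=60 children=50 concurrency=2 %PATH /usr/local/bin/filter-helper
acl filter_allowed external filter
http_access deny !filter_allowed

That gives 50 x 2 = 100 request slots before the "queue overload" warning appears.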


Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE8


Fwd: Re: [squid-users] Squid Re-cache problem

2008-08-20 Thread Ana Leal

Amos,

I was reading your email again and saw this part at the end:
"
> Amos
> --
> I'm in Sydney for a few days now:
>
> Please use Squid 2.7.STABLE3 or 3.0.STABLE8
>
"

Does that mean we should use one of those versions of Squid?
Did you see the configuration I sent you?

Regards
Ana Leal

- Forwarded message from [EMAIL PROTECTED] -
    Date: Mon, 18 Aug 2008 08:27:33 +0200
    From: Ana Leal <[EMAIL PROTECTED]>
Reply-To: Ana Leal <[EMAIL PROTECTED]>
 Subject: Re: [squid-users] Squid Re-cache problem
      To: Amos Jeffries <[EMAIL PROTECTED]>


Hi Amos,

The Proxy Version is: 2.4.STABLE6
The Proxy Configuration is:
http_port $LNXIP:9080
visible_hostname lnx$OFCOME
error_directory /usr/lib/squid/errors/Spanish
cache_mgr Centro_Soporte
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \? \.asp$
no_cache deny QUERY
cache_access_log /var/log/squid/access.log
cache_store_log none
request_body_max_size 8 MB
acl alldst dst 0.0.0.0/0.0.0.0
acl all src 0.0.0.0/0.0.0.0
acl $OFCOME src $LNXNET
acl cisicret src 10.5.4.0/255.255.254.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl SSL_ports port 443 563 444
acl SSL_ports port 448  # U. Rioja uses 448 in addition to 82
acl SSL_ports port 8443 # Secure-zone access to 060.es
acl Safe_ports port 80-83   # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 563 444 # https, snews
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 901 # SWAT
acl Safe_ports port 448 # U. Rioja uses 448 in addition to 82
acl Safe_ports port 8443
$SQD_ACL1
$SQD_ACL2
acl purge method PURGE
acl CONNECT method CONNECT
acl wwwicex dstdomain www.icex.es
acl portalcma dstdomain portal.icex.es cdc.portal.icex.es
cache_peer 10.5.5.18 parent 9080 0 no-query no-digest no-netdb-exchange
acl badex dstdomain "/etc/squid/badex.acl"
acl badexip dst "/etc/squid/badexip.acl"
no_cache deny badex
no_cache deny badexip
no_cache deny wwwicex
cache_peer_access 10.5.5.18 allow badex
cache_peer_access 10.5.5.18 allow badexip
cache_peer_access 10.5.5.18 allow badex CONNECT
cache_peer_access 10.5.5.18 allow badexip CONNECT
cache_peer_access 10.5.5.18 allow wwwicex
cache_peer_access 10.5.5.18 allow portalcma
http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow $OFCOME
http_access deny all
icp_access allow all
append_domain .icex.es
never_direct allow badex
never_direct allow badexip
never_direct allow portalcma
never_direct deny alldst

Is there anything wrong or missing with this configuration?

Do you think that using Squid 2.7.STABLE3 or 3.0.STABLE8 will resolve the
case? If so, is there anything to pay attention to when configuring it?

Regards
Ana Leal

Quoting Amos Jeffries <[EMAIL PROTECTED]>:

> Ana Leal wrote:
>> Hi everyone!
>>
>> I'm having a problem with the cache of an application I use (ITSM from BMC)
>> with Internet Explorer 6.0.
>>
>> The normal architecture for this application to work is: Internet
>> Explorer 6.0 client -> Squid Proxy -> Mid-tier (BMC) -> Application (ITSM BMC).
>>
>> But when the proxy is connected, this application doesn't work as expected.
>>
>> I have proved that if I 'oblige' the system to 're-cache' the web every
>> time it is accessed (configuring Internet Explorer to check for new
>> changes every time a page is visited)
>
> understood.
>
>> and do not allow access through the Proxy to the
>> Mid-tier that supports the application
>
> huh! how? and what is the proxy config
>
>> (since it doesn't have the cache of
>> the proxy it has to re-cache every time) it works perfectly!
>
> Sounds like:
>  a) you are playing with refresh_pattern to ignore the application's
> required Cache-Controls.
>  b) the application has broken its own cache-controls.
>
>> But this situation creates a problem for the web users:
>>
>> This RE-CACHE causes very slow Internet performance, since the
>> caching has to be done every time the page is opened.
>>
>> Does anyone have/had a similar problem? How did you resolve it? Is it a
>> Squid Proxy bug? Could it be a version problem?
>>
>> Can anyone help me on this issue, please?
>>
>> Thanks
>>
>> Ana Leal
>>
>
> Amos
> --
> I'm in Sydney for a few days now:
>
> Please use Squid 2.7.STABLE3 or 3.0.STABLE8
>



Re: Fwd: Re: [squid-users] Squid Re-cache problem

2008-08-20 Thread Amos Jeffries

Ana Leal wrote:

Amos,

I was reading your email again and saw this part at the end:
"

Amos
--
I'm in Sydney for a few days now:

Please use Squid 2.7.STABLE3 or 3.0.STABLE8


"

Does that mean we should use one of those versions of Squid?



We always like people to be on the latest, as it has more bugs fixed 
etc. and we are usually more familiar with it.



Did you see the configuration I sent you?



Yes. Just puzzling my way through the wrapping :-)

Nothing that I can see wrong in that config, but I'm only familiar with 
2.6 and later. I always recommend testing a later version for anyone on
2.5 or lower.


If it is an actual unresolved bug we are better positioned to fix it in 
current releases.


The free support available is also better for supported releases, which
currently means very high-numbered 2.6 stables, or 2.7, or 3.0.


Just check the release notes for both 2.6 and whichever you upgrade to, 
for anything regarding your currently used features.


Of the config lines you use, only 'no_cache' and 'cache_access_log' have
changed: to just 'cache' and 'access_log'.
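
For example, taking lines from the posted config (only the directive names change;
the arguments stay the same):

no_cache deny QUERY                          ->  cache deny QUERY
cache_access_log /var/log/squid/access.log   ->  access_log /var/log/squid/access.log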


Amos


Regards
Ana Leal

- Forwarded message from [EMAIL PROTECTED] -
    Date: Mon, 18 Aug 2008 08:27:33 +0200
    From: Ana Leal <[EMAIL PROTECTED]>
Reply-To: Ana Leal <[EMAIL PROTECTED]>
 Subject: Re: [squid-users] Squid Re-cache problem
      To: Amos Jeffries <[EMAIL PROTECTED]>


Hi Amos,





Regards
Ana Leal

Quoting Amos Jeffries <[EMAIL PROTECTED]>:


Ana Leal wrote:

Hi everyone!

I'm having a problem with the cache of an application I use (ITSM from BMC)
with Internet Explorer 6.0.

The normal architecture for this application to work is: Internet
Explorer 6.0 client -> Squid Proxy -> Mid-tier (BMC) -> Application (ITSM BMC).

But when the proxy is connected, this application doesn't work as expected.

I have proved that if I 'oblige' the system to 're-cache' the web every
time it is accessed (configuring Internet Explorer to check for new
changes every time a page is visited)

understood.


and do not allow access through the Proxy to the
Mid-tier that supports the application

huh! how? and what is the proxy config


(since it doesn't have the cache of
the proxy it has to re-cache every time) it works perfectly!

Sounds like:
 a) you are playing with refresh_pattern to ignore the application's
required Cache-Controls.
 b) the application has broken its own cache-controls.


But this situation creates a problem for the web users:

This RE-CACHE causes very slow Internet performance, since the
caching has to be done every time the page is opened.

Does anyone have/had a similar problem? How did you resolve it? Is it a Squid
Proxy bug? Could it be a version problem?

Can anyone help me on this issue, please?

Thanks

Ana Leal


Amos
--
I'm in Sydney for a few days now:

Please use Squid 2.7.STABLE3 or 3.0.STABLE8





Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE8


RE: [squid-users] Reverse Proxy

2008-08-20 Thread Mario Almeida
Hi,
After adding the below option

always_direct allow all

I get a different error

The following error was encountered:

* Connection to 172.27.1.10 Failed 

The system returned:

(111) Connection refused

The remote host or network may be down. Please try the request again.

Your cache administrator is root.

Regards,
Mario

-Original Message-
From: Amos Jeffries [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, August 20, 2008 10:14 AM
To: Chris Robertson
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Reverse Proxy

Chris Robertson wrote:
> Mario Almeida wrote:
>> Hi All,
>>
>> Below is the setting I have done to test a reverse proxy
>>
>> http_port 3128 accel defaultsite=xyz.example.com vhost
>>
>> cache_peer 172.27.1.10 parent 8080 0 no-query originserver name=server1
>> acl server1_acl dstdomain www.xyz.example.com xyz.example.com
>> cache_peer_access server1 allow server1_acl
>> cache_peer_access server1 deny all
>>
>> But could not get it done
>> Below is the error message that I get
>>
>>
>> ERROR
>> The requested URL could not be retrieved
>>
>> While trying to retrieve the URL: http:// xyz.example.com /
>>
>> The following error was encountered:
>>
>> * Unable to forward this request at this time.
>> This request could not be forwarded to the origin server or to any parent
>> caches. The most likely cause for this error is that:
>>
>> * The cache administrator does not allow this cache to make direct
>> connections to origin servers, and
>>   
> 
> This seems unlikely given the cache_peer_access line, so...
> 
>> * All configured parent caches are currently unreachable.   
> 
> This is far more likely the issue at hand.  Check your cache.log for any 
> clues.  Verify you have the right IP and port for your parent server, 
> and that there are no firewall rules preventing access.  Try using wget 
> or Lynx on your Squid server to grab a page off the origin server.
> 
>> Your cache administrator is root.
>>
>>
>>
>> Regards,
>> Remy
>>   
> 
> Chris

There is also a weird side-case rarely seen with dstdomain that needs
checking here.

Mario:
  does it work if you change the ACL line to:
   acl server1_acl dstdomain .xyz.example.com

If not, check your config for lines mentioning always_direct or 
never_direct, and the network linkage between test proxy and web server 
as mentioned by Chris.

Amos
-- 
Please use Squid 2.7.STABLE4 or 3.0.STABLE8



[squid-users] How do I configure Squid to forward all requests to another proxy?

2008-08-20 Thread Wennie V. Lagmay
Dear all,

Using squid-2.5 and 2.6 forwarding all request to another proxy is simple:

"How do I configure Squid forward all requests to another proxy?
First, you need to give Squid a parent cache. Second, you need to tell Squid it 
can not connect directly to origin servers. This is done with three 
configuration file lines: 


cache_peer parentcache.foo.com parent 3128 0 no-query default
acl all src 0.0.0.0/0.0.0.0
never_direct allow all

Note, with this configuration, if the parent cache fails or becomes
unreachable, then every request will result in an error message.

In case you want to be able to use direct connections when all the parents go 
down you should use a different approach: 

cache_peer parentcache.foo.com parent 3128 0 no-query
prefer_direct off"

However, I am trying to do it with squid-2.7.STABLE4 and it is not working. Can
anybody help me with how to accomplish this?

Thank you very much.

Wennie


Re: [squid-users] Zero Sized Reply / Invalid response

2008-08-20 Thread Pedro Mansito Pérez


On 20/08/2008, at 4:24, Amos Jeffries wrote:


Pedro Mansito Pérez wrote:

On 15/08/2008, at 8:41, Amos Jeffries wrote:

Pedro Mansito Pérez wrote:

Hello,
Our Company is using Squid 2.6.STABLE14 on a Slackware 12.0 box.  
A few weeks ago we began to have errors accessing some web pages,  
but not all, on a supplier web site. If we do not use a proxy  
server we can access those web pages; we have tested it with  
Safari, Camino and Firefox on Mac OS X, and IE 6 and 7, and  
Firefox on Windows. On Squid 2.6 we get a Zero Sized Reply error;  
on a Squid 3.0.STABLE7 test box (also on Slackware 12.0) we have  
an Invalid Response Error. If, on Internet Explorer, we disable  
the use of HTTP/1.1 on proxy connections we can access the page  
using Squid.
The supplier insists that since we can access the pages without  
Squid, the problem must be ours. I replied to him that it all began
a few weeks ago, so the problem is theirs.


It's a problem with the Source web server.
The Server is sending FORBIDDEN chunked-encoded data to a HTTP/1.0  
client (Squid).


http://squidproxy.wordpress.com/2008/04/29/chunked-decoding/

Hello Amos,
I am not sure that is the problem. I think that the problem is  
that, when the client asks for the use of HTTP/1.1, Squid  
identifies itself as 1.1 compliant:


Ah, for squid 2.6 to be doing that it has to be patched to do so.
Such broken behavior is exactly why squid versions other than 2.7 are not
released with that ability.


I still think it's the chunk issue, as most in-use squid pass through
a version of Accept-Encoding that triggers the bug in some HTTP/1.1  
servers.

[...]

Try the Accept-Encoding config hack and see if it lets you turn on
HTTP/1.1 support in clients again.



Amos,

It fails on: 2.6 STABLE14, 2.7 STABLE4 and 3.0 STABLE7 (with and  
without the Accept-Encoding hack). By the way, http://squidproxy.wordpress.com/2008/04/29/chunked-decoding/ 
 mentions that


"This is currently only an issue in Squid 2.5 or earlier and 3.0,  
which is still highly modeled around 2.5."
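
For reference, the "Accept-Encoding config hack" discussed above is usually done along
these lines; a sketch using the Squid 2.6 directive name, with 'brokensites' as a
placeholder ACL for the supplier's domains (in 2.7 the request side of this is
request_header_access):

acl brokensites dstdomain .supplier.example.com
header_access Accept-Encoding deny brokensites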






[squid-users] bad file caching

2008-08-20 Thread Volodymyr Kostyrko
I've got one file stuck in my cache for two months despite the site having a
newer version:


1219244034.580  3 192.168.8.18 TCP_HIT/200 50599 GET 
http://www.freebsd.org/ports/auditfile.tbz - NONE/- 
application/x-bzip-compressed-tar


> fetch -v http://www.freebsd.org/ports/auditfile.tbz
scheme:   [http]
user: []
password: []
host: [www.freebsd.org]
port: [0]
document: [/ports/auditfile.tbz]
---> www.freebsd.org:80
looking up www.freebsd.org
connecting to www.freebsd.org:80
requesting http://www.freebsd.org/ports/auditfile.tbz
>>> GET /ports/auditfile.tbz HTTP/1.1
>>> Host: www.freebsd.org
>>> User-Agent: fetch libfetch/2.0
>>> Connection: close
>>>
<<< HTTP/1.0 200 OK
<<< Content-Type: application/x-bzip-compressed-tar
<<< Accept-Ranges: bytes
<<< ETag: "-943519291"
<<< Last-Modified: Wed, 04 Jun 2008 14:10:03 GMT
last modified: [2008-06-04 14:10:03]
<<< Content-Length: 50198
content length: [50198]
<<< Date: Wed, 04 Jun 2008 14:25:15 GMT
<<< Server: httpd/1.4.x LaHonda
<<< Age: 16663
<<< X-Cache: HIT from utwig.xim.bz
<<< X-Cache-Lookup: HIT from utwig.xim.bz:3128
<<< Via: 1.0 utwig.xim.bz (squid/3.0.STABLE8)
<<< Proxy-Connection: close
<<<
offset 0, length -1, size -1, clength 50198
local size / mtime: 50198 / 1212588603
remote size / mtime: 50198 / 1212588603
auditfile.tbz 100% of   49 kB   21 MBps

> fetch -v http://www.freebsd.org/ports/auditfile.tbz
scheme:   [http]
user: []
password: []
host: [www.freebsd.org]
port: [0]
document: [/ports/auditfile.tbz]
---> www.freebsd.org:80
looking up www.freebsd.org
connecting to www.freebsd.org:80
requesting http://www.freebsd.org/ports/auditfile.tbz
>>> GET /ports/auditfile.tbz HTTP/1.1
>>> Host: www.freebsd.org
>>> User-Agent: fetch libfetch/2.0
>>> Connection: close
>>>
<<< HTTP/1.1 200 OK
<<< Connection: close
<<< Content-Type: application/x-bzip-compressed-tar
<<< Accept-Ranges: bytes
<<< ETag: "-1888419386"
<<< Last-Modified: Wed, 20 Aug 2008 14:40:01 GMT
last modified: [2008-08-20 14:40:01]
<<< Content-Length: 51141
content length: [51141]
<<< Date: Wed, 20 Aug 2008 14:56:35 GMT
<<< Server: httpd/1.4.x LaHonda
<<<
offset 0, length -1, size -1, clength 51141
remote size / mtime: 51141 / 1219243201
auditfile.tbz 100% of   49 kB   28 kBps

config:
cache_replacement_policy heap LFUDA
memory_replacement_policy heap GDSF
refresh_pattern ^ftp: 1440 20% 1440 refresh-ims
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320

--
Sphinx of black quartz judge my vow.



[squid-users] if this is posted somewhere.. please tell me where to go... AD groups

2008-08-20 Thread nairb rotsak
Hello all,

I have squid 2.5STABLE12 running on an Ubuntu 6.06 box.  I have it joined to an 
AD domain and it works great.  

I want to add a group in AD that allows Inet use.  If they aren't in that 
group, they can't get out.  I would like it to stay seamless.. no login box.  
This is not a transparent setup.

I have seen this:

http://wiki.squid-cache.org/ConfigExamples/SquidAndLDAP
and this:
http://wiki.squid-cache.org/ConfigExamples/WindowsAuthenticationNTLM

The 2nd one is what I pretty much used to get this far... 

I just don't know how to tie it all together.. and I have looked at the 
wbinfo_group.pl.. but not sure if I need to go that far??

Again, if this is covered somewhere, sorry.. I have looked (obviously at the 
wiki.. but also on Google)

Thanks to all



  


Re: [squid-users] Mingw(patch for long file pointers) --with-large-files

2008-08-20 Thread chudy

Even using the 2.7.STABLE4 version (binary for Windows) with newly created
swap files it is still the same. I've been using the storeurl and aufs features
since the Squid HEAD; now that I'm trying to use COSS these warnings came up.

Henrik Nordstrom-5 wrote:
> 
> On Sun 2008-08-17 at 20:41 -0700, chudy wrote:
> 
>> One thing: I've been seeing warnings about "failed to unpack meta data" that I've
>> never seen in aufs.
> 
> Did you wipe your cache when changing the file size api?
> 
> 32-bit and 64-bit caches may be incompatible..
> 
> Regards
> Henrik
> 
> 
> 

...or maybe storeurl is not final, because storeurl mismatches when the content
is stored in memory and revalidated. On second thought, there is no need to use
storeurl on smaller objects, since speed is our concern, and the objects that
usually give the meta data warnings are smaller objects. I've tried
'storeurl_access deny smaller_content' for objects smaller than
maximum_object_size_in_memory and it seems to work fine.

But I still need confirmation.

On another thought: if objects are being cancelled by the clients, I want Squid
to continue downloading them, but at the lowest bandwidth priority... Is that
possible, or is there any workaround to make it happen?
Setting quick_abort_max to -1 (correct me if I'm wrong) uses the same bandwidth,
and it would be total congestion if these files are videos. It would be really
nice if the download ran at the lowest priority, and even better if the priority
went back to normal when the client retries the download.
-- 
View this message in context: 
http://www.nabble.com/Mingw%28patch-for-long-file-pointers%29---with-large-files-tp19025674p19070570.html
Sent from the Squid - Users mailing list archive at Nabble.com.



[squid-users] Generating Squid hash

2008-08-20 Thread John =)

Hi, this probably seems like a trivial question, but I have not been able to 
find any help in the mail archive.
 
How is the hash value that is used to index the object in the cache generated, please?
My intention is to be able to manually introduce new items into the cache and
update the logs accordingly.
 
I have tried MD5-hashing the HTTP method + URL of current cache items, but the
hash value does not match :( What format is it supposed to be done as... 'GET
http://www.myurl.com' ?
 
Thanks in advance,

John Redford.

Re: [squid-users] external_acl children...

2008-08-20 Thread John Doe
> Sort of, you need one helper 'slot' for each concurrent request.
> 
> You can increase the number of 'slots' available by increasing the 
> number of children/helpers and the number of concurrency=N each can handle.
> 
> at concurrency=1 you need 100 children for 100 requests,
> at concurrency=2 you need 50 children for 100 requests
> etc.
> 
> This is mitigated further by caching the helper results for TTL=N time. 
> But worst-case is still 1 slot per request until the ACL result 
> mini-cache finds a duplicate.

Ok, thx.
I first thought squid had buffers (waiting queues) for helpers because of the 
"up to 1 pending requests queued" and "queue overload" messages.
What do they mean?

Also, what are the "negative lookups" of negative_ttl of external_acl_type?
First I thought they were the ERR results, but apparently not.

I tried some "stress" tests (ab -n 1 -c 20 http://path/to/image.gif) and 
get around 770 req/s.
It seems low for a Xeon 3.40 Ghz with 3GB of RAM (cache_mem 2GB) and 200GB 
cache_dir on a RAID1 (with the system)... no?
I tried to comment out as many params as I could in the conf (removed siblings,
store logs, etc) but it does not change anything...
What's a normal number of reqs/s for such config?

Also, while the url_rewrite log lines would appear 1 times, I only get
like 14 external_acl log lines...
First 2 lines, then like 1 every seconds.
When I do x wgets, I get x external_acl logs.
I have ttl=0, so it should not be a cache issue.

Thx,
JD


  



RE: [squid-users] binary install for SOLARIS

2008-08-20 Thread Henrik Nordstrom
CoolStack is at http://cooltools.sunsource.net/coolstack/



On Wed, 2008-08-20 at 10:19 +0200, Van Camp Jan wrote:
> Thanks Amos and Henrik for the reply,
> 
> 
> Can you tell me where I can find/download a binary version of 3.0.STABLE8
> for Solaris, please?
> 
> Could you please give a URL?
> 
> Greetings,
> Jan
> 
> -Original Message-
> From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
> Sent: Tuesday, August 19, 2008 11:52 PM
> To: Van Camp Jan
> Cc: squid-users@squid-cache.org
> Subject: Re: [squid-users] binary install for SOLARIS
> 
> On Mon 2008-08-18 at 14:55 +0200, Van Camp Jan wrote:
> > Hello,
> > 
> > my team would like to download a binary version for solaris 9 of squid
> > 3.1 .
> 
> Squid-3.1 hasn't been released yet, so it's very unlikely you'll find
> binaries of Squid-3.1 for any platform..
> 
> Current Squid release is 3.0.STABLE8.
> 
> Note: CoolStack seems to include Squid-3.0 (exact version not known),
> but that's for Solaris 10..
> 
> Regards
> Henrik
> 




Re: [squid-users] bad file caching

2008-08-20 Thread Chris Robertson

Volodymyr Kostyrko wrote:
I've got one file stuck in my cache for two months despite the site having a
newer version:


1219244034.580  3 192.168.8.18 TCP_HIT/200 50599 GET 
http://www.freebsd.org/ports/auditfile.tbz - NONE/- 
application/x-bzip-compressed-tar


> fetch -v http://www.freebsd.org/ports/auditfile.tbz
scheme:   [http]
user: []
password: []
host: [www.freebsd.org]
port: [0]
document: [/ports/auditfile.tbz]
---> www.freebsd.org:80
looking up www.freebsd.org
connecting to www.freebsd.org:80
requesting http://www.freebsd.org/ports/auditfile.tbz
>>> GET /ports/auditfile.tbz HTTP/1.1
>>> Host: www.freebsd.org
>>> User-Agent: fetch libfetch/2.0
>>> Connection: close
>>>
<<< HTTP/1.0 200 OK
<<< Content-Type: application/x-bzip-compressed-tar
<<< Accept-Ranges: bytes
<<< ETag: "-943519291"
<<< Last-Modified: Wed, 04 Jun 2008 14:10:03 GMT
last modified: [2008-06-04 14:10:03]
<<< Content-Length: 50198
content length: [50198]
<<< Date: Wed, 04 Jun 2008 14:25:15 GMT
<<< Server: httpd/1.4.x LaHonda
<<< Age: 16663
<<< X-Cache: HIT from utwig.xim.bz
<<< X-Cache-Lookup: HIT from utwig.xim.bz:3128
<<< Via: 1.0 utwig.xim.bz (squid/3.0.STABLE8)
<<< Proxy-Connection: close
<<<
offset 0, length -1, size -1, clength 50198
local size / mtime: 50198 / 1212588603
remote size / mtime: 50198 / 1212588603
auditfile.tbz 100% of   49 kB   21 MBps


Mmmm.  No expiry information, so Squid has to do a best-effort 
approach.  Let's learn about refresh patterns 
(http://www.squid-cache.org/Versions/v3/3.0/cfgman/refresh_pattern.html).  
The default for Squid includes...


refresh_pattern . 0 20% 4320

...which would match the request mentioned.  So what does this mean?  In 
the absence of expiry information, Squid should use the age (gathered 
from the Last Modified date) to infer how long the object will be 
fresh.  The first number specifies there should be no lower limit on the 
freshness of the object.  If the freshness calculation concludes that 
the object is only fresh for 30 seconds, so be it.  The last number 
states that the maximum object freshness is 4320 minutes (3 days).  Even 
if the freshness calculation states the object could be fresh for 
another year, we'll verify freshness every 3 days.  The middle number is 
where the calculation comes in.  The cached object was last modified on 
June 4, 2008 at 14:10:03 GMT.  As the object gets older, it's assumed 
that it is less likely to change (we are predicting the future based on 
past performance), so after an hour of no changes, we assume that the 
object is not going to change in the next 12 minutes (60 * 20%).  After 
a day of no changes, we assume the object will not change for around 5 
hours.  At 15 days of no changes we hit the ceiling on freshness (15 * 
20%) and our freshness calculation becomes superfluous.


Obviously the object has changed, so you have a few options:
* Use the PURGE method with squidclient (see the sketch after this list)
* Force a refresh with your browser (hold down shift or control when you
press the refresh or reload button), use the -r switch with squidclient,
or --cache=off for wget.  fetch does not appear to have a method of
forcing a refresh.
* Add a cache deny for this domain (also sketched below)
* Wait for the freshness calculation to expire (3 days at the most)
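
For example (a sketch: the PURGE request assumes the standard 'acl purge method PURGE' /
'http_access allow purge localhost' rules are enabled in squid.conf, and 'freebsd_www'
is just a placeholder ACL name for the cache deny variant):

squidclient -m PURGE http://www.freebsd.org/ports/auditfile.tbz

acl freebsd_www dstdomain www.freebsd.org
cache deny freebsd_www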



> fetch -v http://www.freebsd.org/ports/auditfile.tbz
scheme:   [http]
user: []
password: []
host: [www.freebsd.org]
port: [0]
document: [/ports/auditfile.tbz]
---> www.freebsd.org:80
looking up www.freebsd.org
connecting to www.freebsd.org:80
requesting http://www.freebsd.org/ports/auditfile.tbz
>>> GET /ports/auditfile.tbz HTTP/1.1
>>> Host: www.freebsd.org
>>> User-Agent: fetch libfetch/2.0
>>> Connection: close
>>>
<<< HTTP/1.1 200 OK
<<< Connection: close
<<< Content-Type: application/x-bzip-compressed-tar
<<< Accept-Ranges: bytes
<<< ETag: "-1888419386"
<<< Last-Modified: Wed, 20 Aug 2008 14:40:01 GMT
last modified: [2008-08-20 14:40:01]
<<< Content-Length: 51141
content length: [51141]
<<< Date: Wed, 20 Aug 2008 14:56:35 GMT
<<< Server: httpd/1.4.x LaHonda
<<<
offset 0, length -1, size -1, clength 51141
remote size / mtime: 51141 / 1219243201
auditfile.tbz 100% of   49 kB   28 kBps

config:
cache_replacement_policy heap LFUDA
memory_replacement_policy heap GDSF
refresh_pattern ^ftp: 1440 20% 1440 refresh-ims
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320



Chris


[squid-users] squid reports with sarg

2008-08-20 Thread Luis Enrique

Hello list,
Is it possible to make the Squid reports using sarg show me both the user and
the IP address at the same time in the reports?
I tried it with the TAG: usertab, putting in the file:
192.168.0.1 user1
192.168.0.2 user2
but this doesn't work as I would like; my users change their machine/IP address
constantly, and for remote access too I would like it to show both things.
I tried it with the TAG: user_ip, but this only shows me the IP or the
username, never both together.
Has somebody done this before? Or is there some trick, doing something with the
access.log of Squid?






Re: [squid-users] Zero Sized Reply / Invalid response

2008-08-20 Thread Henrik Nordstrom
On Wed, 2008-08-20 at 15:15 +0100, Pedro Mansito Pérez wrote:

> It fails on: 2.6 STABLE14, 2.7 STABLE4 and 3.0 STABLE7 (with and  
> without the Accept-Encoding hack).

You don't need the Accept-Encoding hack with Squid-2.7.

What you should do now is to fire up wireshark on the proxy server and
look in detail at what the response from the web server looks like, and
post the result here..

1. Start wireshark.

2. Trigger the issue.

3. Locate the GET /... request sent to the web server where the problem
is seen.

4. Analyze -> Follow TCP stream to get it nicely formatted.


Regards
Henrik




Re: [squid-users] Squid is aborting and restarting its child process very often

2008-08-20 Thread Henrik Nordstrom
On Wed, 2008-08-20 at 14:00 +0800, Adrian Chadd wrote:
> Run the latest Squid-3.0 ; PRE5 is old and buggy.
> 
> Shout at the debian distribution for shipping such an old version.

Not only old & buggy, it is also not a stable release for production use, only
a pre-release for early adopter testing.

Regards
Henrik




[squid-users] squid/ftps

2008-08-20 Thread soltani
Hello all,

I have to do a job, but it seems kind of impossible. I have tried to gather
enough info to explain it.
First, the version is squid-2.5.STABLE14-1.4E.el4_6.2.i386.rpm
For instance, here is what I got from #squid on freenode:

 hello all , i have to do that : something in java --ftps--> squid 
--ftps--> vsftpd
 imad: Then you need to abuse the CONNECT method to establish tunnels over 
the proxy.
 for instant , to be honest i'm trying to understand what is this "ftps" 
.. :) ... by the way , why "abuse" ?
 ftps is SSL encrypted FTP.
 yah i know , but i always see about sftp , ftps is a kind of unusual
 the abuse is because you need to open CONNECT to pretty much any port, 
when CONNECT is designed to only allow a very limited number of well known 
ports for security reasons.. 

and this is from a website:

FTPS (FTP-SSL) is a real ftp that uses TSL/SSL to encrypt the control session 
and if required the data session. With FTPS the control session is always 
encrypted, but the data session might not be. Why is this? Because with the 
control session encrypted the authentication is protected and you always want 
this (normal ftp uses clear text). If you are NOT pre-encrypting the file, you 
want the data session encrypted so that the file is encrypted while the data is 
in flight. However, if you are pre-encrypting the file then you do not need to 
have the data connection encrypted as you do not need to add the overhead of 
encrypting the data connection, since the file is already encrypted. Understand 
that SFTP is SSH file transfer and FTPS is FTP with SSL, FTPS is a file 
transport layer on top of SSL or TLS. The FTPS adds SSL-enabled FTP send and 
receive capabilities, uses the FTP protocol to transfer files to and from 
SSL-enabled FTP servers


I know that FTPS is not "usual"; anyway, if someone has experience with
proxying FTPS through Squid, or can explain why we can't do it, thanks for your
answers (a configuration sketch of what the IRC answer implies follows after
this message).

IS 
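
A sketch of what the CONNECT approach from the IRC answer could look like in squid.conf;
the port numbers are assumptions for illustration: 990 is the usual FTPS control port,
and if the data session is tunnelled too, its port range would also need to be listed:

acl SSL_ports port 990     # FTPS control channel
acl Safe_ports port 990
# plus whichever data-channel ports the server uses, if those are tunnelled as well

With the default "http_access deny CONNECT !SSL_ports" rule this permits CONNECT to
port 990; the Java client then has to be configured to issue CONNECT requests through
the proxy instead of speaking FTP to it directly.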




Re: [squid-users] external_acl children...

2008-08-20 Thread Henrik Nordstrom
On Wed, 2008-08-20 at 09:49 -0700, John Doe wrote:

> Ok, thx.
> I first thought squid had buffers (waiting queues) for helpers because of the 
> "up to 1 pending requests queued" and "queue overload" messages.
> What do they mean?

It has buffers. The buffer is as large as the number of children. So if you
have 1 children then 2 requests may be processed (one currently being
processed, one queued).

Generally one should use the concurrency= option rather than a large
children= setting when making your own helper. You only need a lot of children
if your helper may block for extended periods of time, for example when
performing DNS lookups..

> Also, what are the "negative lookups" of negative_ttl of external_acl_type?
> First I thought they were the ERR results, but apparently not.

It is.

> I tried some "stress" tests (ab -n 1 -c 20 http://path/to/image.gif) and 
> get aroung 770req/s.
> It seems low for a Xeon 3.40 Ghz with 3GB of RAM (cache_mem 2GB) and 200GB 
> cache_dir on a RAID1 (with the system)... no?

Very low indeed.

> I tried to comment as much params as I could in the conf (removed siblings, 
> store logs, etc) but it does not change anything...
> What's a normal number of reqs/s for such config?

A simple TCP_MEM_HIT test on a Pentium 3 some years ago was several
thousand requests/s. Proxied requests are significantly less.

> Also, while the url_rewrite logs lines would appear 1 times, I only get 
> like 14 external_acl logs...
> First 2 lines, then like 1 every seconds
> When I do x wgets, I get x external_acl logs.
> I have ttl=0, so it should not be a cache issue.

Not sure ttl=0 really is "no cache". It may well be cached for 0 seconds
truncated downwards (integer math using whole seconds).. The point of
the external acl interface is to get cacheability and request merging.
If you don't want these then use the url rewriter interface.

Regards
Henrik




Re: Fwd: Re: [squid-users] Squid Re-cache problem

2008-08-20 Thread Henrik Nordstrom
On Wed, 2008-08-20 at 23:04 +1200, Amos Jeffries wrote:

> Of the config lines you use, only 'no_cache' and 'cache_access_log' have
> changed: to just 'cache' and 'access_log'.

The old names are still understood however..

Regards
Henrik




Re: [squid-users] How do I configure Squid to forward all requests to another proxy?

2008-08-20 Thread Henrik Nordstrom
On Wed, 2008-08-20 at 16:28 +0300, Wennie V. Lagmay wrote:
> Dear all,
> 
> Using squid-2.5 and 2.6 forwarding all request to another proxy is simple:
> 
> "How do I configure Squid forward all requests to another proxy?
> First, you need to give Squid a parent cache. Second, you need to tell Squid 
> it can not connect directly to origin servers. This is done with three 
> configuration file lines: 
> 
> 
> cache_peer parentcache.foo.com parent 3128 0 no-query default
> acl all src 0.0.0.0/0.0.0.0
> never_direct allow all
> 
> Note, with this configuration, if the parent cache fails or becomes
> unreachable, then every request will result in an error message.
> 
> In case you want to be able to use direct connections when all the parents go 
> down you should use a different approach: 
> 
> cache_peer parentcache.foo.com parent 3128 0 no-query
> prefer_direct off"
> 
> However, I am trying to do it with squid-2.7.STABLE4 and it is not working. Can
> anybody help me with how to accomplish this?

It's identical in 2.7.

cache_peer to tell Squid where it may forward.

never_direct to tell squid that it may not go direct.


What does cache.log say on the first request after a restart?

Regards
Henrik




Re: [squid-users] if this is posted somewhere.. please tell me where to go... AD groups

2008-08-20 Thread Henrik Nordstrom
On Wed, 2008-08-20 at 08:39 -0700, nairb rotsak wrote:
> The 2nd one is what I pretty much used to get this far... 
> 
> I just don't know how to tie it all together.. and I have looked at the 
> wbinfo_group.pl.. but not sure if I need to go that far??

far?

wbinfo_group.pl is the easiest way to get group lookups if you have
already done NTLM via Samba (a configuration sketch follows below).
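
A sketch of how the pieces are typically tied together in squid.conf: 'InternetUsers' is
a placeholder for the AD group name, the helper path varies by distribution, and this
assumes the NTLM auth_param setup from the wiki example is already in place:

external_acl_type nt_group ttl=60 %LOGIN /usr/lib/squid/wbinfo_group.pl
acl InetAllowed external nt_group InternetUsers
http_access allow InetAllowed
http_access deny all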

Regards
Henrik




Re: [squid-users] Generating Squid hash

2008-08-20 Thread Henrik Nordstrom
On Wed, 2008-08-20 at 16:24 +, John =) wrote:
> Hi, this probably seems like a trivial question, but I have not been able to 
> find any help in the mail archive.
>  
> How is the hash value used to index the object in the cache generated please? 
> My intentions are to be able to manually introduce new items to the cache and 
> update the logs accordingly.

The hash is only used internally.

method + URL -> md5 hash -> internal store object -> on-disk file number

The actual hash function can be seen in storeKeyPublic() in src/store_key_md5.cc
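
For illustration, a sketch of the same calculation using OpenSSL's MD5 routines: the key
is the MD5 of a single method-id byte followed by the raw URL bytes, not of the text
"GET http://...". The method-id value of 1 for GET is an assumption based on the
Squid 2.x method enum, so check store_key_md5.c in your source tree for the exact
numbering:

#include <stdio.h>
#include <string.h>
#include <openssl/md5.h>

int main(void)
{
    const char *url = "http://www.myurl.com/";
    unsigned char method_id = 1;   /* assumed METHOD_GET value in Squid 2.x */
    unsigned char digest[MD5_DIGEST_LENGTH];
    MD5_CTX ctx;

    MD5_Init(&ctx);
    MD5_Update(&ctx, &method_id, 1);     /* the method goes in as one byte, not as "GET" */
    MD5_Update(&ctx, url, strlen(url));  /* then the URL, with no separator */
    MD5_Final(digest, &ctx);

    for (int i = 0; i < MD5_DIGEST_LENGTH; i++)
        printf("%02x", digest[i]);
    printf("\n");
    return 0;
}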

Regards
Henrik





[squid-users] I enable offline_mode, after 3 days... access denied error occurs

2008-08-20 Thread Mr Crack
If offline_mode is disabled, the connection is slow but OK.
To speed up the connection, I enabled offline_mode and the connection is fast.
But after 3 days, the following error occurs when accessing some sites...
These sites are not banned by our ISP.
If this error occurs, I refresh 3-5 times and it works...
I do not set any time limit...


ERROR:  The requested URL could not be retrieved

While trying to retrieve the URL:
http://z.about.com/d/paranormal/1/0/C/T/mumified_mermaid.jpg
The following error was encountered:

Access Denied.
Access control configuration prevents your request from being allowed
at this time. Please contact your service provider if you feel this is
incorrect.
Your cache administrator is root.

Generated Wed, 20 Aug 2008 06:04:30 GMT by test.abc.net.mm (squid/2.6.STABLE6)



So I disabled offline_mode again, but this error still occurs...