Re: [squid-users] if this is posted somewhere.. please tell me where to go... AD groups

2008-08-21 Thread nairb rotsak
Fantastic!  I will try this in the morning!  Thanks Chris!  This is exactly 
what I was looking for!



- Original Message 
From: chris brain <[EMAIL PROTECTED]>
To: squid-users@squid-cache.org
Sent: Thursday, August 21, 2008 10:26:15 PM
Subject: Re: [squid-users] if this is posted somewhere.. please tell me where 
to go... AD groups

Hi. From my experience with NTLM and AD, this is the best way we found to
implement group membership:

ntlm_auth already has a mechanism to provide this; it's just that the
documentation is difficult to follow.

squid.conf :

auth_param basic program 
/usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic 
--require-membership-of="our_ad_domain\\proxyusers_group"

auth_param ntlm program 
/usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp 
--require-membership-of="our_ad_domain\\proxyusers_group"

where our_ad_domain = the AD domain
where proxyusers_group = the group of users allowed to access the proxy

We found that the \\ and the " must be included for this to work correctly.

Thanks Chris 






Re: [squid-users] (111) Connection refused

2008-08-21 Thread ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░
can you give me the iptables rules?
:(

On Fri, Aug 22, 2008 at 10:50 AM, Michael Alger <[EMAIL PROTECTED]> wrote:
> On Fri, Aug 22, 2008 at 10:07:53AM +0700, ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░ wrote:
>> ## Forward port 80 ke mail server
>> /sbin/iptables -t nat -A PREROUTING -p tcp -i eth0 -d 202.169.51.119
>>--dport 80 -j DNAT --to-destination 172.16.0.2
>
> This looks like you're redirecting from your external interface's
> port 80 to another server. Presumably there's nothing listening on
> port 80 on your DMZ server?
>
>> problem :
>> i cant browse to my-sub.domain.ext from network
>> but i can browse my-sub.domain.ext from external ( other place )
>>
>> The following error was encountered:
>>
>>* Connection to 202.169.51.119 Failed
>>
>> The system returned:
>>
>>(111) Connection refused
>
> Your proxy is connecting from a different interface (eth2 I think)
> and therefore the connection to port 80 is not being redirected to
> the mail server. You *may* be able to solve this by also redirecting
> the connection from your proxy server, but you'll also need to use
> source NAT so your mail server's www service sends its replies to
> your DMZ server. Without the SNAT, the mail server will reply
> directly to the proxy server, and that will confuse the proxy
> because it thinks it's talking to your external IP.
>
> The other common solution to this problem is to use so-called "split
> horizon DNS", whereby you have internal DNS servers which return the
> internal address (i.e. my-sub.domain.ext will resolve to 172.16.0.1,
> rather than your external IP) but your external DNS servers will
> return your external address. That way your clients inside the
> network get the correct address.
>
> Depending on how your squid is doing DNS lookups, you may be able to
> add an entry to the /etc/hosts file on your proxy server and then
> explicitly configure the proxy for your clients. If that works this
> might provide an acceptable short-term solution.
>



-- 
-=-=-=-=


Re: [squid-users] (111) Connection refused

2008-08-21 Thread Michael Alger
On Fri, Aug 22, 2008 at 10:07:53AM +0700, ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░ wrote:
> ## Forward port 80 ke mail server
> /sbin/iptables -t nat -A PREROUTING -p tcp -i eth0 -d 202.169.51.119
>--dport 80 -j DNAT --to-destination 172.16.0.2

This looks like you're redirecting from your external interface's
port 80 to another server. Presumably there's nothing listening on
port 80 on your DMZ server?

> problem :
> i cant browse to my-sub.domain.ext from network
> but i can browse my-sub.domain.ext from external ( other place )
> 
> The following error was encountered:
> 
>* Connection to 202.169.51.119 Failed
> 
> The system returned:
> 
>(111) Connection refused

Your proxy is connecting from a different interface (eth2 I think)
and therefore the connection to port 80 is not being redirected to
the mail server. You *may* be able to solve this by also redirecting
the connection from your proxy server, but you'll also need to use
source NAT so your mail server's www service sends its replies to
your DMZ server. Without the SNAT, the mail server will reply
directly to the proxy server, and that will confuse the proxy
because it thinks it's talking to your external IP.
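A rough sketch of what I mean, reusing the addresses from your script (the
SNAT source address is my guess at this box's address on the mail server's
network; substitute your own):

  # also redirect the proxy's outbound port-80 connections to the mail server
  iptables -t nat -A PREROUTING -p tcp -s 192.168.222.2 -d 202.169.51.119 \
      --dport 80 -j DNAT --to-destination 172.16.0.2
  # rewrite the source so the mail server replies back through this box
  iptables -t nat -A POSTROUTING -p tcp -d 172.16.0.2 --dport 80 \
      -j SNAT --to-source 172.16.0.1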

The other common solution to this problem is to use so-called "split
horizon DNS", whereby you have internal DNS servers which return the
internal address (i.e. my-sub.domain.ext will resolve to 172.16.0.1,
rather than your external IP) but your external DNS servers will
return your external address. That way your clients inside the
network get the correct address.

Depending on how your squid is doing DNS lookups, you may be able to
add an entry to the /etc/hosts file on your proxy server and then
explicitly configure the proxy for your clients. If that works this
might provide an acceptable short-term solution.
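For example, something like this in /etc/hosts on the proxy (the internal
address is my guess based on your DNAT target):

  172.16.0.2   my-sub.domain.ext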


Re: [squid-users] if this is posted somewhere.. please tell me where to go... AD groups

2008-08-21 Thread chris brain
Hi. From my experience with NTLM and AD, this is the best way we found to
implement group membership:

ntlm_auth already has a mechanism to provide this; it's just that the
documentation is difficult to follow.

squid.conf :

auth_param basic program 
/usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic 
--require-membership-of="our_ad_domain\\proxyusers_group"

auth_param ntlm program 
/usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp 
--require-membership-of="our_ad_domain\\proxyusers_group"

where our_ad_domain = the AD domain
where proxyusers_group = the group of users allowed to access the proxy

We found that the \\ and the " must be included for this to work correctly.
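You can sanity-check the helper and the group name outside Squid by running
the basic variant by hand and typing a "user password" pair on stdin; it
should answer OK or ERR (the user and password below are placeholders):

  /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic \
      --require-membership-of="our_ad_domain\\proxyusers_group"
  someuser somepassword
  OK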

Thanks Chris 







Re: [squid-users] Cache URL with "?"

2008-08-21 Thread Amos Jeffries
>> On tis, 2008-07-22 at 23:37 +1200, Amos Jeffries wrote:
>>
>> Only one issue has come up: under sibling relationships, squid's method
>> of relaying requests to siblings is slightly bad and may break the proper
>> expected behavior. All other setups should be fine.
>>
>>
>>> Also seen in parent relations.
>>>
>>> Regards Henrik
>
> Hi,
>
> What is the nature of the problems with dynamic caching and
> sibling/parent relationships? I tried googling for more info but
> couldn't find anything.

In general dynamic pages should be cacheable. There are two cases known to
me where that is false.

a) the page is dynamic, without cache-control info, and meant to be
non-cacheable.

  This is the reason we add the 0 0% 0 refresh pattern to replace the old
QUERY acl. It means: consider all indeterminate dynamic requests
non-cacheable, just to be safe.
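For reference, the wiki's replacement pair looks like this (the cgi-bin/?
line must come before the catch-all '.' line):

  refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
  refresh_pattern . 0 20% 4320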

b) the page is one of the above but has been passed to a peer by squid.

  This is our fault: squid adds an otherwise unnoticeable cache-control
header to requests passed in peering. This breaks the refresh_pattern
safety check for (a).

No one has yet coded the patch for (b). So dynamic caching works in squid,
but not on requests received from peers.

>
> I'm using 4 squid nodes [2.7STABLE3] as accelerators and encountered
> some odd behavior recently after enabling dynamic caching. There are
> sibling relationships among the 4 nodes using ICP and cache digests.
> Under medium load I've been able to successfully cache and serve
> dynamic URLs:
>
>   GET http://contentreal.foobar.com/clown_widget.swf?version=20080820
>   STORE_OK  IN_MEMORY SWAPOUT_NONE PING_NONE
>   CACHABLE,VALIDATED
>
> But under heavier load [20-50mbit per node] I get multiple objects [20-30]
> like:
>
>   GET http://contentreal.foobar.com/clown_widget.swf?version=20080820
>   STORE_PENDING NOT_IN_MEMORY SWAPOUT_NONE PING_DONE
>   RELEASE_REQUEST,DISPATCHED,PRIVATE,VALIDATED
>
>   GET http://contentreal.foobar.com/clown_widget.swf?version=20080820
>   STORE_OK  NOT_IN_MEMORY SWAPOUT_NONE PING_DONE
>   RELEASE_REQUEST,DISPATCHED,PRIVATE,VALIDATED,ABORTED
>
> and requests for this URL timeout after about 25-50% of the download is
> done.
>
> Headers are being set via apache/mod_expires:
>
>   [EMAIL PROTECTED] ~]# curl -I
> http://content5.foobar.com/clown_widget.swf?version=20080820
>   HTTP/1.0 200 OK
>   Date: Thu, 21 Aug 2008 09:53:50 GMT
>   Server: Apache
>   Last-Modified: Fri, 08 Aug 2008 06:07:10 GMT
>   ETag: "35b0f"
>   Accept-Ranges: bytes
>   Content-Length: 219919
>   Cache-Control: max-age=86400
>   Expires: Fri, 22 Aug 2008 09:53:50 GMT
>   Content-Type: application/x-shockwave-flash
>   X-Cache: MISS from squid03.foobar.com
>   X-Cache-Lookup: MISS from squid03.foobar.com:80
>   Via: 1.1 squid03.foobar.com:80 (squid)
>   Connection: close
>
> I followed http://wiki.squid-cache.org/ConfigExamples/DynamicContent,
> acl QUERY is disabled and the swf is < maximum_object_size_in_memory.
> Any assistance would be greatly appreciated.
>
> Thanks to all who maintain squid, this is great and extremely useful
> software.
>
> murray
>
> # squid.conf #
> http_port 80 defaultsite=contentreal.foobar.com
> cache_peer 110.0.2.99 parent 80 0 no-query no-digest originserver
> cache_peer 110.0.2.76 sibling 80 3130
> cache_peer 110.0.2.77 sibling 80 3130 proxy-only
> cache_peer 110.0.2.78 sibling 80 3130 proxy-only
> digest_generation on
> #hierarchy_stoplist cgi-bin ?
> #acl QUERY urlpath_regex cgi-bin \?
> #cache deny QUERY
> shutdown_lifetime 10 seconds
> strip_query_terms off
> acl apache rep_header Server ^Apache
> broken_vary_encoding allow apache
> cache_mem 5200 MB
> maximum_object_size_in_memory 500 KB
> ipcache_size 4096
> fqdncache_size 4096
> cache_dir null /tmp
> logformat squid  %tl.%03tu %6tr %>a %Ss/%03Hs %
> access_log /usr/local/squid/var/logs/access.log squid
> negative_ttl 30 seconds
> cache_access_log /dev/null
> cache_store_log  /dev/null
> buffered_logs on
> client_db off
> refresh_pattern html$          0 20%     1
> refresh_pattern txt$           0 20%     1
> refresh_pattern xml$           0 20%     5
> refresh_pattern swf$           0 20%     5
> refresh_pattern flv$           0 20%    10
> refresh_pattern .              0 20%  4320
> refresh_pattern (cgi-bin|\?)   0  0%     0
> #refresh_pattern ^ftp:      1440 20% 10080
> #refresh_pattern ^gopher:   1440  0%  1440
> acl N1 src 110.0.2.76
> acl N2 src 110.0.2.77
> acl N3 src 110.0.2.78
> icp_access allow N1
> icp_access allow N2
> icp_access allow N3
> acl all src 0.0.0.0/0.0.0.0
> acl manager proto cache_object
> acl localhost src 127.0.0.1/255.255.255.255
> acl to_localhost dst 127.0.0.0/8
> acl SSL_ports port 443
> acl Safe_ports port 80# http
> acl CONNECT method CONNECT
> acl purge method PURGE
> http_access allow purge loc

[squid-users] (111) Connection refused

2008-08-21 Thread ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░
hello
I have a problem
please see h++p://amyhost[dot]com/data/1.jpg

and this is my squid conf...
-start---
#logformat squid %>a [%tl] "%rm %ru HTTP/%rv" %Hs %
echo 1 > /proc/sys/net/ipv4/ip_forward
/etc/init.d/networking restart
#-
# eth0 = WAN1 = 202.169.51.119
# eth1 = DMZ = 192.168.222.1 ( connected to MAILSERVER & WEBSERVER -
# for now only the mail server is in the simulation )
# eth2 = LAN = 192.168.222.2 ( connected to PROXY SERVER - for now in the
# simulation PROXY SERVER = CLIENT )
#--

# Sweep up (flush all old rules)
/sbin/iptables --flush
/sbin/iptables --table nat --flush
/sbin/iptables --delete-chain
/sbin/iptables --table nat --delete-chain
/sbin/iptables -F -t nat

# Bridge DMZ <=> LAN
/sbin/iptables -A FORWARD -i eth2 -o eth1 -m state --state
NEW,ESTABLISHED,RELATED -j ACCEPT
/sbin/iptables -A FORWARD -i eth1 -o eth2 -m state --state
ESTABLISHED,RELATED -j ACCEPT

# Bridge DMZ <=> Mail Server & Webserver
/sbin/iptables -A FORWARD -i eth1 -o eth0 -m state --state
ESTABLISHED,RELATED -j ACCEPT
/sbin/iptables -A FORWARD -i eth0 -o eth1 -m state --state
NEW,ESTABLISHED,RELATED -j ACCEPT

# Bridge WAN1 <=> LAN
/sbin/iptables -A FORWARD -i eth2 -o eth0 -m state --state
ESTABLISHED,RELATED -j ACCEPT
/sbin/iptables -A FORWARD -i eth0 -o eth2 -m state --state
NEW,ESTABLISHED,RELATED -j ACCEPT

## Forward port 25 to mail server
/sbin/iptables -t nat -A PREROUTING -p tcp -i eth0 -d 202.169.51.119
--dport 25 -j DNAT --to-destination 172.16.0.2

## Forward port 80 to mail server
/sbin/iptables -t nat -A PREROUTING -p tcp -i eth0 -d 202.169.51.119
--dport 80 -j DNAT --to-destination 172.16.0.2

## Forward port 110 to mail server
/sbin/iptables -t nat -A PREROUTING -p tcp -i eth0 -d 202.169.51.119
--dport 110 -j DNAT --to-destination 172.16.0.2

## Forward port 2810 to mail server
/sbin/iptables -t nat -A PREROUTING -p tcp -i eth0 -d 202.169.51.119
--dport 2810 -j DNAT --to-destination 172.16.0.2
/sbin/iptables -t nat -A PREROUTING -p tcp -i eth0 -d 202.169.51.119
--dport 3810 -j DNAT --to-destination 172.16.0.3
# masquerade
/sbin/iptables --table nat --append POSTROUTING --out-interface eth0
-j MASQUERADE
/sbin/iptables --append FORWARD --in-interface  eth0 -j ACCEPT

## REDIRECT
# iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT
--to-port 8080

# transparent proxy - WARNING THIS IS TEMPORARY - SEE eth2 -- uses
# DansGuardian on port 2211
/sbin/iptables -t nat -A PREROUTING -i eth2 -p tcp -s
192.168.222.0/255.255.255.0 --dport 80 -j DNAT --to 192.168.222.2:2211


exit 0
=

problem:
I can't browse to my-sub.domain.ext from the network,
but I can browse my-sub.domain.ext from external (another place).

my squid = transparent

when I type http://my-sub.domain.ext
it says (Mozilla FF) "Problem Loading page"

then I put the squid/proxy IP manually,
192.168.222.2 and port 2210, into my Mozilla FF,
and it says:
ERROR
The requested URL could not be retrieved

While trying to retrieve the URL: http://my-sub.domain.ext/

The following error was encountered:

   * Connection to 202.169.51.119 Failed

The system returned:

   (111) Connection refused

The remote host or network may be down. Please try the request again.

Your cache administrator is [EMAIL PROTECTED]
Generated Fri, 22 Aug 2008 02:12:13 GMT by domain.ext (squid/2.6.STABLE18)
---

Need help ASAP.


-- 
-=-=-=-=


Re: [squid-users] compiling, installing and running

2008-08-21 Thread Amos Jeffries
> Hello, I want to enable digest authentication.
> Recently I configured, installed, and ran squid 3.0STABLE8 with these
> parameters:
>
> ./configure --prefix=/usr --localstatedir=/var --with-default-user=proxy \
>   --sysconfdir=/etc/squid --datadir=${prefix}/share/squid \
>   --libexecdir=${prefix}/lib/squid --enable-linux-netfilter \
>   --enable-delay-pools --enable-useragent-log --enable-auth=digest,basic \
>   --enable-auth=digest --enable-useragent-log \
>   --enable-digest-auth-helpers=password --enable-default-err-language=Spanish
> make
> make install
>
> when I try to run squid, it shows this error:
>


It appears that your passwd file does not exist. The helpers are aborting
when they discover this. Squid itself aborts after a reasonable length of
time with 100% helper failure.

  "cannot stat /etc/apche2/passwd"

IMO there is probably a typo (a missing 'a') in that file path.
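If it is the path that is wrong, fixing squid.conf should be enough; if the
file genuinely doesn't exist, it needs creating and must be readable by the
squid user. A minimal sketch, assuming the plain "user:password" format that
digest_pw_auth reads (the user and password are placeholders):

  printf 'testuser:testpassword\n' > /etc/apache2/passwd
  chown proxy /etc/apache2/passwd
  chmod 640 /etc/apache2/passwd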

Amos




Re: [squid-users] Can Squid hide all 404s from clients?

2008-08-21 Thread Amos Jeffries
> Hello, Leonardo.
>
> Thanks for your prompt reply!
>
> On 8/21/08 3:02 PM, "Leonardo Rodrigues Magalhães"
> <[EMAIL PROTECTED]> wrote:
>> are you sure you wanna do this kind of configuration ???
> Yes. I am aware that my request is unusual, as this is to be a
> special-purpose installation of squid.
>
>> have you
>> ever imagined that you can be feeding your users with really
>> invalid/very outdated pages ???
> Of course -- it is implicit in my request. The "users" of this particular
> squid instance will not be humans with desktop browsers, but other
> software
> systems that need a layer of "availability fault-tolerance" between
> themselves and the URL resources they depend on.
>
>> If the site admin took the files off of
>> the webserver, maybe there's some reason for that 
> The servers this Squid server will pull URLs from are many and diverse,
> and
> maintained by admins with widely different competency levels and uptime
> expectations.
>
>> I don't know if this can be done,
> Bummer. After all that explanation of why I shouldn't, I thought for sure
> you were going to say "But if you REALLY want to...".   ;)
>
> So, now that I have explained my need, the question remains unanswered:
> Is it possible to configure Squid so that it always serves the "latest
> available" version of any given URL, even if the URL is no longer
> available
> at the original source?
>

That's how HTTP works, yes: 'latest available'. What you don't seem to
understand is that a 404 page IS the latest available copy when an object
has been administratively withdrawn.

Your fault-tolerance requirement brings to mind the stale-if-error
capability recently sponsored into Squid 2.7. That will let squid serve a
stale copy of something if the origin source fails. I'm not too savvy on
the details, but it's worth a look. It should get you past a lot of the
possible temporary errors.
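As I understand it, it is driven by a Cache-Control extension in the origin
server's responses, along the lines of:

  Cache-Control: max-age=600, stale-if-error=86400

which would let a stale copy be served for up to a day when revalidation
hits an error.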

To achieve 100% retrieval of objects is not possible. Any attempt at it
ends up serving stale objects that should never have been stored.

The closest you can get is to:
 a) use the stale-if-error where you can be sure its safe to do so
 b) get as many of the source apps/servers to provide correct caching
control headers.
 c) admit that there will be occasional 404s coming out (but rarely in a
well tuned system)

Given the variance in levels of admin knowledge, you can confidently say
the Squid layer handles a certain level of faults and recovery, and
provides a higher level than it accepts. Then you can start beating the
admins at both supply and drain ends with the clue-stick when their apps
fail to provide sufficient confidence on the Squid input or fail to handle
a low level of faults, since the problem is entirely theirs and you can
prove it by comparison with the working app feeds.

The using software MUST be able to gracefully cope with occasional network
idiocy. Anything less is a major design flaw.

Amos




Re: [squid-users] Blocking ".com" blocks "gmail.com"

2008-08-21 Thread Amos Jeffries
> Hi,
>
>    I'm using SquidGuard to manage the users, groups, and ACLs of my proxy
> server, but I have a problem: when I put ".com" in my expression list of
> blocks, "gmail.com" is blocked too...
>
>   My expressionlis file is
> "(.*)?(\.com(\?.*)?$|\.zip(\?.*)?$|message|messages)(.*)?"
>
>
>   Does anyone know what the problem is?
>

You are perhaps blocking (.*)\.com in your regex?

If you want actual help, please indicate what your attempted pattern
is/was and what you actually want it to do.

Saying what you started with before changes, and the result after changes
only tells us you don't understand regex enough to make the right change.
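For instance, if the intent was to block downloads of *files* ending in .com
rather than .com domains, requiring a path after the hostname is one
approach (an untested sketch, assuming the expression is matched against the
full URL):

  ^https?://[^/]+/.*\.(com|zip)(\?.*)?$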

Amos




Re: [squid-users] Problem when adding another hard disk.. squid cannot create any files or dir on another hard disk

2008-08-21 Thread Mr Crack
chown squid.squid /mnt/cache2
and disabling SELinux works for me.
Sorry, I forgot to mention that I use RHEL 5.


Thank you for your reply.

On Thu, Aug 21, 2008 at 4:25 AM, John Doe <[EMAIL PROTECTED]> wrote:
>> My problem is I don't know how to mount /mnt/cache2 as the squid user
>> If i can mount /mnt/cache2 as squid, the problem should be OK
>> My cache_effective_user is squid and cache_effective_group also squid
>
> Once /mnt/cache2 is mounted, do
>
>  chown squid:squid /mnt/cache2
>
> JD
>
>
>
>
>


Re: [squid-users] Can Squid hide all 404s from clients?

2008-08-21 Thread Benton Roberts
Hello, Leonardo.

Thanks for your prompt reply!

On 8/21/08 3:02 PM, "Leonardo Rodrigues Magalhães"
<[EMAIL PROTECTED]> wrote:
> are you sure you wanna do this kind of configuration ???
Yes. I am aware that my request is unusual, as this is to be a
special-purpose installation of squid.

> have you
> ever imagined that you can be feeding your users with really
> invalid/very outdated pages ???
Of course -- it is implicit in my request. The "users" of this particular
squid instance will not be humans with desktop browsers, but other software
systems that need a layer of "availability fault-tolerance" between
themselves and the URL resources they depend on.

> If the site admin took the files off of
> the webserver, maybe there's some reason for that 
The servers this Squid server will pull URLs from are many and diverse, and
maintained by admins with widely different competency levels and uptime
expectations.

> I don't know if this can be done,
Bummer. After all that explanation of why I shouldn't, I thought for sure
you were going to say "But if you REALLY want to...".   ;)

So, now that I have explained my need, the question remains unanswered:
Is it possible to configure Squid so that it always serves the "latest
available" version of any given URL, even if the URL is no longer available
at the original source?

- benton





[squid-users] compiling, installing and running

2008-08-21 Thread Luis Enrique

Hello, I want to enable digest authentication.
Recently I configured, installed, and ran squid 3.0STABLE8 with these
parameters:

./configure --prefix=/usr --localstatedir=/var --with-default-user=proxy \
  --sysconfdir=/etc/squid --datadir=${prefix}/share/squid \
  --libexecdir=${prefix}/lib/squid --enable-linux-netfilter \
  --enable-delay-pools --enable-useragent-log --enable-auth=digest,basic \
  --enable-auth=digest --enable-useragent-log \
  --enable-digest-auth-helpers=password --enable-default-err-language=Spanish

make
make install

when I try to run squid, it shows this error:

2008/08/21 15:09:26| Starting Squid Cache version 3.0.STABLE8 for 
i686-pc-linux-gnu...

2008/08/21 15:09:26| Process ID 7395
2008/08/21 15:09:26| With 1024 file descriptors available
2008/08/21 15:09:26| Performing DNS Tests...
2008/08/21 15:09:27| Successful DNS name lookup tests...
2008/08/21 15:09:27| DNS Socket created at 169.158.83.181, port 32772, FD 8
2008/08/21 15:09:27| Adding nameserver 169.158.128.136 from squid.conf
2008/08/21 15:09:27| Adding nameserver 169.158.128.156 from squid.conf
2008/08/21 15:09:27| helperOpenServers: Starting 5 'digest_pw_auth' 
processes

cannot stat /etc/apche2/passwd
cannot stat /etc/apche2/passwd
cannot stat /etc/apche2/passwd
cannot stat /etc/apche2/passwd
2008/08/21 15:09:27| User-Agent logging is disabled.
cannot stat /etc/apche2/passwd
2008/08/21 15:09:27| Unlinkd pipe opened on FD 18
2008/08/21 15:09:27| Swap maxSize 102400 KB, estimated 7876 objects
2008/08/21 15:09:27| Target number of buckets: 393
2008/08/21 15:09:27| Using 8192 Store buckets
2008/08/21 15:09:27| Max Mem  size: 102400 KB
2008/08/21 15:09:27| Max Swap size: 102400 KB
2008/08/21 15:09:27| Version 1 of swap file without LFS support detected...
2008/08/21 15:09:27| Rebuilding storage in /var/spool/squid (DIRTY)
2008/08/21 15:09:27| Using Least Load store dir selection
2008/08/21 15:09:27| Set Current Directory to /var/spool/squid
2008/08/21 15:09:27| Loaded Icons.
2008/08/21 15:09:27| Accepting  HTTP connections at 192.168.158.5, port 
8080, FD 20.
2008/08/21 15:09:27| Accepting ICP messages at 169.158.83.181, port 3130, FD 
21.

2008/08/21 15:09:27| Outgoing ICP messages on port 3130, FD 22.
2008/08/21 15:09:27| Accepting HTCP messages on port 4827, FD 23.
2008/08/21 15:09:27| Outgoing HTCP messages on port 4827, FD 24.
2008/08/21 15:09:27| Ready to serve requests.
2008/08/21 15:09:27| Done reading /var/spool/squid swaplog (2 entries)
2008/08/21 15:09:27| Finished rebuilding storage from disk.
2008/08/21 15:09:27| 2 Entries scanned
2008/08/21 15:09:27| 0 Invalid entries.
2008/08/21 15:09:27| 0 With invalid flags.
2008/08/21 15:09:27| 2 Objects loaded.
2008/08/21 15:09:27| 0 Objects expired.
2008/08/21 15:09:27| 0 Objects cancelled.
2008/08/21 15:09:27| 0 Duplicate URLs purged.
2008/08/21 15:09:27| 0 Swapfile clashes avoided.
2008/08/21 15:09:27|   Took 0.01 seconds (248.91 objects/sec).
2008/08/21 15:09:27| Beginning Validation Procedure
2008/08/21 15:09:27| WARNING: digestauthenticator #5 (FD 13) exited
2008/08/21 15:09:27| WARNING: digestauthenticator #4 (FD 12) exited
2008/08/21 15:09:27| WARNING: digestauthenticator #3 (FD 11) exited
2008/08/21 15:09:27| WARNING: digestauthenticator #2 (FD 10) exited
2008/08/21 15:09:27| Too few digestauthenticator processes are running
FATAL: The digestauthenticator helpers are crashing too rapidly, need help!

Squid Cache (Version 3.0.STABLE8): Terminated abnormally.
CPU Usage: 0.024 seconds = 0.012 user + 0.012 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
total space in arena:3220 KB
Ordinary blocks: 3129 KB  7 blks
Small blocks:   0 KB  0 blks
Holding blocks:  1636 KB  7 blks
Free Small blocks:  0 KB
Free Ordinary blocks:  90 KB
Total in use:4765 KB 148%
Total free:90 KB 3%
2008/08/21 15:09:30| Starting Squid Cache version 3.0.STABLE8 for 
i686-pc-linux-gnu...

2008/08/21 15:09:30| Process ID 7403
2008/08/21 15:09:30| With 1024 file descriptors available
2008/08/21 15:09:30| Performing DNS Tests...
2008/08/21 15:09:31| Successful DNS name lookup tests...
2008/08/21 15:09:31| DNS Socket created at 169.158.83.181, port 32772, FD 8
2008/08/21 15:09:31| Adding nameserver 169.158.128.136 from squid.conf
2008/08/21 15:09:31| Adding nameserver 169.158.128.156 from squid.conf
2008/08/21 15:09:31| helperOpenServers: Starting 5 'digest_pw_auth' 
processes

cannot stat /etc/apche2/passwd
cannot stat /etc/apche2/passwd
cannot stat /etc/apche2/passwd
cannot stat /etc/apche2/passwd
2008/08/21 15:09:31| User-Agent logging is disabled.
cannot stat /etc/apche2/passwd
2008/08/21 15:09:31| Unlinkd pipe opened on FD 18
2008/08/21 15:09:31| Swap maxSize 102400 KB, estimated 7876 objects
2008/08/21 15:09:31| Target number of buckets: 393
2008/08/21 15:09:31| Using 8192 Store buckets
2008/08/21 15:09:31| Max M

Re: [squid-users] Cache URL with "?"

2008-08-21 Thread murray lotnicz
> On tis, 2008-07-22 at 23:37 +1200, Amos Jeffries wrote:
>
> Only one issue has come up: under sibling relationships, squid's method of
> relaying requests to siblings is slightly bad and may break the proper
> expected behavior. All other setups should be fine.
>
>
>> Also seen in parent relations.
>>
>> Regards Henrik

Hi,

What is the nature of the problems with dynamic caching and
sibling/parent relationships? I tried googling for more info but
couldn't find anything.

I'm using 4 squid nodes [2.7STABLE3] as accelerators and encountered
some odd behavior recently after enabling dynamic caching. There are
sibling relationships among the 4 nodes using ICP and cache digests.
Under medium load I've been able to successfully cache and serve
dynamic URLs:

  GET http://contentreal.foobar.com/clown_widget.swf?version=20080820
  STORE_OK  IN_MEMORY SWAPOUT_NONE PING_NONE
  CACHABLE,VALIDATED

But under heavier load [20-50mbit per node] I get multiple objects [20-30] like:

  GET http://contentreal.foobar.com/clown_widget.swf?version=20080820
  STORE_PENDING NOT_IN_MEMORY SWAPOUT_NONE PING_DONE
  RELEASE_REQUEST,DISPATCHED,PRIVATE,VALIDATED

  GET http://contentreal.foobar.com/clown_widget.swf?version=20080820
  STORE_OK  NOT_IN_MEMORY SWAPOUT_NONE PING_DONE
  RELEASE_REQUEST,DISPATCHED,PRIVATE,VALIDATED,ABORTED

and requests for this URL timeout after about 25-50% of the download is done.

Headers are being set via apache/mod_expires:

  [EMAIL PROTECTED] ~]# curl -I
http://content5.foobar.com/clown_widget.swf?version=20080820
  HTTP/1.0 200 OK
  Date: Thu, 21 Aug 2008 09:53:50 GMT
  Server: Apache
  Last-Modified: Fri, 08 Aug 2008 06:07:10 GMT
  ETag: "35b0f"
  Accept-Ranges: bytes
  Content-Length: 219919
  Cache-Control: max-age=86400
  Expires: Fri, 22 Aug 2008 09:53:50 GMT
  Content-Type: application/x-shockwave-flash
  X-Cache: MISS from squid03.foobar.com
  X-Cache-Lookup: MISS from squid03.foobar.com:80
  Via: 1.1 squid03.foobar.com:80 (squid)
  Connection: close

I followed http://wiki.squid-cache.org/ConfigExamples/DynamicContent,
acl QUERY is disabled and the swf is < maximum_object_size_in_memory.
Any assistance would be greatly appreciated.

Thanks to all who maintain squid, this is great and extremely useful software.

murray

# squid.conf #
http_port 80 defaultsite=contentreal.foobar.com
cache_peer 110.0.2.99 parent 80 0 no-query no-digest originserver
cache_peer 110.0.2.76 sibling 80 3130
cache_peer 110.0.2.77 sibling 80 3130 proxy-only
cache_peer 110.0.2.78 sibling 80 3130 proxy-only
digest_generation on
#hierarchy_stoplist cgi-bin ?
#acl QUERY urlpath_regex cgi-bin \?
#cache deny QUERY
shutdown_lifetime 10 seconds
strip_query_terms off
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
cache_mem 5200 MB
maximum_object_size_in_memory 500 KB
ipcache_size 4096
fqdncache_size 4096
cache_dir null /tmp
logformat squid  %tl.%03tu %6tr %>a %Ss/%03Hs %

Re: [squid-users] Can Squid hide all 404s from clients?

2008-08-21 Thread Leonardo Rodrigues Magalhães



Benton Roberts wrote:

Dear Squid-masters,

I would like to configure Squid so that it always serves the "latest
available" version of any given URL, even if the URL is no longer available
at the original server. In this way, Squid's clients would never receive an
error for a given URL, as long as that URL had been available to Squid at
some time in the past. So Squid should check for the latest version of any
URL through its normal cache behavior -- I just want it to treat HTTP
response codes other than 2xx (OK) as a special case, and "hide" them from
the requesting client, and serve from cache instead.
  


   are you sure you wanna do this kind of configuration ??? have you
ever imagined that you can be feeding your users with really
invalid/very outdated pages ??? If the site admin took the files off of
the webserver, maybe there's some reason for that 


   I don't know if this can be done, as it would be a very aggressive
HTTP standards violation  anyway, I would recommend you NOT to do this.


--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
[EMAIL PROTECTED]
My SPAMTRAP, do not email it






[squid-users] Can Squid hide all 404s from clients?

2008-08-21 Thread Benton Roberts
Dear Squid-masters,

I would like to configure Squid so that it always serves the "latest
available" version of any given URL, even if the URL is no longer available
at the original server. In this way, Squid's clients would never receive an
error for a given URL, as long as that URL had been available to Squid at
some time in the past. So Squid should check for the latest version of any
URL through its normal cache behavior -- I just want it to treat HTTP
response codes other than 2xx (OK) as a special case, and "hide" them from
the requesting client, and serve from cache instead.

Is this possible? If so, would anyone care to point me to the relevant
configuration variables, and perhaps suggest values for them?

Thanks in advance,
- benton




Re: [squid-users] how to force to refresh when at offline_mode is on

2008-08-21 Thread Chris Robertson

Vikram Goyal wrote:

Hello,

I am new to squid. Could you advise exactly which settings should be
tweaked and what values would be appropriate?

I am using squid-3.0.STABLE7-1.fc9.i386

Thanks!
  


For a start, see
http://wiki.squid-cache.org/KnowledgeBase/PerformanceAnalysis
and
http://wiki.squid-cache.org/SquidFaq/SquidProfiling

The answer to the question you are asking really depends on what you are 
using Squid for and what kind of a workload you expect.
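As a purely illustrative starting point (the values below are invented, not
a recommendation for your workload), the main knobs involved are:

  cache_mem 256 MB
  cache_dir ufs /var/spool/squid 10000 16 256
  maximum_object_size 50 MB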


Chris



[squid-users] Blocking ".com" blocks "gmail.com"

2008-08-21 Thread William Knob
Hi,

   I'm using SquidGuard to manage the users, groups, and ACLs of my proxy
server, but I have a problem: when I put ".com" in my expression list of
blocks, "gmail.com" is blocked too...

  My expression list file is
"(.*)?(\.com(\?.*)?$|\.zip(\?.*)?$|message|messages)(.*)?"


  Does anyone know what the problem is?


Regards,


Re: [squid-users] Problem when adding another hard disk.. squid cannot create any files or dir on another hard disk

2008-08-21 Thread John Doe
> My problem is I don't know how to mount /mnt/cache2 as the squid user
> If i can mount /mnt/cache2 as squid, the problem should be OK
> My cache_effective_user is squid and cache_effective_group also squid

Once /mnt/cache2 is mounted, do

  chown squid:squid /mnt/cache2
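and then re-create the swap directories and start squid again, i.e. the
same steps from your own list:

  squid -z
  service squid start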

JD


  



[squid-users] Problem when adding another hard disk.. squid cannot create any files or dir on another hard disk

2008-08-21 Thread Mr Crack
I added another disk for the cache;
here is what I have done:

1. mkfs.ext3 /dev/sda1
2. add the following line to /etc/fstab
   /dev/sda1   /mnt/cache2   ext3   defaults   1 2
3. add the following to /etc/squid/squid.conf
   cache_dir ufs /mnt/cache2  27 16 256
4. squid -z
   #dir /mnt/cache2
Lost+found
No files are created
5. service squid start
init_cache_dir /mnt/cache2... Starting squid: /etc/init.d/squid: line
53:  2766 Aborted $SQUID $SQUID_OPTS
>>/var/log/squid/squid.out 2>&1
   [FAILED]
6. To trace what happened, I tailed /var/log/squid/squid.out.
Here is squid.out error message

FATAL: Failed to make swap directory /mnt/cache2: (13) Permission denied
Squid Cache (Version 2.6.STABLE6): Terminated abnormally.
CPU Usage: 0.002 seconds = 0.000 user + 0.002 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 0
FATAL: cache_dir /mnt/cache2: (13) Permission denied
Squid Cache (Version 2.6.STABLE6): Terminated abnormally.
CPU Usage: 0.006 seconds = 0.001 user + 0.005 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 0

I think squid cannot create any files in /mnt/cache2.
It is a mount problem.

My problem is I don't know how to mount /mnt/cache2 as the squid user.
If I can mount /mnt/cache2 as squid, the problem should be solved.
My cache_effective_user is squid and cache_effective_group is also squid.



Mr. Crack007


Re: [squid-users] if this is posted somewhere.. please tell me where to go... AD groups

2008-08-21 Thread nairb rotsak
Sorry Henrik, I think I just sent this reply back to you, not to the whole
group.

Great.. thanks,

Just to clarify, to use wbinfo_group.pl, I need to:
1.  Add Domain Local group to Active Directory called Internet-Allowed (name 
not important)
2.  Add 'external_acl_type ADS %LOGIN /usr/lib/squid/wbinfo_group.pl' to 
squid.conf
3.  Add 'acl Internet-Allowed external ADS Internet-Allowed' to squid.conf
4.  Add 'http_access allow Internet-Allowed all'

That is what I am able to piece together from Google.. 
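Put together, my (untested) guess at the squid.conf fragment is:

  external_acl_type ADS %LOGIN /usr/lib/squid/wbinfo_group.pl
  acl Internet-Allowed external ADS Internet-Allowed
  http_access allow Internet-Allowed
  http_access deny all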

Two questions. In doing this before for other clients, I have used
DansGuardian and used filter groups. This customer doesn't want to filter;
they just want to allow or deny access. I was pretty sure Squid could do
this, and that is why I am trying to figure out the wbinfo_group stuff. In
the past, I have messed up where to put the acls (in which order) and the
http_access lines (again, in which order). Any advice on where these would
go (or where they HAVE to go)?

Second question: does this mean anyone not in this group will not have
Internet, or do I have to do a deny acl/http_access combo?

Thanks for clearing this up... 




- Original Message 
From: Henrik Nordstrom <[EMAIL PROTECTED]>
To: nairb rotsak <[EMAIL PROTECTED]>
Cc: squid-users@squid-cache.org
Sent: Wednesday, August 20, 2008 5:44:48 PM
Subject: Re: [squid-users] if this is posted somewhere.. please tell me where 
to go... AD groups

On ons, 2008-08-20 at 08:39 -0700, nairb rotsak wrote:
> The 2nd one is what I pretty much used to get this far... 
> 
> I just don't know how to tie it all together.. and I have looked at the 
> wbinfo_group.pl.. but not sure if I need to go that far??

far?

wbinfo_group.pl is the easiest way to get group lookups if you have
already done NTLM via Samba..

Regards
Henrik



  


Re: [squid-users] how to force to refresh when at offline_mode is on

2008-08-21 Thread Mr Crack
I don't understand what you want.
Please tell me exactly what you want...


On 8/21/08, Vikram Goyal <[EMAIL PROTECTED]> wrote:
> On Tue, Aug 19, 2008 at 03:04:48PM +0200, Matus UHLAR - fantomas wrote:
>> On 18.08.08 23:10, Mr Crack wrote:
>> > Subject: [squid-users] how to force to refresh when at offline_mode is
>> > on
>> >
>> > when offline_mode is off, it becomes slow
>> > when offline_mode is on, old webpages comes out from cache
>> > how to fix this problem
>> > as for me offline_mode is also very essential because our internet
>> > connection is very slow
>>
>> you can't refresh in offline mode because offline mode means that no pages
>> will be fetched at all.
>>
>> You probably need to tune your cache to effective cache everything
>> possible
>> - use large cache, use newest squid version and others
>
> Hello,
>
> I am new to squid. Could you advise exactly which settings should be
> tweaked and what values would be appropriate?
>
> I am using squid-3.0.STABLE7-1.fc9.i386
>
> Thanks!
> --
>


Re: [squid-users] how to force to refresh when at offline_mode is on

2008-08-21 Thread Vikram Goyal
On Tue, Aug 19, 2008 at 03:04:48PM +0200, Matus UHLAR - fantomas wrote:
> On 18.08.08 23:10, Mr Crack wrote:
> > Subject: [squid-users] how to force to refresh when at offline_mode is on
> > 
> > when offline_mode is off, it becomes slow
> > when offline_mode is on, old webpages comes out from cache
> > how to fix this problem
> > as for me offline_mode is also very essential because our internet
> > connection is very slow
> 
> you can't refresh in offline mode because offline mode means that no pages
> will be fetched at all.
> 
> You probably need to tune your cache to effective cache everything possible
> - use large cache, use newest squid version and others

Hello,

I am new to squid. Could you advise exactly which settings should be
tweaked and what values would be appropriate?

I am using squid-3.0.STABLE7-1.fc9.i386

Thanks!
-- 


Re: [squid-users] Advantages of Squid

2008-08-21 Thread Amos Jeffries

bijayant kumar wrote:

Hello to lists,

I was meeting a consultant to an ISP today who plans to implement a caching 
solution for his network. I guess they are already impressed with Blue Coats 
which undoubtedly seems to be a very good product which I have not used or seen 
physically.

Being a staunch believer in Open Source (though not very knowledgeable
technically), I am convinced that Squid would be nowhere less than
BlueCoat or any other commercial product. I would seek help from friends
who have the knowledge to share the features of Squid in terms of cache
management and why Squid may be better than other caching solutions
available in the market. It may be noted that the client is not interested
in discussing financial advantage but would be keener to learn about the
technical advantages.


To play devil's advocate: BlueCoat have had M$ to throw at pure grunt and
streamlining, and Squid still lacks a bit of that. It makes up for it with
a rich set of control abilities, not to mention your ability to hack a fix
into the code if anything particularly troublesome gets found.




Any pointers would be highly appreciated. I would love to see squid cluster 
deployed at a site handling around 10Gbps of traffic.



For Big Users (TM), the most well-known high-bandwidth users of
Squid are ...


Yahoo! ...
  http://www.mnot.net/blog/2007/04/29/squid
(Mark answers most of your questions right there in the blog.)

and Wikipedia ...
  http://www.nedworks.org/~mark/presentations/hd2006/
  http://meta.wikimedia.org/wiki/Hardware

who even have their cluster configuration out in public:
  http://www.scribd.com/doc/43868/Wikipedia-site-internals-workbook-2007


Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE8


[squid-users] Advantages of Squid

2008-08-21 Thread bijayant kumar
Hello to lists,

I was meeting a consultant to an ISP today who plans to implement a caching 
solution for his network. I guess they are already impressed with Blue Coats 
which undoubtedly seems to be a very good product which I have not used or seen 
physically.

Being a staunch believer in Open Source (though not very knowledgeable
technically), I am convinced that Squid would be nowhere less than
BlueCoat or any other commercial product. I would seek help from friends
who have the knowledge to share the features of Squid in terms of cache
management and why Squid may be better than other caching solutions
available in the market. It may be noted that the client is not interested
in discussing financial advantage but would be keener to learn about the
technical advantages.

Any pointers would be highly appreciated. I would love to see squid cluster 
deployed at a site handling around 10Gbps of traffic.

Thanks & Regards,
Bijayant Kumar



Re: [squid-users] Squid is aborting and restarting its child process very often

2008-08-21 Thread Amos Jeffries

Henrik Nordstrom wrote:

On ons, 2008-08-20 at 14:00 +0800, Adrian Chadd wrote:

Run the latest Squid-3.0 ; PRE5 is old and buggy.

Shout at the debian distribution for shipping such an old version.


Not only old & buggy, but also not a stable release for production use, only
a pre-release for early adopter testing.


As I explained to people at the meeting this week, Debian and Ubuntu 
policy is to not alter their own stable release short of serious 
security bugs.


http://www.debian.org/security/faq#oldversion
https://wiki.ubuntu.com/StableReleaseUpdates#When

The high-priority security bugs found in squid since PRE5 account for
the five patch releases they have made on the package. It's just missing
countless cleanups, non-security bug fixes, and features.


But the 'testing' (their RC distribution) release squid is very current, 
and should work easily.


Amos
--
Please use Squid 2.7.STABLE4 or 3.0.STABLE8


Re: [squid-users] I enable offline_mode, after 3 days... access denied error occurs

2008-08-21 Thread Matus UHLAR - fantomas
On 21.08.08 10:37, Mr Crack wrote:
> If offline_mode is disable, connection is slow but Ok
> To speed up connection speed, I enable offline_mode and connection is fast.
> But after 3 days, the following error occurs when accessing some sites...

I guess those pages have expired and squid doesn't provide them anymore.

-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
2B|!2B, that's a question!


Re: [squid-users] squid/ftps

2008-08-21 Thread Matus UHLAR - fantomas
On 21.08.08 00:30, [EMAIL PROTECTED] wrote:
> i know that ftps is not "usual" , by the way if someone have experience
> about proxying ftps with squid or can explain why we can't do it , thx for
> your answers

Proxying FTPS, like any SSL-encrypted protocol, has not much use. You can
only control which servers the FTPS clients connect to.

For HTTPS, you can decipher the connection and in fact do a
man-in-the-middle attack by configuring squid so that it behaves as the
destination server; but since you probably will not have its certificates,
the client will (probably) report that.

For FTPS, there's no way, because:
- squid does not support FTP on the server side (you can only talk HTTP to
squid)
- squid does not support FTPS on the client side (I think)

You can configure the client to abuse squid by using CONNECT requests to
FTPS ports, but the only thing you achieve is controlling, on squid's side,
where (IP:port) the clients may connect to...
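i.e. something like this in squid.conf, assuming implicit-mode FTPS on port
990 (the SSL_ports acl already exists in the default config):

  acl SSL_ports port 990   # allow CONNECT to the FTPS control port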
-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Silvester Stallone: Father of the RISC concept.


Re: [squid-users] Reverse Proxy

2008-08-21 Thread Amos Jeffries

Mario Almeida wrote:

Hi,
After adding the below option

always_direct allow all



Doesn't that prevent the peer being used?

An error there is kind of expected if the peer is not listening on port 
80 for web traffic.


Amos


I get a different error

The following error was encountered:

* Connection to 172.27.1.10 Failed 


The system returned:

(111) Connection refused

The remote host or network may be down. Please try the request again.

Your cache administrator is root.

Regards,
Mario

-Original Message-
From: Amos Jeffries [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, August 20, 2008 10:14 AM

To: Chris Robertson
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Reverse Proxy

Chris Robertson wrote:

Mario Almeida wrote:

Hi All,

Below is the setting I have done to test a reverse proxy

http_port 3128 accel defaultsite=xyz.example.com vhost

cache_peer 172.27.1.10 parent 8080 0 no-query originserver name=server1
acl server1_acl dstdomain www.xyz.example.com xyz.example.com
cache_peer_access server1 allow server1_acl
cache_peer_access server1 deny all

But could not get it done
Bellow is the error message what I get


ERROR
The requested URL could not be retrieved

While trying to retrieve the URL: http://xyz.example.com/

The following error was encountered:

* Unable to forward this request at this time.
This request could not be forwarded to the origin server or to any parent
caches. The most likely cause for this error is that:

* The cache administrator does not allow this cache to make direct
connections to origin servers, and
  

This seems unlikely given the cache_peer_access line, so...

* All configured parent caches are currently unreachable.   
This is far more likely the issue at hand.  Check your cache.log for any 
clues.  Verify you have the right IP and port for your parent server, 
and that there are no firewall rules preventing access.  Try using wget 
or Lynx on your Squid server to grab a page off the origin server.



Your cache administrator is root.



Regards,
Remy
  

Chris


There is also a weird side-case rarely seen with dstdomain that needs
checking here.


Mario:
  does it work if you change the ACL line to:
   acl server1_acl dstdomain .xyz.example.com

If not, check your config for lines mentioning always_direct or 
never_direct, and the network linkage between test proxy and web server 
as mentioned by Chris.
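For instance, from the proxy box (a hypothetical test against the peer
address from your config):

  wget -O /dev/null --header='Host: xyz.example.com' http://172.27.1.10:8080/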


Amos


[squid-users] Generating cache file hash - continued

2008-08-21 Thread John =)

Further to my request yesterday... I would prefer to be able to just generate 
the md5 hash manually, rather than writing code to use storeKeyPublic() in 
src/store_key_md5.c. However, I must not be interpreting that function 
correctly as my hashes do not match the hashes produced in store.log:

For example, for 'GET http://www.squid-cache.org/Images/img8.gif' - putting
001http://www.squid-cache.org/Images/img8.gif into the hash generator gives
d5bf8db92c34e66592faa82454b5d867, but store.log shows:
F506597929DF2C9F8E51ED12E77E6548

Is there a simple way to produce the correct hash without touching the 
sourcecode? I am very new to this.


John Redford.


Re: [squid-users] external_acl children...

2008-08-21 Thread John Doe
> Generally one should use the concurrency= option rather than
> children= when making your own helper. You only need
> a lot of children if your helper may block for extended periods of time,
> for example performing DNS lookups..

I just want to be sure that, in case of a traffic spike, there are no 
connections denied.
For now, I only use children=, but I guess I will have to learn to use 
pthread...   ^_^

> > Also, what are the "negative lookups" of negative_ttl of external_acl_type?
> > First I thought they were the ERR results, but apparently not.
> It is.

Indeed, I forgot that I was always replying OK and using user= to say allowed 
or blocked, my bad...

> Not sure ttl=0 really is "no cache". It may well be cached for 0 seconds
> truncated downwards (integer math using whole seconds).. The point of
> the external acl interface is to get cacheability and request merging.
> If you don't want these then use the url rewriter interface.

If it is due to caching it is ok.
I was just trying to benchmark the helper impact on the reqs/s...
I might even put the cache up to 5 minutes.
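i.e. something like this in squid.conf (the helper path is hypothetical):

  external_acl_type myacl children=5 concurrency=20 ttl=300 \
      negative_ttl=300 %LOGIN /usr/local/bin/my_helper.pl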

Thx,
JD