Re: [squid-users] Problems POST-Method on Squid 3

2008-11-19 Thread hdkutz
On Wed, Nov 19, 2008 at 12:31:30PM +1300, Amos Jeffries wrote:
> > Hello List,
> > I'm having problems with my squid 3 on CentOS.
> > If I try to use POST-Method (e.g. Webmail, Bugzilla) the proxy returns
> >
> > "Read Timeout"
> > No Error
> 
> This error indicates a network issue below Squid. The remote server has
> been sent the request and has accepted it, but has not sent back any
> reply within 15 minutes.
> 
> In my experience this has always been a PMTU error somewhere on the
> Internet between Squid and the server, combined with someone blocking ICMP.
> 
> Amos
Thanx Amos.
That did the trick.
I had to disable automatic PMTU discovery by setting
net.ipv4.ip_no_pmtu_disc=1
in my /etc/sysctl.conf.
After that it works like a charm.
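
For reference, the complete change is just (a minimal sketch, assuming a
standard Linux sysctl setup):

  # /etc/sysctl.conf -- disable kernel path-MTU discovery
  net.ipv4.ip_no_pmtu_disc = 1

applied without a reboot via:

  sysctl -p /etc/sysctl.conf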

Cheers,
ku
-- 
Darth Vader:
Your powers are weak, old man.
Ben (Obi-Wan) Kenobi:
You can't win, Darth. If you strike me down, I shall
become more powerful than you could possibly
imagine.


[squid-users] Squid 3 and SNMP

2008-11-19 Thread Phibee Network Operation Center

Hi

I am searching for a way to use SNMP with Squid 3.0.

If possible, I want to get:
   number of users connected (I use NTLM)
   number of hits/s
and other figures, graphed with Nagios/Centreon.

Does anyone know the process?

thanks for your help
Jerome



Re: [squid-users] Squid 3 and SNMP

2008-11-19 Thread Amos Jeffries

Phibee Network Operation Center wrote:

Hi

I am searching for a way to use SNMP with Squid 3.0.

If possible, I want to get:
   number of users connected (I use NTLM)
   number of hits/s
and other figures, graphed with Nagios/Centreon.

Does anyone know the process?

thanks for your help
Jerome



Hmm, auth doesn't have any stats.

The Squid MIB currently contains tables of the Squid server stats, 
configuration settings, performance counters (totals and averages), 
network conditions, and meshing hierarchy structure (peer cache details).
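
A minimal sketch of enabling the agent in squid.conf (Squid must be
built with --enable-snmp; the community string, ACL name and
monitoring-host address below are illustrative):

  snmp_port 3401
  acl snmppublic snmp_community public
  acl nagios src 192.0.2.10
  snmp_access allow snmppublic nagios
  snmp_access deny all

Then test from the monitoring host with something like:

  snmpwalk -v 2c -c public proxy.example.com:3401 .1.3.6.1.4.1.3495.1

(.1.3.6.1.4.1.3495.1 being the Squid enterprise OID).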




Amos
--
Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.2


[squid-users] Accessing a transparent cache on localhost

2008-11-19 Thread Jonathan Gazeley

Hi,

I'm new to Squid. I've successfully set up a transparent cache on a 
server which is also the gateway/firewall/NAT for a small LAN. All the 
clients on my LAN use the cache properly. However, the server running 
the cache doesn't use its own cache. I've inserted what I thought were 
the correct rules into my iptables config:


-A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128
-A PREROUTING -s 127.0.0.1/32 -p tcp --dport 80 -j REDIRECT --to-port 3128
-A PREROUTING -s 192.168.0.1/32 -p tcp --dport 80 -j REDIRECT --to-port 3128
-A PREROUTING -s x.x.x.x/32 -p tcp --dport 80 -j REDIRECT --to-port 3128 
(external public IP)


where eth0 is the LAN-facing interface.

My Squid config allows proxying from localhost and localnet:

http_access allow localhost
http_access allow localnet
http_access deny all

Therefore I think I have not set up my iptables quite right. Can anyone 
confirm if this is the right way to go about catching HTTP requests from 
localhost?


Many thanks,
Jonathan


Jonathan Gazeley
Systems Support Specialist
ResNet | Wireless & VPN Team
Information Services
University of Bristol




Re: [squid-users] customize logformat to see header

2008-11-19 Thread zulkarnain
Hi Chris. I followed your description and it works now. Thank you.

Regards,
Zul


--- On Wed, 11/19/08, Chris Robertson <[EMAIL PROTECTED]> wrote:

> From: Chris Robertson <[EMAIL PROTECTED]>
> Subject: Re: [squid-users] customize logformat to see header
> To: squid-users@squid-cache.org
> Date: Wednesday, November 19, 2008, 6:08 AM
> zulkarnain wrote:
> > Hi,
> > 
> > I'm trying to modify logformat to display the headers of the
> > following websites. My purpose is to be able to use the correct
> > pattern for refresh_pattern. Here are my rules:
> > 
> > acl googlevideo url_regex -i googlevideo\.com
> > acl kaspersky url_regex -i kaspersky\.com
> > acl kaspersky-labs url_regex -i
> kaspersky-labs\.com
> > acl metacafe url_regex -i metacafe\.com
> > acl apple url_regex -i phobos\.apple\.com
> > acl pornhub url_regex -i pornhub\.com
> >   
> 
> Better to use dstdomain.
> 
> acl googlevideo dstdomain .googlevideo.com
> acl kapersky dstdomain .kapersky.com
> ...
> 
> > logformat squid %ts.%03tu %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt
> > logformat analisa %{%H:%M:%S}tl %-13>a %-6 %03Hs %-17Ss %-24mt %-6tr %ru *REQ* *C:%{Cache-Control}>h *P:%"{Pragma}>h *LMS:%"{Last-Modified}>h *REP* *C:%"{Cache-Control}<h *LMS:%"{Last-Modified}<h
> > access_log /var/log/squid/analisa.log analisa
> googlevideo kaspersky kaspersky-labs metacafe apple pornhub
> >   
> 
> According to
> http://www.squid-cache.org/Doc/config/access_log/*,  the
> ACLs are ANDed together, just like with http_access lines. 
> The only way something is going to be logged with this
> format is if the domain matches all of your url_regex lines.
>  
> http://gooGLevideo.compornhub.COMandKAPersky-labs.comMetacafe.com-anythinggoeshere-phobos.apple.com...
> 
> 
> You'll need one access_log line for each of the ACLs.
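> 
> For example, one line per ACL (log file names here are illustrative):
> 
> access_log /var/log/squid/googlevideo.log analisa googlevideo
> access_log /var/log/squid/kaspersky.log analisa kaspersky
> ...and so on for each remaining ACL.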
> 
> > access_log /var/log/squid/access.log squid
> > 
> > The rules above did not work. The file analisa.log is
> empty even after I accessed several websites above. Did I
> miss something? Any help would be greatly appreciated.
> > 
> > Rgds,
> > Zul
> >   
> 
> Chris
> 
> *"Will log to the specified file ... those entries
> which match ALL the acl's specified (which must be
> defined in acl clauses). If no acl is specified, all
> requests will be logged to this file."


  


RE: [squid-users] NTLM auth popup boxes && Solaris 8 tuning for upgrade into 2.7.4

2008-11-19 Thread vincent.blondel
 
>>>> Before digging deep into OS settings check your squid.conf auth, acl
>>>> and http_access settings.
>>> 
>>> okay, let's go. Concerning the auth part of squid.conf I would say
>>> nothing special .. below is the ntlm config part:
>>> 
>>> auth_param ntlm program /usr/local/bin/ntlm_auth
>>> --helper-protocol=squid-2.5-ntlmssp
>>> auth_param ntlm children 128
>>> auth_param ntlm keep_alive on
>>> acl ntlmauth proxy_auth REQUIRED
>>> ...
>>> http_access allow ntlmauth all
>>> http_reply_access allow all
>>> http_access deny all
>>> deny_info TCP_RESET all
>>> 
>>
>>Hmm, what those lines do is:
>>  - test the request for auth details (allow ntlmauth),
>>  - if correct details are found, allow (allow ntlmauth all),
>>  - if none are found, or bad details, ignore them (allow ntlmauth all),
>>  - but send a RESET on the TCP link (deny all + TCP_RESET).
>
>something I tried last week to see if it could solve my problem.
>
>>
>>The clients will never get any correction when auth details are
>>invalid. They will just get a completely new session; the browser will
>>try to resend the same broken details until it gives up and re-asks the
>>user.
>>
>>
>>The 'all' silencing hack is intended for situations where auth may be
>>the preferred method of access, but an alternative exists and can be
>>taken easily when it fails. It prevents the browser being notified
>>when credentials are wrong.
>>
>>Does it work if you make that line just: http_access allow ntlmauth
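>>
>>i.e., a sketch of the same rule block without the silencing hack:
>>
>>  http_access allow ntlmauth
>>  http_access deny all
>>  deny_info TCP_RESET all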
>
>indeed, that also seems to work: with no valid credentials the user gets
>'cache access denied', otherwise the request goes to the internet.

As announced in my previous mails, I migrated all my proxy servers
last night. This ran fine and the packages are running well.
I updated the ntlm access rule by removing 'all' at the end of the line,
but this did not change anything, except that the popup happened at most
37 times on one of the proxies. I got it more than 100 times a day before.

So, can I still try something else?

>
>does removing 'all' change the internal squid behaviour ??
>
>
>>>> Check the TTL settings on your auth config. If it's not long enough
>>>> squid will re-auth between request and reply.
>>> 
>>> I'm not really sure which setting you are speaking about ??
>>> 
>>
>>auth_param ntlm ttl
>
>do you advise using it? I do not find any reference to it on the
>squid configuration guide website.
>

you spoke about a ttl parameter .. do you advise using it ??

>





[squid-users] Different Auth methods depending who

2008-11-19 Thread Luis Daniel Lucio Quiroz
Hi Squids,

I would like to know if it is possible to tell squid to allow certain 
auth methods depending on the browser.  Our security policy is to use 
digest, but the fact is that some clients, such as pidgin, are not 
digest-aware.  So I wonder if, when we detect the pidgin ID string, we 
could let it use basic auth, just and only for pidgin; all other 
browsers must use digest.


Any idea,

Regards,

LD


Re: [squid-users] Different Auth methods depending who

2008-11-19 Thread Malte Schröder
On Wed, 19 Nov 2008 12:48:21 -0600
Luis Daniel Lucio Quiroz <[EMAIL PROTECTED]> wrote:

> HI Squids,
> 
> I would like to know if it is possible to tell squid to allow certain 
> auth methods depending on the browser.  Our security policy is to use 
> digest, but the fact is that some clients, such as pidgin, are not 
> digest-aware.  So I wonder if, when we detect the pidgin ID string, we 
> could let it use basic auth, just and only for pidgin; all other 
> browsers must use digest.
> 
> 
> Any idea,
> 
> Regards,
> 
> LD
> 

I would like such a feature, too. But my motivation behind that is that
there are some clients out there which try to do NTLM over Negotiate
(WMP 11 ...) and fail. I am thinking of something that could work with
(request-)ACLs.

Regards
Malte




RE: [squid-users] NTLM auth popup boxes && Solaris 8 tuning for upgrade into 2.7.4

2008-11-19 Thread Henrik Nordstrom
On ons, 2008-11-19 at 19:39 +0100, [EMAIL PROTECTED] wrote:

> >>auth_param ntlm ttl
> >
> >do you advice using it because I do not find any reference on it on
> squid configuration guide website.
> >
> 
> you spoke about ttl parameter .. do you advice using it ??

Not sure who spoke about an auth_param ntlm ttl parameter, but there is
no such parameter.

The ntlm scheme only has three parameters

  program

  children

  keep_alive

where the first (program) specifies the helper to use, the second
(children) needs to be tuned to at least fit your load or there will be
issues with rejected access or sporadic authentication prompts, and the
third is a minor optimization.
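
In squid.conf form that is just, e.g. (taking the values already quoted
earlier in this thread):

  auth_param ntlm program /usr/local/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
  auth_param ntlm children 128
  auth_param ntlm keep_alive on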

Regards
Henrik




Re: [squid-users] Different Auth methods depending who

2008-11-19 Thread Henrik Nordstrom
On ons, 2008-11-19 at 12:48 -0600, Luis Daniel Lucio Quiroz wrote:

> I would like to know if it is possible to tell squid to allow certain 
> auth methods depending on the browser.  Our security policy is to use 
> digest, but the fact is that some clients, such as pidgin, are not 
> digest-aware.  So I wonder if, when we detect the pidgin ID string, we 
> could let it use basic auth, just and only for pidgin; all other 
> browsers must use digest.

Not today. But it's a relatively often-requested feature and we would
happily accept any contributions implementing such controls.

Regards
Henrik




RE: [squid-users] NTLM auth popup boxes && Solaris 8 tuning for upgrade into 2.7.4

2008-11-19 Thread vincent.blondel
 
>> >>auth_param ntlm ttl
>> >
>> >do you advise using it? I do not find any reference to it on the
>> >squid configuration guide website.
>> >
>> 
>> you spoke about a ttl parameter .. do you advise using it ??
>
>Not sure who spoke about an auth_param ntlm ttl parameter, but there is
>no such parameter.
>
>The ntlm scheme only has three parameters
>
>  program
>
>  children
>
>  keep_alive
>
>where the first (program) specifies the helper to use, the second
>(children) needs to be tuned to at least fit your load or there will be
>issues with rejected access or sporadic authentication prompts, and the
>third is a minor optimization.
>

okay, but I already have 128 ntlm_auth processes running .. is this
enough for a load of 250 req/sec ??

on the other hand, and this is also the point of this conversation, it
seems this popup box does not always come with load issues but can
happen for other reasons I am totally unaware of .. so how do I really
troubleshoot this ?

>Regards
>Henrik




Re: [squid-users] Accessing a transparent cache on localhost

2008-11-19 Thread Chris Robertson

Jonathan Gazeley wrote:

Hi,

I'm new to Squid. I've successfully set up a transparent cache on a 
server which is also the gateway/firewall/NAT for a small LAN. All the 
clients on my LAN use the cache properly. However, the server running 
the cache doesn't use its own cache. I've inserted what I thought were 
the correct rules into my iptables config:


-A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128
-A PREROUTING -s 127.0.0.1/32 -p tcp --dport 80 -j REDIRECT --to-port 3128
-A PREROUTING -s 192.168.0.1/32 -p tcp --dport 80 -j REDIRECT --to-port 3128
-A PREROUTING -s x.x.x.x/32 -p tcp --dport 80 -j REDIRECT --to-port 3128 (external public IP)


I think it would need to be part of the OUTPUT chain.  But you would 
have to do some sort of packet marking to avoid matching packets from 
Squid to the internet (lest you create a forwarding loop).
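
A sketch of that approach (assuming Squid runs as the "squid" user; the
owner match is only valid in the OUTPUT chain):

  # let Squid's own outbound fetches pass untouched, to avoid the loop
  iptables -t nat -A OUTPUT -p tcp --dport 80 -m owner --uid-owner squid -j ACCEPT
  # redirect everything else this host generates into the proxy
  iptables -t nat -A OUTPUT -p tcp --dport 80 -j REDIRECT --to-port 3128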


It's probably far easier to set the environment variable "http_proxy" 
(e.g. "export http_proxy=http://localhost:3128").  Many utilities (yum, 
apt, wget, etc.) honor this.




where eth0 is the LAN-facing interface.

My Squid config allows proxying from localhost and localnet:

http_access allow localhost
http_access allow localnet
http_access deny all

Therefore I think I have not set up my iptables quite right. Can 
anyone confirm if this is the right way to go about catching HTTP 
requests from localhost?


Many thanks,
Jonathan


Jonathan Gazeley
Systems Support Specialist
ResNet | Wireless & VPN Team
Information Services
University of Bristol




Chris



RE: [squid-users] NTLM auth popup boxes && Solaris 8 tuning for upgrade into 2.7.4

2008-11-19 Thread Henrik Nordstrom
On ons, 2008-11-19 at 20:29 +0100, [EMAIL PROTECTED] wrote:

> okay, but I already have 128 ntlm_auth processes running .. is this
> enough for a load of 250 req/sec ??

Can't say. Do you get any relevant warnings in cache.log? And what does
cachemgr say about the helper usage?
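
(For example, assuming squidclient is installed on the proxy and the 2.x
cachemgr page name:

  squidclient mgr:ntlmauthenticator

and compare how many helpers are busy versus idle.)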

> on the other hand, and this is also the point of this conversation, it
> seems this popup box does not always come with load issues but can
> happen for other reasons I am totally unaware of .. so how do I really
> troubleshoot this ?

Wireshark is a good tool for troubleshooting these issues, combined with
increased logging in Squid.

Regards
Henrik




[squid-users] ICP queries for 'dynamic' urls?

2008-11-19 Thread Steve Webb

Hello.

I'm caching dynamic content (urls with ? and & in them) and everything's 
working fine with one exception.


I'm seeing ICP queries for only static content and not dynamic content 
even though squid is actually caching dynamic content.


Q: Is there a setting somewhere to ask squid to also do ICP queries for 
dynamic content like there was with the no-cache directive to originally 
not cache dynamic content (aka cgi-bin and ? content)?


I'm using squid version 2.5 (I know, I should upgrade to 3.x, but I'm 
trying to stick with the same versions across the board and I don't have 
time to run my config through QA with 3.0 at this time.  Please don't tell 
me to upgrade.)


My cache_peer lines look like:

cache_peer 10.23.14.4   sibling 80  3130  proxy-only

This is for a reverse proxy setup.

Dataflow is:

Customer -> Internet -> Akamai -> LB -> squid -> LB -> apache -> LB -> storage

The "apache" layer does an image resize (which I want to cache) and the 
url is http://xxx/resize.php?w=xx&h=xx&...


The "storage" layer is just another group of apache servers that serve-up 
the raw files.


LB is a load-balancer.

- Steve

--
Steve Webb - Lead System Administrator for Pronto.com
Email: [EMAIL PROTECTED]  (Please send any work requests to: [EMAIL PROTECTED])
Cell: 303-564-4269, Office: 303-497-9367, YIM: scumola


[squid-users] I Need Help!

2008-11-19 Thread Leandro Lustosa
Hi!

Please! I need help.

I have a content analyser and I am using squid-2.4-stable14 to
authenticate users for access to remote desktop (terminal service).

The Squid Proxy provides information about users to the content
analyser this way:


"http://www.site.com default:://user" (That's the wrong way. It should
be like below:)

"http://www.site.com user" (that's the right way)


How can I disable this string: "default://" in my Squid?

I am using Squid only to provide credentials for the content analyser.
It's searching for users in Novell's edirectory.

Thanks for the attention,


Leandro Lustosa.


Re: [squid-users] ICP queries for 'dynamic' urls?

2008-11-19 Thread Chris Robertson

Steve Webb wrote:

Hello.

I'm caching dynamic content (urls with ? and & in them) and 
everything's working fine with one exception.


I'm seeing ICP queries for only static content and not dynamic content 
even though squid is actually caching dynamic content.


Q: Is there a setting somewhere to ask squid to also do ICP queries 
for dynamic content like there was with the no-cache directive to 
originally not cache dynamic content (aka cgi-bin and ? content)?


http://www.squid-cache.org/Doc/config/hierarchy_stoplist/
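
For reference, the old stock line that link documents is:

  hierarchy_stoplist cgi-bin ?

Removing (or overriding) it should let Squid send ICP queries for such
URLs as well.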



I'm using squid version 2.5 (I know, I should upgrade to 3.x, but I'm 
trying to stick with the same versions across the board and I don't 
have time to run my config through QA with 3.0 at this time.  Please 
don't tell me to upgrade.)


My cache_peer lines look like:

cache_peer 10.23.14.4   sibling 80  3130  proxy-only

This is for a reverse proxy setup.

Dataflow is:

Customer -> Internet -> Akamai -> LB -> squid -> LB -> apache -> LB -> 
storage


The "apache" layer does an image resize (which I want to cache) and 
the url is http://xxx/resize.php?w=xx&h=xx&...


The "storage" layer is just another group of apache servers that 
serve-up the raw files.


LB is a load-balancer.

- Steve



Chris


Re: [squid-users] ICP queries for 'dynamic' urls?

2008-11-19 Thread Steve Webb

That did it.  Thanks!

- Steve

On Wed, 19 Nov 2008, Chris Robertson wrote:


Date: Wed, 19 Nov 2008 11:42:07 -0900
From: Chris Robertson <[EMAIL PROTECTED]>
To: squid-users@squid-cache.org
Subject: Re: [squid-users] ICP queries for 'dynamic' urls?

Steve Webb wrote:

Hello.

I'm caching dynamic content (urls with ? and & in them) and everything's 
working fine with one exception.


I'm seeing ICP queries for only static content and not dynamic content even 
though squid is actually caching dynamic content.


Q: Is there a setting somewhere to ask squid to also do ICP queries for 
dynamic content like there was with the no-cache directive to originally 
not cache dynamic content (aka cgi-bin and ? content)?


http://www.squid-cache.org/Doc/config/hierarchy_stoplist/



I'm using squid version 2.5 (I know, I should upgrade to 3.x, but I'm 
trying to stick with the same versions across the board and I don't have 
time to run my config through QA with 3.0 at this time.  Please don't tell 
me to upgrade.)


My cache_peer lines look like:

cache_peer 10.23.14.4   sibling 80  3130  proxy-only

This is for a reverse proxy setup.

Dataflow is:

Customer -> Internet -> Akamai -> LB -> squid -> LB -> apache -> LB -> 
storage


The "apache" layer does an image resize (which I want to cache) and the url 
is http://xxx/resize.php?w=xx&h=xx&...


The "storage" layer is just another group of apache servers that serve-up 
the raw files.


LB is a load-balancer.

- Steve



Chris



--
Steve Webb - Lead System Administrator for Pronto.com
Email: [EMAIL PROTECTED]  (Please send any work requests to: [EMAIL PROTECTED])
Cell: 303-564-4269, Office: 303-497-9367, YIM: scumola


RE: [squid-users] ICP queries for 'dynamic' urls?

2008-11-19 Thread Gregori Parker
I'm curious about this as well - so is the answer that siblings cannot
be queried for dynamic content and you need to use hierarchy_stoplist to
keep squid from trying?  Or is there a way to get ICP/HTCP to query
siblings with the entire URI, query arguments and all?  I have a very
similar setup and have been considering eliminating sibling
relationships altogether because of this...


-Original Message-
From: Steve Webb [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, November 19, 2008 12:54 PM
To: Chris Robertson
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] ICP queries for 'dynamic' urls?

That did it.  Thanks!

- Steve

On Wed, 19 Nov 2008, Chris Robertson wrote:

> Date: Wed, 19 Nov 2008 11:42:07 -0900
> From: Chris Robertson <[EMAIL PROTECTED]>
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] ICP queries for 'dynamic' urls?
> 
> Steve Webb wrote:
>> Hello.
>> 
>> I'm caching dynamic content (urls with ? and & in them) and
everything's 
>> working fine with one exception.
>> 
>> I'm seeing ICP queries for only static content and not dynamic
content even 
>> though squid is actually caching dynamic content.
>> 
>> Q: Is there a setting somewhere to ask squid to also do ICP queries
for 
>> dynamic content like there was with the no-cache directive to
originally 
>> not cache dynamic content (aka cgi-bin and ? content)?
>
> http://www.squid-cache.org/Doc/config/hierarchy_stoplist/
>
>> 
>> I'm using squid version 2.5 (I know, I should upgrade to 3.x, but I'm

>> trying to stick with the same versions across the board and I don't
have 
>> time to run my config through QA with 3.0 at this time.  Please don't
tell 
>> me to upgrade.)
>> 
>> My cache_peer lines look like:
>> 
>> cache_peer 10.23.14.4   sibling 80  3130  proxy-only
>> 
>> This is for a reverse proxy setup.
>> 
>> Dataflow is:
>> 
>> Customer -> Internet -> Akamai -> LB -> squid -> LB -> apache -> LB
-> 
>> storage
>> 
>> The "apache" layer does an image resize (which I want to cache) and
the url 
>> is http://xxx/resize.php?w=xx&h=xx&...
>> 
>> The "storage" layer is just another group of apache servers that
serve-up 
>> the raw files.
>> 
>> LB is a load-balancer.
>> 
>> - Steve
>> 
>
> Chris
>

-- 
Steve Webb - Lead System Administrator for Pronto.com
Email: [EMAIL PROTECTED]  (Please send any work requests to:
[EMAIL PROTECTED])
Cell: 303-564-4269, Office: 303-497-9367, YIM: scumola


Re: [squid-users] ICP queries for 'dynamic' urls?

2008-11-19 Thread Chris Robertson

Gregori Parker wrote:

I'm curious about this as well - so is the answer that siblings cannot
be queried for dynamic content and you need to use hierarchy_stoplist to
keep squid from trying?  Or is there a way to get ICP/HTCP to query
siblings with the entire URI, query arguments and all?  I have a very
similar setup and have been considering eliminating sibling
relationships altogether because of this...
  


Way back when the web was young and dynamic content was rare, query 
strings just about always indicated personalized, non-cacheable 
content.  Prior to versions 2.7 and 3.1 (I think), Squid by default did 
not even attempt to cache anything with "cgi-bin" or a question mark in 
the URL ("acl QUERY urlpath_regex cgi-bin \?" plus "no_cache deny 
QUERY").  Since this content was not cached, there was no reason to 
check whether it was cached on siblings (hierarchy_stoplist cgi-bin ?).


If you are using the now recommended refresh_pattern (refresh_pattern -i 
(/cgi-bin/|\?) 0 0% 0 ), dynamic content that provides freshness 
information can be cached (and that which doesn't, will not be), so 
removing the default hierarchy_stoplist might net you a few more hits.
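
Side by side, the two generations of config look roughly like this (the
first block being the old shipped defaults, the second the current
advice):

  # pre-2.7/3.1 defaults: never cache, never query peers
  acl QUERY urlpath_regex cgi-bin \?
  no_cache deny QUERY
  hierarchy_stoplist cgi-bin ?

  # newer style: cache dynamic replies that carry freshness information
  refresh_pattern -i (/cgi-bin/|\?) 0 0% 0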


Hope that clears it up.

Chris


RE: [squid-users] ICP queries for 'dynamic' urls?

2008-11-19 Thread Gregori Parker
I understand all that and am not using or questioning the default
config.  My config lacks definition for hierarchy_stoplist completely,
which means it's defined as internal default (which should be nada).

What I'm asking is: are my inter-cache/sibling/ICP/HTCP queries
including full URIs, or are they stripped at the '?' (i.e. s/?.*//) ?


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, November 19, 2008 2:40 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] ICP queries for 'dynamic' urls?

Gregori Parker wrote:
> I'm curious about this as well - so is the answer that siblings cannot
> be queried for dynamic content and you need to use hierarchy_stoplist
to
> keep squid from trying?  Or is there a way to get ICP/HTCP to query
> siblings with the entire URI, query arguments and all?  I have a very
> similar setup and have been considering eliminating sibling
> relationships altogether because of this...
>   

Way back when the web was young and dynamic content was rare, query 
strings just about always indicated personalized, non-cacheable 
content.  Prior to versions 2.7 and 3.1 (I think), Squid by default did 
not even attempt to cache anything with "cgi-bin" or a question mark in 
the URL ("acl QUERY urlpath_regex cgi-bin \?" plus "no_cache deny 
QUERY").  Since this content was not cached, there was no reason to 
check whether it was cached on siblings (hierarchy_stoplist cgi-bin ?).

If you are using the now recommended refresh_pattern (refresh_pattern -i 
(/cgi-bin/|\?) 0 0% 0 ), dynamic content that provides freshness 
information can be cached (and that which doesn't, will not be), so 
removing the default hierarchy_stoplist might net you a few more hits.

Hope that clears it up.

Chris


[squid-users] cachemgr no Cache Client List

2008-11-19 Thread Rick Chisholm
Anytime I check the Cache Client List I get:

Cache Clients:
TOTALS
ICP : 0 Queries, 0 Hits (  0%)
HTTP: 0 Requests, 0 Hits (  0%)

even when I know the Cache has clients... it's weird.

Rick



Re: [squid-users] ICP queries for 'dynamic' urls?

2008-11-19 Thread Chris Robertson

Gregori Parker wrote:

I understand all that and am not using or questioning the default
config.


Right.  Sorry 'bout that, then.


  My config lacks definition for hierarchy_stoplist completely,
which means it's defined as internal default (which should be nada).

What I'm asking is: are my inter-cache/sibling/ICP/HTCP queries
including full URI's or is it stripping at the '?' (i.e. s/?.*//) ?
  


Full URL including query; no headers (Cache-Control, Vary, etc.), no 
request method information (GET is assumed).


More info than you probably want is available from 
http://www.caida.org/outreach/papers/1998/icp-sq/icp-sq.ps.gz.


Chris


[squid-users] Squid very slow

2008-11-19 Thread Wilson Hernandez - MSD, S. A.
I am running NoCat along with squid3 and I am experiencing some 
problems:


Sometimes everything works fine but sometimes the system is extremely 
slow and I get the following error in the browser:


The requested URL could not be retrieved

While trying to retrieve the URL: 
http://us.mc625.mail.yahoo.com/mc/showFolder;_ylt=ArsEohpYUGGoVGGsFGqujqJjk70X?

The following error was encountered:

   Unable to determine IP address from host name for us.mc625.mail.yahoo.com 


The dnsserver returned:

   Refused: The name server refuses to perform the specified operation. 


This means that:

The cache was not able to resolve the hostname presented in the URL. 
Check if the address is correct. 


Your cache administrator is [EMAIL PROTECTED]
Generated Thu, 20 Nov 2008 01:07:31 GMT by localhost (squid/3.0.PRE5) 


--

I added my DNS servers to squid.conf with the dns_nameservers directive.
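
For reference, the directive takes a space-separated address list; the
addresses below are illustrative:

  dns_nameservers 192.168.0.1 192.168.0.2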





Re: [squid-users] Squid very slow

2008-11-19 Thread Rick Chisholm
Are you running DNS on the same box as squid or elsewhere on the LAN?

Create a list of 20-25 domain names in a file like names.lst and run
dig -f names.lst - that gives you an idea of how well DNS is working.
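
A quick sketch of that (list contents are illustrative):

  $ cat names.lst
  www.google.com
  us.mc625.mail.yahoo.com
  www.squid-cache.org
  $ dig -f names.lst | grep 'Query time'

Consistently high or erratic query times point at the resolver rather
than at squid.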


> Sometimes everything works fine but sometimes the system is extremely
> slow and I get the following error in the browser:
> 
> The requested URL could not be retrieved
> 
> While trying to retrieve the URL:
> http://us.mc625.mail.yahoo.com/mc/showFolder;_ylt=ArsEohpYUGGoVGGsFGqujqJjk70X?
> 
> 
> The following error was encountered:
> 
>Unable to determine IP address from host name for
> us.mc625.mail.yahoo.com
> The dnsserver returned:
> 
>Refused: The name server refuses to perform the specified operation.
> This means that:
> 
> The cache was not able to resolve the hostname presented in the URL.
> Check if the address is correct.
> Your cache administrator is [EMAIL PROTECTED]
> Generated Thu, 20 Nov 2008 01:07:31 GMT by localhost (squid/3.0.PRE5)
> --
> 
> I added my DNS servers to squid.conf with the dns_nameservers directive.
> 
> 
> 
> 



[squid-users] DG and Squid 1 Machine

2008-11-19 Thread ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░
Hi all,
sorry for my cross-posting but this is urgent :(
I have a problem here.

eth0 192.168.222.100 =>> goes to the LAN and acts as the clients' GW and
DNS (DG and Squid installed)
eth1 10.0.0.2 =>> goes to the load-balancing + DMZ server (public IP
forwarded / DMZ to this machine)

squid.conf :
http_port 2210 transparent

dansguardian.conf :
filterport = 2211
proxyip = 127.0.0.1
proxyport = 2210

rc.local
/sbin/iptables --table nat --append POSTROUTING --out-interface eth1
-j MASQUERADE
/sbin/iptables --append FORWARD --in-interface  eth1 -j ACCEPT
/sbin/iptables -t nat -A PREROUTING -i eth0 -p tcp -s
192.168.0.0/255.255.0.0 --dport 80 -j DNAT --to 192.168.222.100:2211
/sbin/iptables -t nat -A PREROUTING -p tcp -i eth1 -d 10.0.0.2 --dport
2210 -j DNAT --to-destination 192.168.222.100


output :
ERROR
The requested URL could not be retrieved

While trying to retrieve the URL: http://google.com/
The following error was encountered:
Access Denied.



What is wrong?


[squid-users] disable-internal-dns not working on 2.6.18

2008-11-19 Thread Joseph Jamieson
Hello,

I am trying to set up a Squid reverse proxy server in order to direct different 
web addresses to different servers.   The caching function is just an added 
bonus.

As I understand it, I need to use the --disable-internal-dns build option to do 
this, and put the various host names in /etc/hosts.

This is an Ubuntu box and I've downloaded all of the packages necessary to 
build squid, and it does build correctly.   I added the --disable-internal-dns 
option into debian/rules, built binary packages, and installed them.

Output of squid -v:

Squid Cache: Version 2.6.STABLE18
configure options:  '--prefix=/usr' '--exec_prefix=/usr' '--bindir=/usr/sbin' 
'--sbindir=/usr/sbin' '--libexecdir=/usr/lib/squid' '--sysconfdir=/etc/squid' 
'--localstatedir=/var/spool/squid' '--datadir=/usr/share/squid' 
'--enable-async-io' '--with-pthreads' 
'--enable-storeio=ufs,aufs,coss,diskd,null' '--enable-linux-netfilter' 
'--enable-arp-acl' '--enable-epoll' '--disable-internal-dns' 
'--enable-removal-policies=lru,heap' '--enable-snmp' '--enable-delay-pools' 
'--enable-htcp' '--enable-cache-digests' '--enable-underscores' 
'--enable-referer-log' '--enable-useragent-log' 
'--enable-auth=basic,digest,ntlm' '--enable-carp' 
'--enable-follow-x-forwarded-for' '--with-large-files' '--with-maxfd=65536' 
'i386-debian-linux' 'build_alias=i386-debian-linux' 
'host_alias=i386-debian-linux' 'target_alias=i386-debian-linux' 'CFLAGS=-Wall 
-g -O2' 'LDFLAGS=-Wl,-Bsymbolic-functions' 'CPPFLAGS='

The --disable-internal-dns option is listed.

However, when I try to run squid:

FATAL: cache_dns_program /usr/lib/squid/dnsserver: (2) No such file or directory
Squid Cache (Version 2.6.STABLE18): Terminated abnormally.
CPU Usage: 0.010 seconds = 0.000 user + 0.010 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 0
Aborted

I've tried to put in an empty script in that location to work around the issue 
but it doesn't work; here's the output from syslog:

Nov 19 22:47:32 vpn squid[22414]: Too few dnsserver processes are running
Nov 19 22:47:32 vpn squid[22414]: The dnsserver helpers are crashing too 
rapidly, need help!
Nov 19 22:47:32 vpn squid[22376]: Squid Parent: child process 22414 exited due 
to signal 6
Nov 19 22:47:32 vpn squid[22376]: Exiting due to repeated, frequent failures

So, the option is set and the compile doesn't build dnsserver, but squid is 
still looking for it and I don't know why.

Any ideas?   I'd love to get this up and running.  Squid 2.6's reverse proxy 
looks like it's going to be a lot easier to manage than older versions.