[squid-users] ACLs based on users based on Samba PDC?

2008-11-01 Thread Adam McCarthy
After much fussing, I seem to have Squid 2.6 working against
a Samba 3 PDC.

My only remaining question: can I say, OK, if Squid sees my username, give
it complete access?

Then perhaps, if it sees user "bob", it only gives
him access to windowsupdate.microsoft.com.

Then if it sees user "tony", it only gives him www.tony.com.

Can I do all of these Internet limiting features?
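For reference, Squid can express this with per-user proxy_auth ACLs combined with dstdomain ACLs. A hedged sketch (it assumes an authenticator is already configured via auth_param; the usernames and domains are the ones above):

```
# Sketch only; assumes proxy authentication is already working.
acl user_me   proxy_auth adam
acl user_bob  proxy_auth bob
acl user_tony proxy_auth tony
acl winupdate dstdomain windowsupdate.microsoft.com
acl tonysite  dstdomain www.tony.com

http_access allow user_me             # full access for this user
http_access allow user_bob winupdate  # bob: Windows Update only
http_access allow user_tony tonysite  # tony: www.tony.com only
http_access deny all
```

ACLs on the same http_access line are ANDed, so each allow line matches one user plus his permitted destinations.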


[squid-users] no response from squid while telnetting

2008-11-01 Thread anujstha



Hi,
   I am using Squid version 3.0.STABLE9. When I telnet to the Squid box, it
only shows:

[EMAIL PROTECTED] ~] % telnet proxy1.zodiac.com.np 80
Trying 202.79.40.131...
Connected to proxy1.zodiac.com.np.
Escape character is '^]'.

it doesn't send any bad-request error as older Squid versions did.

[EMAIL PROTECTED] ~] % telnet proxy3.wlink.com.np 80
Trying 202.79.62.13...
Connected to proxy3.wlink.com.np.
Escape character is '^]'.
aa
HTTP/1.0 400 Bad Request
Server: squid/2.6.STABLE14
Date: Sun, 02 Nov 2008 05:52:43 GMT
Content-Type: text/html
Content-Length: 1209
Expires: Sun, 02 Nov 2008 05:52:43 GMT
X-Squid-Error: ERR_INVALID_REQ 0
X-Cache: MISS from proxy3.wlink.com.np
X-Cache-Lookup: NONE from proxy3.wlink.com.np:3128
Via: 1.0 proxy3.wlink.com.np:3128 (squid/2.6.STABLE14)
Proxy-Connection: close

ERROR: The requested URL could not be retrieved

While trying to process the request:

    aa

The following error was encountered:

    Invalid Request

Some aspect of the HTTP Request is invalid.  Possible problems:
  - Missing or unknown request method
  - Missing URL
  - Missing HTTP Identifier (HTTP/1.0)
  - Request is too large
  - Content-Length missing for POST or PUT requests
  - Illegal character in hostname; underscores are not allowed

Your cache administrator is [EMAIL PROTECTED].

Generated Sun, 02 Nov 2008 05:52:43 GMT by proxy3.wlink.com.np 
(squid/2.6.STABLE14)


Connection closed by foreign host.


Re: [squid-users] Questions on research into using digest auth against MS AD2003

2008-11-01 Thread Chuck Kollars
> >  ... Digest authentication is a hashed authentication scheme, 
> > exchanging one-time hashes instead of passwords on the wire. ...

Please excuse what may be a really dumb question; I'm trying to grok how Digest 
authentication actually works with Squid, and this doesn't seem to me to quite 
add up. My current understanding is as follows:

"One-time" generally refers to the 'nonce' (and 'cnonce') used by 
challenge-response authentication protocols. But verifying the 
nonce-hashed-by-password would require using the actual original cleartext 
password, something proxies don't have (and can't obtain reliably yet 
securely). 

So proxies like Squid instead use the H(username:realm:password) field (which 
was originally intended mainly for identification). Most importantly 
this H(A1) field that Squid uses is the same every time (since Squid is always 
in the same 'realm'); it's *not* "one-time" in the sense of never ever 
repeating. 

What's wrong with this picture?

thanks! -Chuck Kollars


  


Re: [squid-users] SQUID + FIREFOX + ACTIVE DIRECTORY

2008-11-01 Thread nairb rotsak
If there is anything else I can post, please let me know.. I never even knew 
this was an issue..  The one client I started with a couple of years ago loves 
it, but they never would have let me go forward if some people had to log in 
and others didn't (half the users are on a TS farm.. and they all get IE).. so I 
can see how this would be an issue.



- Original Message 
From: Chris Nighswonger <[EMAIL PROTECTED]>
To: Amos Jeffries <[EMAIL PROTECTED]>
Cc: nairb rotsak <[EMAIL PROTECTED]>; matlor <[EMAIL PROTECTED]>; 
squid-users@squid-cache.org
Sent: Saturday, November 1, 2008 4:47:24 PM
Subject: Re: [squid-users] SQUID + FIREFOX + ACTIVE DIRECTORY

On Sat, Nov 1, 2008 at 12:37 AM, Amos Jeffries <[EMAIL PROTECTED]> wrote:
> Um, I'm not so sure the people having trouble are using the right helper.
>
> There is a thing calling itself 'ntlm_auth' bundled with squid 3.0 and
> Squid-2 releases that is incapable of doing full NTLM for modern windows
> domains.
>
> There is also something calling itself 'ntlm_auth' bundled with Samba, which
> provides full working NTLM functionality.
>
> We have fixed this mixup in 3.1, but please check the helper you are using.
> Please prefer to use the one by Samba.

We're using the Samba flavor. To be exact

[EMAIL PROTECTED] ~]# /usr/bin/ntlm_auth -V
Version 3.0.23c-2

>
> IE7 is more advanced than the earlier IEs and seems to be actually capable of
> proper negotiate auth. But it can be expected to fail with the limits imposed
> by Squid's 'ntlm_auth' thing.

The issues we are having are with FF (see Mozilla bug referenced
earlier in this thread). IE7 works fine on computers which are domain
members.

I'd still love to know what Nairb's config has that makes it work.

Regards,
Chris

>> - Original Message 
>> From: matlor <[EMAIL PROTECTED]>
>> To: squid-users@squid-cache.org
>> Sent: Thursday, October 30, 2008 9:15:55 AM
>> Subject: Re: [squid-users] SQUID + FIREFOX + ACTIVE DIRECTORY
>>
>>
>> I have tried your configuration... but I have the same problem.
>> squid version is 3.0.5
>>
>> in attachment there is one of my tested squid.conf.
>> only IE7 is working properly
>>
>> thanks in advance
>>
>>
>>
>>
>> nairb rotsak wrote:
>>>
>>> Always forget to hit the 'reply to all' instead of the 'reply'.. sorry..
>>> below is what I sent Chris:
>>>
>>> Below is for w2k3 AD and Ubuntu 6.06.1:
>>>
>>> auth_param ntlm program /usr/bin/ntlm_auth
>>> --helper-protocol=squid-2.5-ntlmssp auth_param ntlm children 15
>>> auth_param ntlm max_challenge_reuses 0
>>> auth_param ntlm max_challenge_lifetime 2 minutes
>>> #auth_param ntlm use_ntlm_negotiate off
>>> auth_param basic program /usr/bin/ntlm_auth
>>> --helper-protocol=squid-2.5-basic
>>> auth_param basic children 5
>>> auth_param basic realm Squid proxy-caching web server
>>> auth_param basic credentialsttl 2 hours
>>> auth_param basic casesensitive off
>>> acl NTLMUsers proxy_auth REQUIRED
>>> acl our_networks src 192.168.0.0/16
>>> http_access allow all NTLMUsers
>>> http_access allow our_networks
>>>
>>> Here is our current setup (w2k8 and Ubuntu 8.04.1):
>>>
>>> auth_param ntlm program /usr/bin/ntlm_auth
>>> --helper-protocol=squid-2.5-ntlmssp auth_param ntlm children 15
>>> auth_param ntlm keep_alive on
>>> acl our_networks src 192.168.0.0/16
>>> acl NTLMUsers proxy_auth REQUIRED
>>> external_acl_type ntgroup %LOGIN /usr/lib/squid/wbinfo_group.pl
>>> acl NOINTERNET external ntgroup no-internet
>>> http_access deny NOINTERNET
>>> http_access allow all NTLMUsers
>>> http_access allow our_networks
>>> http_access allow localhost
>>>
>>>
>>> We
>>> have a group policy configure the IE browser, but with Firefox, we have to set
>>> it manually.  Once it is set, there is no prompt... I use SARG to get
>>> the results.. Been doing it for almost three years.. I would get
>>> evangelical on people using iPrism/Barracuda/Websense.. but now I
>>> figure I will just let them spend the money.. ;-)
>>>
>>>
>>> - Original Message 
>>> From: Chris Nighswonger <[EMAIL PROTECTED]>
>>> To: nairb rotsak <[EMAIL PROTECTED]>
>>> Cc: matlor <[EMAIL PROTECTED]>; squid-users@squid-cache.org
>>> Sent: Wednesday, October 29, 2008 9:31:32 AM
>>> Subject: Re: [squid-users] SQUID + FIREFOX + ACTIVE DIRECTORY
>>>
>>> On Wed, Oct 29, 2008 at 10:23 AM, nairb rotsak <[EMAIL PROTECTED]>
>>> wrote:

 I am totally confused by this statement.. as I have 300 people using
 firefox right now.. using Ubuntu 6.06, Samba3, Squid2.. and not a single
 one gets a user/pass prompt?  I am not using it as a transparent proxy,
 it is listed in firefox under proxy settings (8080 because it goes to DG
 first.. but I have tested just Squid at 3128 and it works as well).. and
 I haven't touched anything else in firefox
>>>
>>> I'd be very interested in knowing what is different about your setup.
>>> I have fought this problem for several years now.
>>>
>>>


 - Original Message 
 From: Chris Nighswonger <[EMAIL PROT

Re: [squid-users] SQUID + FIREFOX + ACTIVE DIRECTORY

2008-11-01 Thread Chris Nighswonger
On Sat, Nov 1, 2008 at 12:37 AM, Amos Jeffries <[EMAIL PROTECTED]> wrote:
> Um, I'm not so sure the people having trouble are using the right helper.
>
> There is a thing calling itself 'ntlm_auth' bundled with squid 3.0 and
> Squid-2 releases that is incapable of doing full NTLM for modern windows
> domains.
>
> There is also something calling itself 'ntlm_auth' bundled with Samba, which
> provides full working NTLM functionality.
>
> We have fixed this mixup in 3.1, but please check the helper you are using.
> Please prefer to use the one by Samba.

We're using the Samba flavor. To be exact

[EMAIL PROTECTED] ~]# /usr/bin/ntlm_auth -V
Version 3.0.23c-2

>
> IE7 is more advanced than the earlier IEs and seems to be actually capable of
> proper negotiate auth. But it can be expected to fail with the limits imposed
> by Squid's 'ntlm_auth' thing.

The issues we are having are with FF (see Mozilla bug referenced
earlier in this thread). IE7 works fine on computers which are domain
members.

I'd still love to know what Nairb's config has that makes it work.

Regards,
Chris

>> - Original Message 
>> From: matlor <[EMAIL PROTECTED]>
>> To: squid-users@squid-cache.org
>> Sent: Thursday, October 30, 2008 9:15:55 AM
>> Subject: Re: [squid-users] SQUID + FIREFOX + ACTIVE DIRECTORY
>>
>>
>> I have tried your configuration... but I have the same problem.
>> squid version is 3.0.5
>>
>> in attachment there is one of my tested squid.conf.
>> only IE7 is working properly
>>
>> thanks in advance
>>
>>
>>
>>
>> nairb rotsak wrote:
>>>
>>> Always forget to hit the 'reply to all' instead of the 'reply'.. sorry..
>>> below is what I sent Chris:
>>>
>>> Below is for w2k3 AD and Ubuntu 6.06.1:
>>>
>>> auth_param ntlm program /usr/bin/ntlm_auth
>>> --helper-protocol=squid-2.5-ntlmssp auth_param ntlm children 15
>>> auth_param ntlm max_challenge_reuses 0
>>> auth_param ntlm max_challenge_lifetime 2 minutes
>>> #auth_param ntlm use_ntlm_negotiate off
>>> auth_param basic program /usr/bin/ntlm_auth
>>> --helper-protocol=squid-2.5-basic
>>> auth_param basic children 5
>>> auth_param basic realm Squid proxy-caching web server
>>> auth_param basic credentialsttl 2 hours
>>> auth_param basic casesensitive off
>>> acl NTLMUsers proxy_auth REQUIRED
>>> acl our_networks src 192.168.0.0/16
>>> http_access allow all NTLMUsers
>>> http_access allow our_networks
>>>
>>> Here is our current setup (w2k8 and Ubuntu 8.04.1):
>>>
>>> auth_param ntlm program /usr/bin/ntlm_auth
>>> --helper-protocol=squid-2.5-ntlmssp auth_param ntlm children 15
>>> auth_param ntlm keep_alive on
>>> acl our_networks src 192.168.0.0/16
>>> acl NTLMUsers proxy_auth REQUIRED
>>> external_acl_type ntgroup %LOGIN /usr/lib/squid/wbinfo_group.pl
>>> acl NOINTERNET external ntgroup no-internet
>>> http_access deny NOINTERNET
>>> http_access allow all NTLMUsers
>>> http_access allow our_networks
>>> http_access allow localhost
>>>
>>>
>>> We
>>> have a group policy configure the IE browser, but with Firefox, we have to set
>>> it manually.  Once it is set, there is no prompt... I use SARG to get
>>> the results.. Been doing it for almost three years.. I would get
>>> evangelical on people using iPrism/Barracuda/Websense.. but now I
>>> figure I will just let them spend the money.. ;-)
>>>
>>>
>>> - Original Message 
>>> From: Chris Nighswonger <[EMAIL PROTECTED]>
>>> To: nairb rotsak <[EMAIL PROTECTED]>
>>> Cc: matlor <[EMAIL PROTECTED]>; squid-users@squid-cache.org
>>> Sent: Wednesday, October 29, 2008 9:31:32 AM
>>> Subject: Re: [squid-users] SQUID + FIREFOX + ACTIVE DIRECTORY
>>>
>>> On Wed, Oct 29, 2008 at 10:23 AM, nairb rotsak <[EMAIL PROTECTED]>
>>> wrote:

 I am totally confused by this statement.. as I have 300 people using
 firefox right now.. using Ubuntu 6.06, Samba3, Squid2.. and not a single
 one gets a user/pass prompt?  I am not using it as a transparent proxy,
 it is listed in firefox under proxy settings (8080 because it goes to DG
 first.. but I have tested just Squid at 3128 and it works as well).. and
 I haven't touched anything else in firefox
>>>
>>> I'd be very interested in knowing what is different about your setup.
>>> I have fought this problem for several years now.
>>>
>>>


 - Original Message 
 From: Chris Nighswonger <[EMAIL PROTECTED]>
 To: matlor <[EMAIL PROTECTED]>
 Cc: squid-users@squid-cache.org
 Sent: Wednesday, October 29, 2008 8:48:39 AM
 Subject: Re: [squid-users] SQUID + FIREFOX + ACTIVE DIRECTORY

 On Tue, Oct 28, 2008 at 6:18 AM, matlor <[EMAIL PROTECTED]> wrote:
>
> I have configured squid with winbind integrated in the active directory
> of a
> windows 2003 domain.
> If I browse the internet through IE 7, everything is OK: no user and password
> prompt, because of the common login. But if I open Firefox (version 2 or 3),
> it prompts for user and password.

 One other note: While FF does support NTLM, it does not

Re: [squid-users] Ignoring query string from url

2008-11-01 Thread Henrik Nordstrom
On tor, 2008-10-30 at 19:50 +0530, nitesh naik wrote:

> The url rewrite helper script works fine for a few requests (100 req/sec)
> but response slows down as the number of requests increases, and it takes
> 10+ seconds to deliver the objects.

I've run setups like this at more than a thousand requests/s.

> Is there way to optimise it further ?
> 
> url_rewrite_program  /home/zdn/bin/redirect_parallel.pl
> url_rewrite_children 2000
> url_rewrite_concurrency 5

Those two should be the other way around.

url_rewrite_concurrency 2000
url_rewrite_children 2

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Clients running amok - what can one do?

2008-11-01 Thread Henrik Nordstrom
On tor, 2008-10-30 at 09:25 +0100, Ralf Hildebrandt wrote:
> Ever so often we have clients (browsers) that are somehow (?) caught
> in a tight loop, resulting in a LOT of queries - one example
> 
> 7996 10.39.108.198 
> http://cdn.media.zylom.com/images/site/whitelabel/promo/deluxefeature/button_up.gif
> 
> (7996 requests per hour from 10.39.108.198 for
> http://cdn.media.zylom.com/images/site/whitelabel/promo/deluxefeature/button_up.gif)
> 
> How can I automatically throttle such clients?
> I'm either looking for an iptables or squid solution.

Use iptables to blacklist the client until it behaves.
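In concrete (illustrative) terms, using the client address from Ralf's example — the proxy port 3128 is an assumption:

```
# Block the runaway client from the proxy port until its browser is fixed:
iptables -I INPUT -s 10.39.108.198 -p tcp --dport 3128 -j REJECT
# Remove the rule again once it behaves:
iptables -D INPUT -s 10.39.108.198 -p tcp --dport 3128 -j REJECT
```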

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Pushing HTTP-Response into the cache

2008-11-01 Thread Henrik Nordstrom
On lör, 2008-11-01 at 19:48 +0100, Willem Stender wrote:

> So here is my question: How to push the data directly into squid's 
> cache? Is there any interfaces? Some port, so i can use sockets or 
> something like that?

cache_peer, cache_peer_access, never_direct and a suitable HTTP request
sent to Squid.
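A hedged squid.conf sketch of that combination (peer address, port and domain are illustrative):

```
# Route selected requests through the DTN proxy instead of going direct;
# cacheable responses it returns then populate Squid's cache.
cache_peer 127.0.0.1 parent 8080 0 no-query name=dtnproxy
acl dtn_sites dstdomain .example.org
cache_peer_access dtnproxy allow dtn_sites
cache_peer_access dtnproxy deny all
never_direct allow dtn_sites
```

A "priming" HTTP request to Squid for a dtn_sites URL is then forwarded to the DTN proxy, and its response is stored like any other cacheable reply.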


Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Squid 3.1

2008-11-01 Thread Henrik Nordstrom
On lör, 2008-11-01 at 14:05 +0200, İsmail ÖZATAY wrote:
> > I'm suspecting it may be gcc-3.3 related. Is there a more recent gcc 
> > version you can upgrade to and try again?
> >
> > Amos
> Oops, I am already using gcc version 3.3.5. ;) I have just checked it...

Is there any newer GCC version than 3.3.x available to you?

GCC 3.3 reached end-of-life some years ago; 3.3.5 was released in September 2004.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Questions on research into using digest auth against MS AD2003

2008-11-01 Thread Henrik Nordstrom
On fre, 2008-10-31 at 13:55 -0500, Richard wrote:
> * What specific piece of the puzzle on the client side of the NTLM or
> kerberos authentication methods keeps the authentication traffic secure by
> sending only the credential hashes?

The client talks to the Microsoft SSP libraries and subsystem when
requested to provide authentication by a trusted proxy.

>   (Am I correct in 
> understanding that it is the ntlm_auth program that speaks to the NTLM 
> client and negotiates for the credential hashes to be exchanged?)

No and yes, that's the server side that Squid uses for speaking to the
domain controllers to verify the provided credentials. The first thing
this does is to send a challenge which is relayed by Squid to the
client.

> * When squid is configured to use *digest* authentication, I understand 
> that the traffic between the squid server and the LDAP server is 
> encrypted.  Is the traffic between the browser and the squid server 
> also encrypted when using Digest?   If so, how is it the client browser 
> know to encrypt/hash the communications for the return trip to the server?

Digest authentication is a hashed authentication scheme, exchanging
one-time hashes instead of passwords on the wire. The actual password is
only known by the client, the server only knows how to verify that the
exchanged one-time hash corresponds to the password and current session.
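To make the distinction concrete, here is a sketch of the RFC 2617 Digest calculation (qop=auth case; names and values are illustrative). H(A1) is the fixed per-user value a helper can verify, while the wire value mixes in session data and changes every exchange:

```python
import hashlib

def h(data):
    """MD5 hex digest, as used by HTTP Digest authentication."""
    return hashlib.md5(data.encode()).hexdigest()

def digest_response(username, realm, password, method, uri,
                    nonce, nc, cnonce, qop="auth"):
    # H(A1) depends only on username:realm:password; this constant value
    # is what a Digest helper can check without the cleartext password.
    ha1 = h("%s:%s:%s" % (username, realm, password))
    ha2 = h("%s:%s" % (method, uri))
    # The value actually sent on the wire also hashes in the server nonce,
    # request counter and client nonce, so it differs per exchange.
    return h("%s:%s:%s:%s:%s:%s" % (ha1, nonce, nc, cnonce, qop, ha2))
```

So the exchanged hash is one-time in the sense that it never repeats across sessions, even though the underlying H(A1) is the same every time.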

> **Short of loading a program on a client machine, are there any 
> proxy servers out there that can prompt for credentials while keeping 
> secure the communication between the workstation and the proxy server?

Using digest authentication will do this.

> ** What is it that has to happen to ensure that the authentication 
> traffic from any browser to any proxy server is encrypted?

Neither NTLM, Kerberos nor Digest is encrypted. But in all three the
exchanged "password" is a one-time cryptographic hash of the password
and various session-dependent details.

Modern Windows versions provide single sign-on for all three, but also
support prompting for credentials if the proxy isn't trusted or (Digest
only) the realm is not the AD domain.

> * Considering the fact that I'm trying to use digest_ldap_auth against 
> an MS LDAP/AD 2003 server that should be storing several precomputed 
> digest hash versions of H(username:realm:password)

You can't use this helper to access the standard Active Directory
password details, but you can store an additional suitable Digest hash
in another attribute and tell the helper to use this.

Or you can use a separate Digest password file on the proxy, and only
verify group memberships etc. in the AD.


> A) Is it even possible to use digest_ldap_auth to do digest authenticate 
> against an Active Directory 2003's LDAP database server?

Yes, but not against the system password. At least not without writing an
AD extension.

> B) What would be a working example command line of a successful 
> digest_ldap_auth test against an AD 2003 server? (In my attempts, I have 
> been unable to identify the proper digest hash containing LDAP (-A) 
> attribute to use in a lookup.  I *THINK* this is because MS AD2003 
> expects the digest hash request to come via a SASL mechanism...which 
> begs the question...is there a  SASL mechanism that works with 
> squid+AD2003?)

The Microsoft AD Digest implementation expects to be fully responsible
for the Digest implementation itself, from what I understand, but I am not
sure. One way to find out is to read the Microsoft protocol
documentation, which is provided on request. I don't have access to these
documents.

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] MSNT authentication - login window

2008-11-01 Thread Henrik Nordstrom
On fre, 2008-10-31 at 08:43 -0200, Luciano Cassemiro wrote:

> Everything is OK but what bothers me is: the login window shows up when a
> user tries to connect to a forbidden site; he fills in his credentials, BUT
> after the OK button the login window appears again and again until the user
> clicks cancel.

This happens if the last ACL on the http_access deny line denying access
is related to authentication.

Now I am a little confused, as the http_access rules you posted did not
have this.. are there other http_access deny lines in your squid.conf?
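To illustrate the pattern (ACL names are made up), the usual fix is to make a non-authentication ACL such as 'all' the last one on the deny line:

```
acl password proxy_auth REQUIRED
acl badsites dstdomain .forbidden.example

# Last ACL checked is auth-related: a denial re-challenges the browser,
# so the login window pops up again and again.
#http_access deny badsites password

# Last ACL is 'all': the denial is final and an error page is returned.
http_access deny badsites all
```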


Regards
Henrik


signature.asc
Description: This is a digitally signed message part


[squid-users] Pushing HTTP-Response into the cache

2008-11-01 Thread Willem Stender

Hi,

I am trying to build an application which can transport HTTP messages over 
DTN (www.dtnrg.org); you could call it a DTN proxy. The web browser sends its 
requests to Squid, and on a cache miss the HTTP request is forwarded to the 
DTN proxy, which fetches the response over DTN. When the response arrives, 
the DTN proxy must push the data into Squid's cache, so that the continuously 
refreshing web browser gets the page the next time.
So here is my question: how can I push data directly into Squid's cache? Is 
there any interface? Some port, so I can use sockets or something like that?


Thanks for any hints!

Bye,
Willem


Re: [squid-users] caching webdav traffic

2008-11-01 Thread Henrik Nordstrom
On tor, 2008-10-30 at 11:29 -0400, Seymen Ertas wrote:

> I am trying to cache webdav traffic through a squid proxy, I have the
> squid proxy configured in accel mode and have turned on the
> "Cache-control: Public" on my server for the reason that every request
> I send does contain an "Authorization" header; however, I am unable to
> cache the data.

What do the response headers look like? (The request headers may also be
relevant, but strip out the Authorization headers in that case, or use a
dummy account.)

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] High Load

2008-11-01 Thread Luis Daniel Lucio Quiroz
Thanks all

it was a uname -a problem.

> Pablo García wrote:
> > Luis, please define "heavy load": how many req/s? Is this a forward,
> > transparent or reverse proxy? Is this a memory-only cache? What are
> > the vmstat outputs when it stops responding? Did you run ulimit -n
> > 16384 before starting Squid?
> > Are there any error messages in the cache.log?
>
> Also...
>   how many cached objects are being handled when it starts the weirdness?
> what cache format are you using? on what OS? how big is it?
> what sort of traffic handling has it been subject to? (avg object sizes
> etc)
>
> > Regards, Pablo
> >
> > On Thu, Oct 30, 2008 at 4:09 PM, Luis Daniel Lucio Quiroz
> >
> > <[EMAIL PROTECTED]> wrote:
> >> Hi Squids,
> >> We are putting Squid into a heavy-load environment.  My Squid is getting
> >> tired; after a while of load testing, 3128/tcp begins to stop responding
> >> (randomly) to requests.  All other ports on that server respond OK.
> >> I've recompiled my squid with 16k file handles, but this does not seem
> >> to help.  Is there any other suggestion?
> >> Regards,
> >> LD
>
> Amos
On Friday 31 October 2008 22:16:38 Amos Jeffries wrote:





Re: [squid-users] Squid 3.1

2008-11-01 Thread İsmail ÖZATAY

Amos Jeffries yazmış:

İsmail ÖZATAY wrote:

Amos Jeffries yazmış:

İsmail ÖZATAY wrote:

Amos Jeffries yazmış:

İsmail ÖZATAY wrote:

Hi there,

I cannot configure squid 3.1 beta on my openbsd 4.3 server. When I 
try to configure it, I get lots of errors. Has anybody ever tried this?


Thanks

ismail


Some details about the errors would be helpful.
Others have managed to get it to work on OpenBSD.

Amos

Here is the some of output.


Okay, those look like something seriously wrong with the compilers 
found. Can you send me the full config.log created by configure, 
please?


Amos

Sure. Here it is.


Oh bugger. You have run into one of the configure bugs we have not 
been able to solve as yet. The mysterious ' missing terminating " 
character ' bug.


I'm suspecting it may be gcc-3.3 related. Is there a more recent gcc 
version you can upgrade to and try again?


Amos

Oops, I am already using gcc version 3.3.5. ;) I have just checked it...


Re: [squid-users] squid accelerator always requests peer to refresh

2008-11-01 Thread Amos Jeffries

Daniel Vollbrecht wrote:

Are the dynamically generated pages given proper expiry information?
(Expires: or Cache-Control: headers)


the dynamically generated page answers with these headers (wget -S):

  Expires: Mon, 26 Jul 1997 05:00:00 GMT
  Cache-Control: no-cache, must-revalidate
  Pragma: no-cache

Should this not be ignored by squid because of my following refresh_pattern 
setting?


The no-cache and expires should be. But the must-revalidate makes Squid 
send an IMS (If-Modified-Since) request to the server asking if there 
are any changes.
If the server responded with a basic 304, it would show up as IMS_HIT, 
same as the images. But as a dynamic page it sends back a full new 
object, turning the result into REFRESH_MODIFIED.
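Abbreviated, the two possible outcomes of that revalidation look like this on the wire (headers trimmed; the date is illustrative):

```
GET /test.html HTTP/1.0
If-Modified-Since: Sat, 01 Nov 2008 12:00:00 GMT

HTTP/1.0 304 Not Modified          -> logged as an IMS/refresh hit
HTTP/1.0 200 OK  (full new body)   -> logged as TCP_REFRESH_MODIFIED
```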




refresh_pattern .   10 80% 30 override-expire override-lastmod 
ignore-reload ignore-no-cache ignore-no-store ignore-private


What the TCP_REFRESH_MODIFIED means is that a IMS request was sent to
verify the data, but the server returned a full object with changes
instead of a 304.


The problem is that the CMS is old and a bit buggy. Therefore the best solution 
would be if Squid did not care about these headers and cached everything for up 
to 30 min under any circumstances (for read-write access we have a separate 
subdomain, so that would not be affected).


You are testing by using the force reload button on the browser right?
Thats sending a must-revalidate request through to Squid which triggers
an IMS.


Right, but at the same time other requests from other clients also show this 
behaviour. Why do all requests for dynamic web page content give 
TCP_REFRESH_MODIFIED:FIRST_UP_PARENT, while all requests for gifs etc. get a 
TCP_HIT?


Thanks,
Daniel



--
Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.1


Re: [squid-users] squid accelerator always requests peer to refresh

2008-11-01 Thread Daniel Vollbrecht
> Are the dynamically generated pages given proper expiry information?
> (Expires: or Cache-Control: headers)

the dynamically generated page answers with these headers (wget -S):

  Expires: Mon, 26 Jul 1997 05:00:00 GMT
  Cache-Control: no-cache, must-revalidate
  Pragma: no-cache

Should this not be ignored by squid because of my following refresh_pattern 
setting?

refresh_pattern .   10 80% 30 override-expire override-lastmod 
ignore-reload ignore-no-cache ignore-no-store ignore-private

> What the TCP_REFRESH_MODIFIED means is that a IMS request was sent to
> verify the data, but the server returned a full object with changes
> instead of a 304.

The problem is that the CMS is old and a bit buggy. Therefore the best solution 
would be if Squid did not care about these headers and cached everything for up 
to 30 min under any circumstances (for read-write access we have a separate 
subdomain, so that would not be affected).

> You are testing by using the force reload button on the browser right?
> Thats sending a must-revalidate request through to Squid which triggers
> an IMS.

Right, but at the same time other requests from other clients also show this 
behaviour. Why do all requests for dynamic web page content give 
TCP_REFRESH_MODIFIED:FIRST_UP_PARENT, while all requests for gifs etc. get a 
TCP_HIT?


Thanks,
Daniel


Re: [squid-users] Squid 3.1

2008-11-01 Thread Amos Jeffries

İsmail ÖZATAY wrote:

Amos Jeffries yazmış:

İsmail ÖZATAY wrote:

Amos Jeffries yazmış:

İsmail ÖZATAY wrote:

Hi there,

I cannot configure squid 3.1 beta on my openbsd 4.3 server. When I 
try to configure it, I get lots of errors. Has anybody ever tried this?


Thanks

ismail


Some details about the errors would be helpful.
Others have managed to get it to work on OpenBSD.

Amos

Here is the some of output.


Okay, those look like something seriously wrong with the compilers 
found. Can you send me the full config.log created by configure, please?


Amos

Sure. Here it is.


Oh bugger. You have run into one of the configure bugs we have not been 
able to solve as yet. The mysterious ' missing terminating " character ' 
bug.


I'm suspecting it may be gcc-3.3 related. Is there a more recent gcc 
version you can upgrade to and try again?


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.1


Re: [squid-users] squid accelerator always requests peer to refresh

2008-11-01 Thread Amos Jeffries

Daniel Vollbrecht wrote:

I configured Squid (3.0 STABLE7) in web accelerator mode to speed up a slow 
dynamic website.

Now all img and css files are served entirely by Squid; they don't show up on 
the slow machine 10.1.1.2. But the dynamic content itself always (page reload, 
other clients) leads to a request to the slow machine:

--- 8< ---
1225483916.728    444 10.1.1.254 TCP_REFRESH_MODIFIED/200 8889 GET 
http://www.mydomain.de/test.html - FIRST_UP_PARENT/10.1.1.2 text/html
1225483917.113  0 10.1.1.254 TCP_IMS_HIT/304 283 GET 
http://www.mydomain.de/img/pic.gif - NONE/- image/gif
--- >8 ---

Even with all the HTTP-standard-violating options for refresh_pattern that I 
used (see config below).

The expected result should be that the slow machine only receives requests once 
within 30 min. Is something wrong in my config or what is your suggestion?



Are the dynamically generated pages given proper expiry information? 
(Expires: or Cache-Control: headers)


What the TCP_REFRESH_MODIFIED means is that a IMS request was sent to 
verify the data, but the server returned a full object with changes 
instead of a 304.


You are testing by using the force reload button on the browser right? 
Thats sending a must-revalidate request through to Squid which triggers 
an IMS.


Amos


 squid.conf --

http_port 10.1.1.1:80 accel defaultsite=www.mydomain.de vhost
cache_peer 10.1.1.2 parent 80 0 no-query originserver

cache_mgr [EMAIL PROTECTED]

# restrict access
acl okdomains dstdomain www.mydomain.de mydomain.de
http_access allow okdomains

acl cache urlpath_regex \?
never_direct allow cache

refresh_pattern .   10 80% 30 override-expire override-lastmod 
ignore-reload ignore-no-cache ignore-no-store ignore-private

url_rewrite_host_header off

# cache tuning
cache_mem 32 MB
maximum_object_size_in_memory 40 KB
maximum_object_size 32 MB
cache_replacement_policy heap GDSF

 squid.conf --


Thanks,
Daniel




--
Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.1


[squid-users] squid accelerator always requests peer to refresh

2008-11-01 Thread Daniel Vollbrecht
I configured Squid (3.0 STABLE7) in web accelerator mode to speed up a slow 
dynamic website.

Now all img and css files are served entirely by Squid; they don't show up on 
the slow machine 10.1.1.2. But the dynamic content itself always (page reload, 
other clients) leads to a request to the slow machine:

--- 8< ---
1225483916.728    444 10.1.1.254 TCP_REFRESH_MODIFIED/200 8889 GET 
http://www.mydomain.de/test.html - FIRST_UP_PARENT/10.1.1.2 text/html
1225483917.113  0 10.1.1.254 TCP_IMS_HIT/304 283 GET 
http://www.mydomain.de/img/pic.gif - NONE/- image/gif
--- >8 ---

Even with all the HTTP-standard-violating options for refresh_pattern that I 
used (see config below).

The expected result should be that the slow machine only receives requests once 
within 30 min. Is something wrong in my config or what is your suggestion?

 squid.conf --

http_port 10.1.1.1:80 accel defaultsite=www.mydomain.de vhost
cache_peer 10.1.1.2 parent 80 0 no-query originserver

cache_mgr [EMAIL PROTECTED]

# restrict access
acl okdomains dstdomain www.mydomain.de mydomain.de
http_access allow okdomains

acl cache urlpath_regex \?
never_direct allow cache

refresh_pattern .   10 80% 30 override-expire override-lastmod 
ignore-reload ignore-no-cache ignore-no-store ignore-private

url_rewrite_host_header off

# cache tuning
cache_mem 32 MB
maximum_object_size_in_memory 40 KB
maximum_object_size 32 MB
cache_replacement_policy heap GDSF

----- squid.conf -----


Thanks,
Daniel



Re: [squid-users] Performance

2008-11-01 Thread Kinkie
I'd also check "df -i"; maybe you're running out of inodes in your cache dir.
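Kinkie's check can be scripted; a minimal sketch, assuming here that the cache_dir lives on the root filesystem (substitute the real mount point of your cache partition):

```shell
# Report inode usage for the filesystem holding the cache_dir.
# "/" is a stand-in path; each cached object consumes one inode,
# so a cache_dir with many small objects can exhaust them.
df -i / | awk 'NR==2 {print "inode use: " $5}'
```

If the reported percentage is at or near 100%, Squid can fail to create new cache files even though "df" shows free space.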

On 11/1/08, Amos Jeffries <[EMAIL PROTECTED]> wrote:
> Marcel Grandemange wrote:
>> Good day users.
>>
>>
>> I seem to have a performance issue: my squid server doesn't seem to
>> exceed 400k on objects in cache. It is not the specs of the box, as I'm
>> able to achieve 8m with different proxy software on a P3.
>>
>> Advise? Need More info?
>>
>
> Yes,
>   * version of squid (including release number)?
>   * some config info.
>
> Specific to your problem, some things to check are:
>   does 400k mean 400k objects cached or 400kB/sec fetch speeds?
>   are delay pools in use?
>   is there a single cache_dir per disk spindle?
>
> We may also need to check for efficient use of access controls. Some types,
> like regex, are known to cause major speed bumps.
>
> Amos
> --
> Please be using
>Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
>Current Beta Squid 3.1.0.1
>


-- 
/kinkie
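On Amos's point about regex ACLs: dstdomain matches use an efficient tree lookup, while url_regex runs every pattern against every request URL, so a plain domain list is much cheaper when it suffices. A hedged sketch (the domain names are placeholders):

```conf
# Fast: indexed domain match; the leading dot also covers subdomains
acl fastsites dstdomain .example.com .example.org
http_access allow fastsites

# Slow: avoid unless genuine pattern matching is required
# acl slowsites url_regex -i example\.(com|org)
```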


Re: [squid-users] Squid 3.1

2008-11-01 Thread Amos Jeffries

İsmail ÖZATAY wrote:

Amos Jeffries wrote:

İsmail ÖZATAY wrote:

Hi there,

I cannot configure squid 3.1 beta on my openbsd 4.3 server. When I try 
to configure it I get lots of errors. Has anybody ever tried this?


Thanks

ismail


Some details about the errors would be helpful.
Others have managed to get it to work on OpenBSD.

Amos

Here is some of the output.


Okay, those look like something seriously wrong with the compiler that was 
found. Can you send me the full config.log created by configure, please?


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.1
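For anyone hitting the same "present but cannot be compiled" warnings: config.log records the exact compiler invocation and error behind each failed header check, which is usually more informative than the summary warnings. A sketch of pulling one out, shown against a mock log so the command is concrete; a real build would grep the actual config.log in the build directory:

```shell
# Build a tiny mock config.log to demonstrate the search pattern;
# the line contents here are invented for illustration only.
cat > /tmp/mock-config.log <<'EOF'
configure:12345: checking pwd.h usability
configure:12346: cc -c -O2 conftest.c >&5
conftest.c:52: error: expected declaration
configure:12349: $? = 1
EOF
# Show the failing check plus the lines that follow it
grep -A 3 'checking pwd.h' /tmp/mock-config.log
```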


Re: [squid-users] Squid 3.1

2008-11-01 Thread İsmail ÖZATAY

Amos Jeffries wrote:

İsmail ÖZATAY wrote:

Hi there,

I cannot configure squid 3.1 beta on my openbsd 4.3 server. When I try 
to configure it I get lots of errors. Has anybody ever tried this?


Thanks

ismail


Some details about the errors would be helpful.
Others have managed to get it to work on OpenBSD.

Amos

Here is some of the output.

configure: WARNING: pwd.h: present but cannot be compiled
configure: WARNING: pwd.h: check for missing prerequisite headers?
configure: WARNING: pwd.h: see the Autoconf documentation
configure: WARNING: pwd.h: section "Present But Cannot Be Compiled"
configure: WARNING: pwd.h: proceeding with the preprocessor's result
configure: WARNING: pwd.h: in the future, the compiler will take precedence
configure: WARNING: ## ---------------------------------------------- ##
configure: WARNING: ## Report this to http://www.squid-cache.org/bugs/ ##
configure: WARNING: ## ---------------------------------------------- ##

configure: WARNING: regex.h: present but cannot be compiled
configure: WARNING: regex.h: check for missing prerequisite headers?
configure: WARNING: regex.h: see the Autoconf documentation
configure: WARNING: regex.h: section "Present But Cannot Be Compiled"
configure: WARNING: regex.h: proceeding with the preprocessor's result
configure: WARNING: regex.h: in the future, the compiler will take precedence
configure: WARNING: ## ---------------------------------------------- ##
configure: WARNING: ## Report this to http://www.squid-cache.org/bugs/ ##
configure: WARNING: ## ---------------------------------------------- ##

configure: WARNING: sched.h: present but cannot be compiled
configure: WARNING: sched.h: check for missing prerequisite headers?
configure: WARNING: sched.h: see the Autoconf documentation
configure: WARNING: sched.h: section "Present But Cannot Be Compiled"
configure: WARNING: sched.h: proceeding with the preprocessor's result
configure: WARNING: sched.h: in the future, the compiler will take precedence
configure: WARNING: ## ---------------------------------------------- ##
configure: WARNING: ## Report this to http://www.squid-cache.org/bugs/ ##
configure: WARNING: ## ---------------------------------------------- ##

configure: WARNING: signal.h: present but cannot be compiled
configure: WARNING: signal.h: check for missing prerequisite headers?
configure: WARNING: signal.h: see the Autoconf documentation
configure: WARNING: signal.h: section "Present But Cannot Be Compiled"
configure: WARNING: signal.h: proceeding with the preprocessor's result
configure: WARNING: signal.h: in the future, the compiler will take precedence
configure: WARNING: ## ---------------------------------------------- ##
configure: WARNING: ## Report this to http://www.squid-cache.org/bugs/ ##
configure: WARNING: ## ---------------------------------------------- ##

configure: WARNING: stdarg.h: present but cannot be compiled
configure: WARNING: stdarg.h: check for missing prerequisite headers?
configure: WARNING: stdarg.h: see the Autoconf documentation
configure: WARNING: stdarg.h: section "Present But Cannot Be Compiled"
configure: WARNING: stdarg.h: proceeding with the preprocessor's result
configure: WARNING: stdarg.h: in the future, the compiler will take precedence
configure: WARNING: ## ---------------------------------------------- ##
configure: WARNING: ## Report this to http://www.squid-cache.org/bugs/ ##
configure: WARNING: ## ---------------------------------------------- ##

configure: WARNING: stddef.h: present but cannot be compiled
configure: WARNING: stddef.h: check for missing prerequisite headers?
configure: WARNING: stddef.h: see the Autoconf documentation
configure: WARNING: stddef.h: section "Present But Cannot Be Compiled"
configure: WARNING: stddef.h: proceeding with the preprocessor's result
configure: WARNING: stddef.h: in the future, the compiler will take precedence
configure: WARNING: ## ---------------------------------------------- ##
configure: WARNING: ## Report this to http://www.squid-cache.org/bugs/ ##
configure: WARNING: ## ---------------------------------------------- ##

configure: WARNING: stdio.h: present but cannot be compiled
configure: WARNING: stdio.h: check for missing prerequisite headers?
configure: WARNING: stdio.h: see the Autoconf documentation
configure: WARNING: stdio.h: section "Present But Cannot Be Compiled"
configure: WARNING: stdio.h: proceeding with the preprocessor's result
configure: WARNING: stdio.h: in the future, the compiler will take precedence
configure: WARNING: ## ---------------------------------------------- ##
configure: WARNING: ## Report this to http://www.squid-cache.org/bugs/ ##
configure: WARNING: ## ---------------------------------------------- ##

configure: WARNING: sys/endian.h: present but cannot be compile