[squid-users] Squid Version

2011-11-13 Thread Malik Madni

Hello!
Dear All,
I am using the following Squid version:

SQUID 3.0.STABLE19

Is it OK, or should I update it? Please advise.
  

Re: [squid-users] MemBuf issue in Squid 3.1.16 on Solaris

2011-11-13 Thread Amos Jeffries

On 14/11/2011 5:25 p.m., Justin Lawler wrote:

Hi,

We're just reviewing all the patches that went into 3.1.16, and we came across 
an old issue:

http://bugs.squid-cache.org/show_bug.cgi?id=2910

It occurs when running Squid & an ICAP server on the same Solaris box.

A patch has already been provided for this issue. We're wondering whether this patch is planned for inclusion in any official release of Squid in the future?



Thanks for the reminder. It seems to have been held up waiting for a better 
fix, although I am not sure what could be better. It has now been 
applied and should be in the next release.


Amos


Re: [squid-users] squid compilation

2011-11-13 Thread Benjamin

 On 11/14/2011 03:14 AM, Amos Jeffries wrote:

On Sun, 13 Nov 2011 18:33:12, Andrew Beverley wrote:

On Sun, 2011-11-13 at 23:12 +0530, Benjamin wrote:

On 11/13/2011 10:51 PM, Andrew Beverley wrote:
> On Sun, 2011-11-13 at 22:29 +0530, Benjamin wrote:
>> Hi,
>>
>> I want to use Squid on CentOS 6, so I wonder: do I compile the latest
>> stable version from source, or should I go with the RPM package from
>> my distro?
> You're normally best using the one provided with your distro, unless
> there are specific features you need from a later version.
>
>> Actually my concern is: are an RPM install and a compile from source
>> the same when compared on Squid features?
> Try the one from the distro first and see if it meets your requirements.
>
>> And as per my purpose with Squid, we want to use it only for high
>> cache performance; do I need to take care of any specific Squid
>> feature for that?
> I don't know, but others will be able to advise, or you can check the
> list archives.
>
>> And please point me to any good document or link where I can get a
>> good understanding of each Squid feature offered in the ./configure
>> step.
> ./configure --help
>
> Andy
>
>
Hi,

Thanks for your kind response. If I do not want any authentication module 
from Squid, but when I install Squid from the distro RPM that 
authentication module is enabled by default, does it impact performance 
in that case?


Well if you've not configured any authentication in squid.conf then I
imagine that the impact will be minimal.


Actually we need Squid only as a forward proxy, for the caching gain.



I'm sure you could tune Squid for your particular use, but I'm afraid I
don't know exactly how much difference that will make.


From others' reports over the last year, it seems possible to gain 
5%-10% over the default settings with site-specific tuning. 
The more policies and control logic you add to the config, the slower 
Squid goes. So most of the optimization efforts I and others discuss 
here are aimed at getting complex configs to work without degrading 
the default performance.


The #1 bottleneck in caching is disk I/O. Performance gains in this 
area are usually down to the type, size and write speed of the 
hardware, since Squid constantly does a lot of writes randomly across 
the disk. Benchmarks and measurements by hardware people are the areas 
to look at there.
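
As a concrete illustration (values are placeholders, not from this 
thread): the aufs cache_dir type moves disk reads and writes into helper 
threads, so random writes do not block the main Squid process.

    # squid.conf sketch: cache_dir <type> <directory> <size-MB> <L1> <L2>
    # aufs uses threaded disk I/O; directory and sizes here are examples only.
    cache_dir aufs /var/spool/squid 20000 16 256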


The #2 bottleneck is ACL processing and configuration complexity. 
Naturally, the more config you have, the slower Squid appears, as it 
processes each request through all that logic.


The other bottleneck points are all down to the code optimizations and 
the local HTTP traffic behaviours. Test, measure, tune are the bywords 
there.


Amos


Hi Amos,

Thanks for your great response. You always guide us in resolving our 
queries, and your knowledge sharing is a great help. So for better 
performance with Squid, we need good hardware.



Thanks,
Benjamin





[squid-users] MemBuf issue in Squid 3.1.16 on Solaris

2011-11-13 Thread Justin Lawler
Hi,

We're just reviewing all the patches that went into 3.1.16, and we came across 
an old issue:

http://bugs.squid-cache.org/show_bug.cgi?id=2910

It occurs when running Squid & an ICAP server on the same Solaris box.

A patch has already been provided for this issue. We're wondering whether this patch 
is planned for inclusion in any official release of Squid in the future?

Thanks and regards,
Justin





Re: [squid-users] losing ntlm connection

2011-11-13 Thread ftiaronsem

On 11/13/2011 04:03 AM, Amos Jeffries wrote:

On 11/11/2011 8:04 p.m., ftiaronsem wrote:

On 11/10/2011 03:27 AM, Amos Jeffries wrote:

On Wed, 09 Nov 2011 23:54:12 +0100, ftiaronsem wrote:

Hello all,

This one gives me a headache. I joined my Ubuntu 10.04 LTS server
running squid 2.7.STABLE7 and samba 3.4.7 to my Windows 2008 domain
without problems.

Squid also started fine using

/usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
/usr/lib/squid/wbinfo_group.pl

for authentication. However, after a while, some users get DENIED
messages. A few hours after that, Squid crashes completely, complaining:

2011/11/08 15:22:56| WARNING: up to 50 pending requests queued
2011/11/08 15:22:56| Consider increasing the number of
ntlmauthenticator processes to at least 60 in your config file.
FATAL: Too many queued ntlmauthenticator requests (51 on 10)



Read that message again.

Your Squid is dying if it has to handle 51 or more parallel TCP
connections opened during the time period taken to do the NTLM
handshake.

One client browser will open at least 8 connections for most popular
websites.



Winbind logs show up a lot of stuff like

[2011/11/08 15:19:06, 0]
winbindd/winbindd_dual.c:186(async_request_timeout_handler)
async_request_timeout_handler: child pid 25224 is not responding.
Closing connection to it.
[2011/11/08 15:19:06, 1] winbindd/winbindd_util.c:303(trustdom_recv)
Could not receive trustdoms

So I am tempted to conclude that this is a samba/winbind problem.
However, I often get similar errors in the winbind logs at other
sites, which run smoothly.


It does seem to be a problem in winbind, regardless of whether it gets
bad enough to break Squid or not.

These errors will be making that handshake time period longer, with that
50 limit getting closer every second of it.



Do you have similar warnings in your error logs? Judging by your
experience, what would you think is the most likely fix? Upgrading
samba?


Look up what those winbind errors are about first. It may be that config
changes or other software upgrades are needed as well.

This might be it:
http://lists.samba.org/archive/samba-technical/2008-June/059504.html

Amos


Thanks for your answer.

I will have a go at resolving these winbind errors; hopefully I'll
find something on the net.

Hitting the ntlmauthenticator limit seems not that likely, since I got
the first warning only two minutes before:


I was not guessing. That log WARNING only occurs when the helper load
capacity is exceeded; the FATAL only occurs when the queue limit is hit
during a period of overload.

Traffic spikes come in all sizes and durations. 2 minutes is not a very
long one.



2011/11/08 15:20:38| WARNING: All ntlmauthenticator processes are busy.
2011/11/08 15:20:38| WARNING: up to 10 pending requests queued


overload. (capacity + 10 connections)


2011/11/08 15:21:10| WARNING: All ntlmauthenticator processes are busy.
2011/11/08 15:21:10| WARNING: up to 26 pending requests queued
2011/11/08 15:21:10| Consider increasing the number of
ntlmauthenticator processes to at least 36 in your config file.


more overload. (capacity + 16 connections + earlier queue of 10)

16>10. The traffic load is increasing even further past the rate where
overload was hit.


2011/11/08 15:21:41| WARNING: All ntlmauthenticator processes are busy.
2011/11/08 15:21:41| WARNING: up to 38 pending requests queued
2011/11/08 15:21:41| Consider increasing the number of
ntlmauthenticator processes to at least 48 in your config file.


even more overload. (capacity + 12 connections + earlier queue of 26)

12<16. Traffic is starting to reduce, but is still well above the
overload rate.


2011/11/08 15:22:12| WARNING: All ntlmauthenticator processes are busy.
2011/11/08 15:22:12| WARNING: up to 46 pending requests queued
2011/11/08 15:22:12| Consider increasing the number of
ntlmauthenticator processes to at least 56 in your config file.


even more overload. (capacity + 8 connections + earlier queue of 38)

8<12. Traffic is reducing further, but slowly, and is still well above
the overload rate. The queue is getting very long...


2011/11/08 15:22:56| WARNING: All ntlmauthenticator processes are busy.
2011/11/08 15:22:56| WARNING: up to 50 pending requests queued
2011/11/08 15:22:56| Consider increasing the number of
ntlmauthenticator processes to at least 60 in your config file.


Queue limit exceeded. Crash.

4<8. The traffic rate is still in overload, but has almost dropped back
below the point where the helpers can start to catch up on the backlog.
Given another minute the queue might have cleared again; too bad the
absolute maximum limit was hit first.

The solution is to raise the number of helper children. Each helper
child contributes some req/sec amount to the "capacity" number.
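
A minimal squid.conf sketch of that fix for the setup above (the helper 
command is the one quoted earlier in this thread; the count of 60 follows 
the log's own suggestion):

    auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
    # Raise the helper pool so traffic spikes queue against more helpers;
    # 60 matches the "at least 60" recommendation in the FATAL message.
    auth_param ntlm children 60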


Amos


Thank you very much for the detailed analysis and explanation of the 
log file entries. Especially the time development of the problem and 
the behaviour of the ntlmauthenticator processes were not that clear to 
me before.


I have 

Re: [squid-users] Problem with HTTP Headers

2011-11-13 Thread Amos Jeffries

On Sun, 13 Nov 2011 19:14:48 +0200, Ghassan Gharabli wrote:

Dear Amos,

After allowing the "HEAD" method in the Squid config, I deleted 
www.facebook.com from the cache and then tried executing:

squidclient -m head http://www.facebook.com

Results :

HTTP/1.0 302 Moved Temporarily
Location: http://www.facebook.com/common/browser.php
P3P: CP="Facebook does not have a P3P policy. Learn why here:
http://fb.me/p3p"
Set-Cookie: datr=hfW_TtrAQmi_2SxwAUY4EjPH; expires=Tue, 12-Nov-2013
16:51:17 GMT
; path=/; domain=.facebook.com; httponly
Content-Type: text/html; charset=utf-8
X-FB-Server: 10.53.10.59
X-Cnection: close
Content-Length: 0
Date: Sun, 13 Nov 2011 16:51:17 GMT
X-Cache: MISS from Peer6.skydsl.net
X-Cache-Lookup: MISS from Peer6.skydsl.net:3128
Connection: close

I am not seeing any Pragma, Cache-Control, or Expires headers! But
redbot shows the correct info there.


Ah, your squidclient is not sending a User-Agent header. You will need 
to add -H "User-Agent: foo".
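
For example (the header value is an arbitrary placeholder; squidclient's 
-H option takes \n-terminated header lines):

    squidclient -m HEAD -H 'User-Agent: Mozilla/5.0\n' http://www.facebook.com/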




BTW, I am also using store_url, but I'm sure nothing is bad there. I am
only rewriting dynamic URLs for picture and video extensions, so I have
only one thing left to try, which I am reluctant to do:

acl facebookPages urlpath_regex -i /(\?.*|$)

First: does this rule affect store_url?


This is just a pattern definition. It only has an effect where and when 
the ACL is used. The config I gave you only used it in the "cache deny" 
access line.

That said, "cache deny" prevents things from going to the cache, which 
is where storeurl* happens.
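
For reference, the full config in question (quoted from the earlier 
message, which also appears at the end of this digest) was:

    acl facebook dstdomain .facebook.com
    acl facebookPages urlpath_regex -i \.([jm]?htm[l]?|php)(\?.*|$)
    acl facebookPages urlpath_regex -i /(\?.*|$)
    cache deny facebook facebookPages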





For example, when we have a URL like

http://www.example.com/1.gif?v=1244&y=n

I can see that urlpath_regex requires the full URL, which means this
rule also matches:

http://www.example.com/stat?date=11


The pattern begins with '/', and the "cache" access line I gave you 
included another ACL, which tested that the domain name was *.facebook.com.


It will match things like:
  http://www.facebook.com/?v=1244&y=n

but *not* match things like:
  http://www.example.com/1.gif?v=1244&y=n



I will ignore this rule for now and focus on the Facebook problem,
since more than 60% of our traffic is Facebook.



As you wish. I added that line because I noticed the FB front page you 
wanted left uncached has a URL path starting with the two characters 
"/?" instead of ending in .html or .php.


Amos



Re: [squid-users] Usage / Log analysis specifically for a user / website

2011-11-13 Thread Amos Jeffries

On Sun, 13 Nov 2011 23:09:23 +0100, Markus Thüs wrote:

Hi,

here’s the case: I’ve implemented a Squid proxy at a school which requires
the users to authenticate against an LDAP server. That means when a user
enters a web address in the browser, the proxy requires the user to
authenticate himself, while Squid logs everything in the background.

We’re gathering ~550 MB of access.log every day.

Fine so far… Now, theoretically, let’s say a note from the local police
station arrives saying that some user watched something illegal via the
school’s DSL line; the data protection officer must be able to tell which
of the users did that.

How can I give that kind of functionality to that officer!? In that case
he needs to analyze all logs of that year (365 files) on a per-user and
per-page/domain basis: an analysis of which pages a user visited, when,
how often, and from which place, AND a search for which users viewed a
certain page/domain.


You are going beyond log analysis there (pretty graphs) and into data 
mining.


The old, popular sarg and calamaris tools will give you graphs with a 
bit of drill-down into those categories, but no searching AFAIK.


The various database log tools and analysers are probably where you 
want to look. Several are gaining popularity now that daemon loggers 
can be plugged into Squid to pipe the log entries to a DB.
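
For illustration only, since this proxy runs Squid 2.7: on Squid 3.2+ 
the bundled log_db_daemon helper can pipe each log entry into a database, 
roughly like this (the helper path, host, database, table, and credentials 
are all placeholders):

    # squid.conf sketch (Squid 3.2+); connection details are placeholders.
    logfile_daemon /usr/lib/squid3/log_db_daemon
    access_log daemon:/127.0.0.1/squid_log/access_log/squiduser/password squid

Each request then becomes a database row, which can be searched per user 
and per domain with ordinary SQL.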


Amos



Re: [squid-users] missing username in squid log

2011-11-13 Thread Amos Jeffries

On Sun, 13 Nov 2011 12:35:13 +0100, Giovanni Rosini wrote:

Perhaps I wasn't clear.
I know how SQL queries work; I'm able to write a SELECT query, so that
is not the question.
What I mean is that, looking at the actual access.log file, it seems
Squid doesn't log enough details to filter the RADACCT table and extract
the right record.

The logged details are not the complete set of data available to Squid. 
They are a small subset which has been found useful for logging and for 
log-analyser graphs for management people.

What I have been talking about is the external_acl_type helper, which 
currently has an almost completely different set of format parameters:

  http://www.squid-cache.org/Doc/config/external_acl_type/



I think the only way is to have both the NAT IP and the local IP
somewhere in the Squid files, as in the RADACCT records.
For the duration of each session, NAT IP + local IP are uniquely
associated with one username.
By comparing date and time I could extract a unique record.


External ACLs have:

  * %SRC %SRCPORT for the client IP:port (before the local Squid box 
SNAT, if any; after remote box SNAT).

  * %MYADDR %MYPORT for the Squid local IP:port (before local Squid box 
DNAT, if any; after remote box DNAT).

   ** With iptables REDIRECT, %MYADDR is unreliable.

  * The time 'now' can be identified by the helper without being passed 
in from Squid.
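
Putting those format codes together, a sketch of the wiring (the helper 
script name and path are hypothetical; the helper must answer each lookup 
line with "OK", optionally "OK user=<name>", or "ERR"):

    # squid.conf sketch: pass client and local IP:port to an external
    # helper that queries RADACCT for the matching session's username.
    external_acl_type radacct ttl=60 %SRC %SRCPORT %MYADDR %MYPORT /usr/local/bin/radacct_lookup.pl
    acl radius_user external radacct
    http_access allow radius_user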


If you bump up to 3.2.0.8 you can also get the MAC / EUI addresses for 
more reliable source tracing. But in your case, with remote boxes doing 
the relaying, this will only identify which of those boxes it came 
through (subnet separation?).


Amos



Giovanni

P.S.: I hope I responded to the right address this time, and thanks
for the previous answers.


On 13/11/2011 4:33, Amos Jeffries wrote:

On 13/11/2011 2:55 p.m., Giovanni Rosini wrote:

I'm not sure I understand.
How can the external script find the right username?
In the RADIUS DB I have the RADCHECK table containing all registered 
users, and the RADACCT table where you find a record for every 
session.


Take that sentence above and replace "where you find" with "where the 
script finds".


Each record in RADACCT shows a lot of data (username, NAT IP, local 
IP, time of start and end of each session, etc.), but how can Squid 
match a page request against the database entries to retrieve the username?


By looking up the details Squid has and finding the matching record. 
Please find a beginners' tutorial on how database queries work; it 
should cover how to find a database record by querying with a few 
of the field details. The db_auth script I mentioned earlier does 
database queries. You adjust the script (either the code or the 
command parameters passed to it in squid.conf) to create a query for 
the RADIUS database.


Amos
PS. Please consider responding to the mailing list address; I only do 
private answers for paying customers.






[squid-users] Usage / Log analysis specifically for a user / website

2011-11-13 Thread Markus Thüs
Hi,

here’s the case: I’ve implemented a Squid proxy at a school which requires
the users to authenticate against an LDAP server. That means when a user
enters a web address in the browser, the proxy requires the user to
authenticate himself, while Squid logs everything in the background.

We’re gathering ~550 MB of access.log every day.

Fine so far… Now, theoretically, let’s say a note from the local police
station arrives saying that some user watched something illegal via the
school’s DSL line; the data protection officer must be able to tell which
of the users did that.

How can I give that kind of functionality to that officer!? In that case
he needs to analyze all logs of that year (365 files) on a per-user and
per-page/domain basis: an analysis of which pages a user visited, when,
how often, and from which place, AND a search for which users viewed a
certain page/domain.

The Proxy itself is running Debian 6.0.3, Squid 2.7 and Webmin.


Any ideas? How could I do that via a web interface?


Thanks in advance,

Markus



Re: [squid-users] squid compilation

2011-11-13 Thread Amos Jeffries

On Sun, 13 Nov 2011 18:33:12, Andrew Beverley wrote:

On Sun, 2011-11-13 at 23:12 +0530, Benjamin wrote:

On 11/13/2011 10:51 PM, Andrew Beverley wrote:
> On Sun, 2011-11-13 at 22:29 +0530, Benjamin wrote:
>> Hi,
>>
>> I want to use Squid on CentOS 6, so I wonder: do I compile the latest
>> stable version from source, or should I go with the RPM package from
>> my distro?
> You're normally best using the one provided with your distro, unless
> there are specific features you need from a later version.
>
>> Actually my concern is: are an RPM install and a compile from source
>> the same when compared on Squid features?
> Try the one from the distro first and see if it meets your requirements.
>
>> And as per my purpose with Squid, we want to use it only for high
>> cache performance; do I need to take care of any specific Squid
>> feature for that?
> I don't know, but others will be able to advise, or you can check the
> list archives.
>
>> And please point me to any good document or link where I can get a
>> good understanding of each Squid feature offered in the ./configure
>> step.
> ./configure --help
>
> Andy
>
>
Hi,

Thanks for your kind response. If I do not want any authentication 
module from Squid, but when I install Squid from the distro RPM that 
authentication module is enabled by default, does it impact performance 
in that case?


Well if you've not configured any authentication in squid.conf then I
imagine that the impact will be minimal.

Actually we need Squid only as a forward proxy, for the caching gain.

I'm sure you could tune Squid for your particular use, but I'm afraid I
don't know exactly how much difference that will make.


From others' reports over the last year, it seems possible to gain 
5%-10% over the default settings with site-specific tuning. The more 
policies and control logic you add to the config, the slower Squid 
goes. So most of the optimization efforts I and others discuss here 
are aimed at getting complex configs to work without degrading the 
default performance.


The #1 bottleneck in caching is disk I/O. Performance gains in this 
area are usually down to the type, size and write speed of the hardware, 
since Squid constantly does a lot of writes randomly across the disk. 
Benchmarks and measurements by hardware people are the areas to look at 
there.


The #2 bottleneck is ACL processing and configuration complexity. 
Naturally, the more config you have, the slower Squid appears, as it 
processes each request through all that logic.


The other bottleneck points are all down to the code optimizations and 
the local HTTP traffic behaviours. Test, measure, tune are the bywords 
there.


Amos


Re: [squid-users] squid compilation

2011-11-13 Thread Andrew Beverley
On Sun, 2011-11-13 at 23:12 +0530, Benjamin wrote:
> On 11/13/2011 10:51 PM, Andrew Beverley wrote:
> > On Sun, 2011-11-13 at 22:29 +0530, Benjamin wrote:
> >> Hi,
> >>
> >> I want to use Squid on CentOS 6, so I wonder: do I compile the latest
> >> stable version from source, or should I go with the RPM package from
> >> my distro?
> > You're normally best using the one provided with your distro, unless
> > there are specific features you need from a later version.
> >
> >> Actually my concern is: are an RPM install and a compile from source
> >> the same when compared on Squid features?
> > Try the one from the distro first and see if it meets your requirements.
> >
> >> And as per my purpose with Squid, we want to use it only for high cache
> >> performance; do I need to take care of any specific Squid feature for that?
> > I don't know, but others will be able to advise, or you can check the
> > list archives.
> >
> >> And please point me to any good document or link where I can get a good
> >> understanding of each Squid feature offered in the ./configure step.
> > ./configure --help
> >
> > Andy
> >
> >
> Hi,
> 
> Thanks for your kind response. If I do not want any authentication module 
> from Squid, but when I install Squid from the distro RPM that 
> authentication module is enabled by default, does it impact performance 
> in that case?

Well if you've not configured any authentication in squid.conf then I
imagine that the impact will be minimal.

> Actually we need Squid only as a forward proxy, for the caching gain.
> 

I'm sure you could tune Squid for your particular use, but I'm afraid I
don't know exactly how much difference that will make.

Andy





Re: [squid-users] squid compilation

2011-11-13 Thread Andrew Beverley
On Sun, 2011-11-13 at 22:29 +0530, Benjamin wrote:
> Hi,
> 
> I want to use Squid on CentOS 6, so I wonder: do I compile the latest 
> stable version from source, or should I go with the RPM package from my 
> distro?

You're normally best using the one provided with your distro, unless
there are specific features you need from a later version.

> Actually my concern is: are an RPM install and a compile from source the 
> same when compared on Squid features?

Try the one from the distro first and see if it meets your requirements.

> And as per my purpose with Squid, we want to use it only for high cache 
> performance; do I need to take care of any specific Squid feature for that?

I don't know, but others will be able to advise, or you can check the
list archives.

> And please point me to any good document or link where I can get a good 
> understanding of each Squid feature offered in the ./configure step.

./configure --help
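
For context, a minimal source build might look like this (the prefix is 
illustrative; pick feature options from the --help output above):

    # build and install Squid from an unpacked source tree
    ./configure --prefix=/usr/local/squid
    make
    sudo make install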

Andy




Re: [squid-users] Problem with HTTP Headers

2011-11-13 Thread Ghassan Gharabli
Dear Amos,

After allowing the "HEAD" method in the Squid config, I deleted 
www.facebook.com from the cache and then tried executing:

squidclient -m head http://www.facebook.com

Results :

HTTP/1.0 302 Moved Temporarily
Location: http://www.facebook.com/common/browser.php
P3P: CP="Facebook does not have a P3P policy. Learn why here: http://fb.me/p3p"
Set-Cookie: datr=hfW_TtrAQmi_2SxwAUY4EjPH; expires=Tue, 12-Nov-2013 16:51:17 GMT
; path=/; domain=.facebook.com; httponly
Content-Type: text/html; charset=utf-8
X-FB-Server: 10.53.10.59
X-Cnection: close
Content-Length: 0
Date: Sun, 13 Nov 2011 16:51:17 GMT
X-Cache: MISS from Peer6.skydsl.net
X-Cache-Lookup: MISS from Peer6.skydsl.net:3128
Connection: close

I am not seeing any Pragma, Cache-Control, or Expires headers! But
redbot shows the correct info there.

BTW, I am also using store_url, but I'm sure nothing is bad there. I am
only rewriting dynamic URLs for picture and video extensions, so I have
only one thing left to try, which I am reluctant to do:

acl facebookPages urlpath_regex -i /(\?.*|$)

First: does this rule affect store_url?

For example, when we have a URL like

http://www.example.com/1.gif?v=1244&y=n

I can see that urlpath_regex requires the full URL, which means this rule also matches:

http://www.example.com/stat?date=11

I will ignore this rule for now and focus on the Facebook problem,
since more than 60% of our traffic is Facebook.


Let me test denying only PHP and HTML for a day, to see whether the
Facebook HTTP header is still being saved in the cache.


Ghassan


On Sun, Nov 13, 2011 at 2:26 AM, Amos Jeffries wrote:
> On 13/11/2011 12:15 p.m., Ghassan Gharabli wrote:
>>
>> Hello Amos,
>>
>> I understand what you wrote to me, but I really do not have any rule
>> that tells Squid to cache the .facebook.com header.
>
> According to http://redbot.org/?uri=http%3A%2F%2Fwww.facebook.com%2F
>
> FB front page has Expires, no-store, private, and must-revalidate. Squid
> should not be caching these at all unless somebody has maliciously erased
> the control headers, or your Squid has ignore-* and override-*
> refresh_patterns for them (I did not see any in your config, which is good).
>
> Can you use:
>   squidclient -m HEAD http://www.facebook.com/
>
> to see if those headers you get match the ones apparently being sent by the
> FB server.
>
>>
>> I only used refresh_pattern to match pictures, videos & certain
>> extensions, using ignore-must-revalidate, ignore-no-store,
>> ignore-no-cache, store-stale, etc.
>>
>> And how come this rule doesn't work?
>>
>> refresh_pattern -i \.(htm|html|jhtml|mhtml|php)(\?.*|$)               0 0% 0
>>
>> This rule tells Squid not to cache these extensions, whether the URL is
>> static or dynamic.
>
> The refresh_pattern algorithm only gets used *if* there are no Expires or
> Cache-Control headers stating specific information.
>
> Such as "private" or "no-store" or "Expires: Sat, 01 Jan 2000 00:00:00 GMT".
>
>
>>
>> As I noticed, every time you open a website, for example www.mtv.com.lb,
>> and then open it again the next day, you get the same (yesterday's)
>> news. That confused me and led me to think that maybe Squid ignores all
>> headers related to a website if you cache, for example, its pictures and
>> multimedia objects. That's why I was asking which rule might be
>> affecting websites.
>>
>> I can't spend my time adding websites that were being cached to a
>> "cache deny" list, so I thought of just removing the rule that caused
>> Squid to cache websites.
>>
>> How can I stop www.facebook.com from being cached, while still caching
>> pictures, FLV videos, CSS, and JS, but not the header of the main page
>> (HTML/PHP)?
>
> With this config:
>   acl facebook dstdomain .facebook.com
>   acl facebookPages urlpath_regex -i \.([jm]?htm[l]?|php)(\?.*|$)
>   acl facebookPages urlpath_regex -i /(\?.*|$)
>   cache deny facebook facebookPages
>
> and remove all the refresh_patterns you had about FB content.
>
> This will cause any FB HTML objects which *might* have been cacheable to
> be skipped by your Squid cache.
>
> Note that FLV videos in FB often come directly from YouTube, so are not
> easily cached. The JS and CSS will retain the static/dynamic properties
> they are assigned by FB. You have generic refresh_pattern rules later on
> in your config which extend their normal storage times a lot.
>
>>
>> refresh_pattern ^http:\/\/www\.facebook\.com$             0 0% 0
>>
>> I tried to use $ after .com because I only wanted to avoid caching the
>> main page of Facebook, while still caching pictures and videos on
>> Facebook and other websites.
>
> And I said the main page is not "http://www.facebook.com" but
> "http://www.facebook.com/",
>
> so you should have added "/$" instead of just "$".
>
> BUT, using "cache deny" as above this becomes not relevant any more.
>
> Amos
>


[squid-users] squid compilation

2011-11-13 Thread Benjamin

 Hi,

I want to use Squid on CentOS 6, so I wonder: do I compile the latest 
stable version from source, or should I go with the RPM package from 
my distro?


Actually my concern is: are an RPM install and a compile from source 
the same when compared on Squid features?


And as per my purpose with Squid, we want to use it only for high cache 
performance; do I need to take care of any specific Squid feature for that?


And please point me to any good document or link where I can get a good 
understanding of each Squid feature offered in the ./configure step.



Warm Regards,
Bejno


Re: [squid-users] missing username in squid log

2011-11-13 Thread Giovanni Rosini

Perhaps I wasn't clear.
I know how SQL queries work; I'm able to write a SELECT query, 
so that is not the question.
What I mean is that, looking at the actual access.log file, 
it seems Squid doesn't log enough details to filter the RADACCT 
table and extract the right record.
I think the only way is to have both the NAT IP and the local IP 
somewhere in the Squid files, as in the RADACCT records.
For the duration of each session, NAT IP + local IP are 
uniquely associated with one username.

By comparing date and time I could extract a unique record.

Giovanni

P.S.: I hope I responded to the right address this time, and 
thanks for the previous answers.



On 13/11/2011 4:33, Amos Jeffries wrote:

On 13/11/2011 2:55 p.m., Giovanni Rosini wrote:

I'm not sure I understand.
How can the external script find the right username?
In the RADIUS DB I have the RADCHECK table containing all 
registered users, and the RADACCT table where you find a 
record for every session.


Take that sentence above and replace "where you find" with 
"where the script finds".


Each record in RADACCT shows a lot of data (username, NAT 
IP, local IP, time of start and end of each session, 
etc.), but how can Squid match a page request against the 
database entries to retrieve the username?


By looking up the details Squid has and finding the 
matching record. Please find a beginners' tutorial on how 
database queries work; it should cover how to find a 
database record by querying with a few of the field 
details. The db_auth script I mentioned earlier does 
database queries. You adjust the script (either the code 
or the command parameters passed to it in squid.conf) to 
create a query for the RADIUS database.


Amos
PS. and please consider responding to the mailing list 
address. I only do private answers for paid customers.




Re: [squid-users] Squid Reverse Proxy Error - Connection to ::1 failed. Help me...

2011-11-13 Thread Amos Jeffries

On 13/11/2011 10:13 p.m., Bin Zhou wrote:

Thank you very much for your help. I made the following changes and now
get some new errors.

1. I changed "http_port 3128 transparent vhost vport" to "http_port
80 accel vhost vport" and removed the line "always_direct allow all". The
error message was:

ERROR

The requested URL could not be retrieved

The following error was encountered while trying to retrieve the URL:
http://ark09.maya.com/hello.py


Note that "ark09" versus:

acl my_site dstdomain ark08.maya.com
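
A sketch of the likely fix, assuming ark09.maya.com is also meant to be 
served by this accelerator (the second hostname and the http_access line 
are assumptions, not from the thread):

    # extend the ACL so it matches the hostname actually being requested
    acl my_site dstdomain ark08.maya.com ark09.maya.com
    http_access allow my_site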


Amos