Re: [squid-users] ntlm group acl's

2009-07-02 Thread Daniel van Soest
Hi B,

it's quite easy to use AD group based ACLs. First of all, check that you get all
groups right from your AD. Run the helper by hand, type "DOMAIN+user group", and
check the reply:

$ /usr/sbin/wbinfo_group.pl
M180D01+y2237 Internet
OK

If you get OK, proceed; otherwise you have to check your Samba settings.

In squid.conf you have to add the following line:

external_acl_type AD_group ttl=3600 children=5 %LOGIN 
/usr/sbin/wbinfo_group.pl

Now you can define AD groups in squid:

e.g.
acl Administrator external AD_group domain-administrator
 # Def. Administrator as AD group domain-administrator (the type name must match the external_acl_type name above)
acl AuthUsers proxy_auth REQUIRED

From now on you can define ACL as described by Amos Jeffries.
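Putting the pieces together, a minimal squid.conf fragment might look like this (a sketch; the group name internet-users is illustrative, and the type name in each acl line must match the one given to external_acl_type):

```
# group lookup helper via winbind
external_acl_type AD_group ttl=3600 children=5 %LOGIN /usr/sbin/wbinfo_group.pl

# force authentication, then match on AD group membership
acl AuthUsers proxy_auth REQUIRED
acl Administrator external AD_group domain-administrator
acl InternetUsers external AD_group internet-users    # illustrative group name

http_access allow Administrator
http_access allow InternetUsers
http_access deny all
```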

Good luck,

 Daniel


On Thursday, 2 July 2009 07:32:36, Beavis wrote:
 is it possible for squid to have the option where it can be tailored
 to apply ACL's based on groups on AD?

 any help would be awesomely appreciated.

 regards,
 -b


[squid-users] blocking Vary header

2009-07-02 Thread Chudy Fernandez

What are the disadvantages if we block the Vary header and pass it to the frontend Squid?

www - squid2 - squid1 - clients

squid2 (Vary header blocked, null cache_dir, delay pool)
squid1 (storeurl, always_direct all html)

I no longer have issues (Bug #2678) on storeurl+vary with that setup, plus quick 
abort is controllable by the delay pool.
But I'm wondering what the disadvantages would be when blocking the Vary header.


  


AW: [squid-users] [squid 2.7 stable 3][ubuntu 9.04][shared usage with two proxies]

2009-07-02 Thread Volker Jahns
Hi Amos and Chris

 First question for general: does it work?
   
 
 So, for example, your P1 proxy has an IP address of 10.0.0.5 and your P2 
 proxy has an IP address of 10.10.10.5.  Your client (10.20.20.200) makes 
 a request for a web object from 10.0.0.5 and (since it has already made 
 a request and knows that authentication is required) sends its 
 authentication credentials.  10.0.0.5 sends the request to 10.10.10.5.  
 There is no way for 10.10.10.5 to send a reply to 10.20.20.200, as there 
 is no TCP connection to send the reply on.
 
 Second question if it works: how do I configure this?

OK. So in fact, it doesn't work. 

Thanks
Volker



[squid-users] how squid uses cache_dirs ?

2009-07-02 Thread Travel Factory S.r.l.

On a squid 2.7 Stable 6, 2 CPU 3.6 GHz, 10 GB RAM, 1 RAID1 300 GB disk, 
partition /u02 is ext3 mounted with noatime,nodiratime, I have these two 
cache_dirs - the setup for COSS I took from a previous message:

cache_dir aufs /u02/squid 15 256 256 min-size=4288
cache_dir COSS /dev/cciss/c0d0p5  38000 max-stripe-waste=32768 block-size=4096 
maxfullbufs=10 max-size=524289

During normal use, I see that almost all objects are sent to the COSS storage, 
and only big ones ( > 550 KB ) are sent to the aufs storage. Actually it stores 
flv, swf, jpg...

If I stop squid and restart it, during the time COSS reads its stripes I get 
SO_FAIL errors for objects smaller than 4288 bytes (which is correct), and I have 
objects larger than 4288 bytes correctly stored to aufs... 

From what I understand, these cache_dirs say:
- objects smaller than 4288 bytes ALWAYS go to COSS
- objects larger than 524289 bytes ALWAYS go to aufs
- objects between 4288 and 524289 bytes are split between aufs and COSS

But I'm probably wrong...

.. can you tell me what is wrong in my reasoning? Because 524289 is probably 
too big for a 38 GB COSS.

Thank you,
Francesco

PS: Mean Object Size:   30.65 KB



Re: [squid-users] X-Cache regex need some help

2009-07-02 Thread Amos Jeffries

Chudy Fernandez wrote:

header:
X-Cache HIT from Server

the following doesn't work
acl hit rep_header X-Cache HIT\ from\ Server
or
acl hit rep_header X-Cache HIT.from.Server
or even
acl hit rep_header X-Cache HIT.*Server
it only matches for
acl hit rep_header X-cache HIT

I'm using this for 
log_access deny hit


I'm wondering if "from Server" is some kind of code?



Two issues here.

The header is created by:
  HDR_X_CACHE, "%s from %s", (is_hit ? "HIT" : "MISS"), getMyHostname());

So the word "Server" in the above pattern will vary for each install of 
Squid.


I would expect patterns 2, 3 and 4 to work if the right hostname is entered.

Spaces are currently not permitted in the ACL lines of squid.conf, 
whether quoted or backslash-escaped.  To work around this there is a hack 
the Windows people use: put the data for the ACL into a file, one entry per 
line (which _can_ take whitespace), and name the file in the ACL definition.
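For this case, the file-based form might look like this (the file path is hypothetical; replace "Server" with your own visible hostname):

```
# squid.conf
acl hit rep_header X-Cache "/etc/squid/xcache_hit.txt"
log_access deny hit

# /etc/squid/xcache_hit.txt -- one regex per line, whitespace allowed:
#   HIT from Server
```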


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE16
  Current Beta Squid 3.1.0.9


Re: [squid-users] fetch page error

2009-07-02 Thread Amos Jeffries

Tech W. wrote:

Hello,

When I fetch this page via Squid (latest 3.0 version):

<html>
<body onload="document.submitFrm.submit()">
<form method="post" name="submitFrm" action="index2.shtml" 
target="_self">
</form>
</body>
</html>



What a seemingly pointless page. Just for my interest; what is it doing?




Most of the time I get it successfully, but sometimes it fails.
Squid responded with this header:

X-Squid-Error: ERR_CONNECT_FAIL 113


Why this happened? Thanks.


The browser requested the page and Squid was unable to open a TCP 
connection to the master Server.


Nothing unusual about that. Usually network or web server load related.

There should be other headers around that one which add more details; 
X-Cache: MISS is mentioned as one such.



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE16
  Current Beta Squid 3.1.0.9


Re: [squid-users] blocking Vary header

2009-07-02 Thread Amos Jeffries

Chudy Fernandez wrote:

What are the disadvantages if we block the Vary header and pass it to the frontend Squid?

www - squid2 - squid1 - clients

squid2 (Vary header blocked, null cache_dir, delay pool)
squid1 (storeurl, always_direct all html)

I no longer have issues (Bug #2678) on storeurl+vary with that setup, plus quick 
abort is controllable by the delay pool.
But I'm wondering what the disadvantages would be when blocking the Vary header.



The side effects depend on how the variants are produced.

The two most common to occur are:

Content display errors: Vary: Accept-Encoding is mostly used to send 
the correctly encoded or non-encoded object to browsers. None and 
gzip/deflate are widely supported, but the other encodings are browser specific.


With Vary: Accept-Language, you could end up trying to decipher a simple 
access-denied error written in Korean, simply because the last visitor to 
hit that page before you was from there.


Things get much worse when private information is coded in cookie or 
session Variants.



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE16
  Current Beta Squid 3.1.0.9


Re: [squid-users] how squid uses cache_dirs ?

2009-07-02 Thread Amos Jeffries

Travel Factory S.r.l. wrote:

On a squid 2.7 Stable 6, 2 CPU 3.6 GHz, 10 GB RAM, 1 RAID1 300 GB disk, 
partition /u02 is ext3 mounted with noatime,nodiratime, I have these two 
cache_dirs - the setup for COSS I took from a previous message:

cache_dir aufs /u02/squid 15 256 256 min-size=4288
cache_dir COSS /dev/cciss/c0d0p5  38000 max-stripe-waste=32768 block-size=4096 
maxfullbufs=10 max-size=524289

During normal use, I see that almost all objects are sent to the COSS storage, and 
only big ones ( > 550 KB ) are sent to the aufs storage. Actually it stores 
flv, swf, jpg...

If I stop squid and restart it, during the time COSS reads its stripes I get SO_FAIL errors for objects smaller than 4288 bytes (which is correct), and I have objects larger than 4288 bytes correctly stored to aufs... 


From what I understand, these cache_dirs say:
- objects smaller than 4288 bytes ALWAYS go to COSS
- objects larger than 524289 bytes ALWAYS go to aufs
- objects between 4288 and 524289 bytes are split between aufs and COSS

But I'm probably wrong...

.. can you tell me what is wrong in my reasoning? Because 524289 is probably 
too big for a 38 GB COSS.

Thank you,
Francesco

PS: Mean Object Size:   30.65 KB



Without looking I'd guess that Adrian and the others who tuned COSS and 
Squid-2.7 for high-speed reads/writes did something to bias the storage 
towards the most efficient cache_dir types available.


If I had to guess I'd say make the split size the same and put it around 
128KB (about 4 times the average object size).  IIRC COSS uses 1MB stripes, 
so 2 objects per stripe may or may not be good.
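As a sketch, a single 128 KB split point could look like this (illustrative values; the aufs size and device path are taken from the original post, and min-size/max-size are chosen so the two ranges do not overlap):

```
# objects of 128 KB (131072 bytes) and up go only to aufs,
# smaller ones only to COSS - no object is eligible for both
cache_dir aufs /u02/squid 150000 256 256 min-size=131072
cache_dir coss /dev/cciss/c0d0p5 38000 max-size=131071 \
        max-stripe-waste=32768 block-size=4096 maxfullbufs=10
```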


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE16
  Current Beta Squid 3.1.0.9


[squid-users] FW: Squid whitelist entry still blocked

2009-07-02 Thread JEREMY POSEY


I'm using Squid for the first time as a whitelist proxy. So far it's
working great. I have come across 1 address that is still blocked even
though it is in my whitelist. As best I can tell, I've configured
everything correctly, since the other 50+ sites I've added are
accessible.

The site I'm attempting to access is : http://www.alabamaacn.org/ this
is for a State data reporting. In my whitelist I have this entry: (
.alabamaacn.org )

I can't seem to figure out why it will not let me onto the site. I've
set the site as a bypass proxy for site in IE and I still cannot
access it.

Any help would be appreciated.

If you need files, let me know, but as I said every other site in my
whitelist works, so I'm thinking it's something on this particular site
causing the problem. If so, is there a workaround?


Thank you,

Jeremy Posey



[squid-users] age in the refresh_pattern

2009-07-02 Thread Stand H

Hi,


FRESH if expires > now, else STALE
STALE if age > max
FRESH if lm-factor < percent, else STALE
FRESH if age < min
else STALE

Can someone show how to calculate the age here? Is it age = now - Last-Modified, 
or age = the time since the object was stored in the cache?

Thanks.

ST


  


[squid-users] TCP_MISS/500

2009-07-02 Thread Ilo Lorusso
Hi,

when I try to access the following website http://www.flysaaspecials.com/
I get a 500 Internal Server Error from the browser
and the below logs from squid. Yet when I try this without going
through my squid proxy it works, and the proxy and my workstation go
through the same firewall.

1246540647.587  822 172.20.128.100 TCP_MISS/500 565 GET
http://www.flysaaspecials.com/ - DIRECT/8.12.42.47 text/html
1246540670.294  0 172.20.128.100 TCP_NEGATIVE_HIT/500 573 GET
http://www.flysaaspecials.com/ - NONE/- text/html
1246540675.024  1 172.20.128.100 TCP_NEGATIVE_HIT/500 573 GET
http://www.flysaaspecials.com/ - NONE/- text/html


what debugging section can I enable to get a clearer picture on what
is going on?

I have tried deleting the cache for that specific url
http://www.flysaaspecials.com/ and I've told squid not to cache this
domain
using the always_direct statement, without any luck..

any ideas?

Thanks

Regards


[squid-users] Squid Cluster

2009-07-02 Thread Serge Fonville
Hi,

I am in the process of setting up a two node cluster, with Squid on the
private side, and I am looking into how data is stored. Basically I am
trying to determine what data can be shared between instances of Squid.
Say a host opens a connection to a remote location, collects all kinds
of state information, and that squid goes down: will all that
information be kept? Can a download that is broken halfway (due to the
squid going down) still continue while happily using the other instance
(which has the same IP)? Are there any specific things I need to look
into?

I intend to set up a cluster consisting of the following
GlassFish
PostgreSQL
Nagios
Postfix
Squid
Heartbeat
Subversion
Named
DHCPd
TFTPd
Apache HTTPd
DRBD (dual primary with either GFS2 or OCFS2)
ldirectord or keepalived (all traffic is being balanced between the
two real servers and both nodes should be active)

It will probably run on either Gentoo or Centos x64

What are the important things in regard to squid that I need to pay
special attention to?
Things I can imagine:
IPaddress sharing
Storage (cache) sharing
Synchronization of configuration
Sharing of logon data to squid (I intend to use form based
authentication for squid)

I am aware of the fact that the majority of this setup is in no way
relevant to squid, but it may impact it (I cannot yet determine that).
I am especially interested in anything relevant to availability,
performance and load balancing.

Any help is greatly appreciated!!

Regards,

Serge Fonville


Re: [squid-users] R: [squid-users] cache size and structure

2009-07-02 Thread Matus UHLAR - fantomas
 No, it is not. It may be 1 Mbps OR 76 GB per week; it can't be anything
 per second "for a week". You may mean 1 Mbps during weekdays, 1 Mbps during
 the whole week, or 1 Mbps on average.

On 30.06.09 19:06, Riccardo Castellani wrote:
 I mean that if the weekly average of http traffic is 1 Mbps (monitored by 
 mrtg tools), all http traffic which goes to my squid is 76 GB in a 
 week.

Aha. So, you should have 76-152 GiB of disk cache on a few fast disks, if
you can afford that... along with a few gigs of RAM.

Don't put too much of the cache on one disk, it can slow you down. You can start
with e.g. 15GiB and increase over time, until you notice an increase in
response times. In such case, keep the L1 at 128.
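As a quick sanity check (mine, not from the original thread), the 1 Mbps to 76 GB/week conversion works out:

```python
# Verify the "1 Mbps sustained ~= 76 GB per week" figure from the thread.
SECONDS_PER_WEEK = 7 * 24 * 3600               # 604800 s
bits_per_week = 1_000_000 * SECONDS_PER_WEEK   # 1 Mbps sustained all week
gb_per_week = bits_per_week / 8 / 1e9          # convert bits -> decimal GB
print(round(gb_per_week, 1))                   # 75.6
```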

 infact mrtg gives me these information:
 maximum peak for day
 traffic average for day
 ...for week
 ...for month

 Do you understand my calculations?

 ok, are your child caches configured as neighbours to each other?

 both squids (B, C) have squid A configured as parent cache.
 I don't know what "configured as neighbours to each other" means. Do you  
 mean B as a neighbour to C ?!

Exactly, and vice versa. If they can connect to each other. And, if they are
not behind slow links, although it's probable they are (otherwise there
would be no need for 3 caches, right?)


-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Emacs is a complicated operating system without good text editor.


Re: [squid-users] FW: Squid whitelist entry still blocked

2009-07-02 Thread Guy Helmer

JEREMY POSEY wrote:

I'm using Squid for the first time as a whitelist proxy. So far it's
working great. I have come across 1 address that is still blocked even
though it is in my whitelist. As best I can tell, I've configured
everything correctly, since the other 50+ sites I've added are
accessible.

The site I'm attempting to access is : http://www.alabamaacn.org/ this
is for a State data reporting. In my whitelist I have this entry: (
.alabamaacn.org )

I can't seem to figure out why it will not let me onto the site. I've
set the site as a bypass proxy for site in IE and I still cannot
access it.

Any help would be appreciated.

If you need files, let me know, but as I said every other site in my
whitelist works, so I'm thinking it's something on this particular site
causing the problem. If so, is there a workaround?


Thank you,

Jeremy Posey
  


The site www.alabamaacn.org makes HTTPS requests to 138.26.103.35 to 
fill its frames -- you apparently would need to whitelist 138.26.103.35 
as well.
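In squid.conf terms, the extra entry could look something like this (a sketch; the file path and ACL names are illustrative):

```
# domain whitelist read from a file, one entry per line
acl whitelist dstdomain "/etc/squid/whitelist.txt"
# extra entry for the frame content fetched by raw IP over HTTPS
acl whitelist_ip dst 138.26.103.35
http_access allow whitelist
http_access allow whitelist_ip
http_access deny all
```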


Hope this helps,
Guy Helmer




Re: [squid-users] how squid uses cache_dirs ?

2009-07-02 Thread Chris Robertson

Amos Jeffries wrote:
Without looking I'd guess that Adrian and the others who tuned COSS 
and Squid-2.7 for high-speed reads/writes did something to bias the 
storage towards the most efficient cache_dir types available.


This seems to be a Squid default (from at least 2.5)...

http://www.squid-cache.org/Doc/config/store_dir_select_algorithm/

How load is determined, I can't really comment on.



If I had to guess I'd say make the split size the same and put it 
around 128KB (about 4 times the average object size).  IIRC COSS uses 
1MB stripes, so 2 objects per stripe may or may not be good.


Stripe size is adjustable at compile time (--with-coss-membuf-size).  It 
does default to 1MB.


I have not performed extensive tuning (haven't found the need), so I 
can't give any informed advice on proper object-per-stripe, or 
proper-maximum-size settings.  Personally, I use a 50KB cutoff (51200 
for both my min-size and max-size).




Amos


Chris



Re: [squid-users] TCP_MISS/500

2009-07-02 Thread Chris Robertson

Ilo Lorusso wrote:

Hi,

when I try to access the following website http://www.flysaaspecials.com/
I get a 500 Internal Server Error from the browser
and the below logs from squid. Yet when I try this without going
through my squid proxy it works, and the proxy and my workstation go
through the same firewall.

1246540647.587  822 172.20.128.100 TCP_MISS/500 565 GET
http://www.flysaaspecials.com/ - DIRECT/8.12.42.47 text/html
1246540670.294  0 172.20.128.100 TCP_NEGATIVE_HIT/500 573 GET
http://www.flysaaspecials.com/ - NONE/- text/html
1246540675.024  1 172.20.128.100 TCP_NEGATIVE_HIT/500 573 GET
http://www.flysaaspecials.com/ - NONE/- text/html


what debugging section can I enable to get a clearer picture on what
is going on?
  


Likely related to http://wiki.squid-cache.org/KnowledgeBase/BrokenWindowSize


I have tried deleting the cache for that specific url
http://www.flysaaspecials.com/  and I've told squid not to cache this
domain
using the always_direct statement


That doesn't do what you seem to think it does...

http://www.squid-cache.org/Doc/config/always_direct/

See...

http://www.squid-cache.org/Doc/config/cache/

...instead for controlling caching.


 without any luck..

any ideas?
  


The problem has nothing to do with whether the objects are cached or 
not, but instead the communication between the Squid server and the 
origin web server.



Thanks

Regards
  


Chris



Re: [squid-users] Squid Cluster

2009-07-02 Thread Chris Robertson

Serge Fonville wrote:

Hi,

I am in the process of setting up a two node cluster, with Squid on the
private side, and I am looking into how data is stored. Basically I am
trying to determine what data can be shared between instances of Squid.
Say a host opens a connection to a remote location, collects all kinds
of state information, and that squid goes down: will all that
information be kept? Can a download that is broken halfway (due to the
squid going down) still continue while happily using the other instance
(which has the same IP)?


The download-in-progress will be interrupted.  If a range request is 
made, asking for the rest of the object, the remaining Squid will 
honor that request, but the object fragment will not be cached 
(depending on your range_offset_limit).



Are there any specific things I need to look into?

I intend to set up a cluster consisting of the following
GlassFish
PostgreSQL
Nagios
Postfix
Squid
Heartbeat
Subversion
Named
DHCPd
TFTPd
Apache HTTPd
DRBD (dual primary with either GFS2 or OCFS2)
ldirectord or keepalived (all traffic is being balanced between the
two real servers and both nodes should be active)

It will probably run on either Gentoo or Centos x64

What are the important things in regard to squid that I need to pay
special attention to?
Things I can imagine:
IPaddress sharing
  


Works fine.


Storage (cache) sharing
  


Not supported.


Synchronization of configuration
  


To a point.  cache_peer and either visible_hostname or 
unique_hostname will be different.  But you can use an include to 
achieve that.
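For example (a sketch; the file names and hostnames are illustrative, and the include directive assumes a Squid release that supports it):

```
# /etc/squid/squid.conf on each node
include /etc/squid/common.conf    # replicated, identical on both nodes

# node-specific values stay in the local file
visible_hostname proxy1.example.com
cache_peer proxy2.example.com sibling 3128 3130
```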



Sharing of logon data to squid (I intend to use form based
authentication for squid)
  


This will have to be accomplished by your external_acl_type helper.


I am aware of the fact that the majority of this setup is in no way
relevant to squid, but it may impact it (I cannot yet determine that).
I am especially interested in anything relevant to availability,
performance and load balancing.

Any help is greatly appreciated!!

Regards,

Serge Fonville
  


Chris




[squid-users] Configuring Squid for use on a web browser - beginner question

2009-07-02 Thread John Martin
I'm having trouble getting Squid set up. I installed squid on a Linux
server using yum, and in squid.conf uncommented the http_port 3128
line. Then I added the lines below:

# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
acl my_pc 123.123.123.123
http_access allow my_pc

(where 123.123.123.123 is my home PC's IP address)

After editing squid.conf I restarted squid, not sure if that's
necessary. squid -k parse returns no errors, and squidclient
"http://www.google.com" works properly. When I set up my browser though,
it doesn't work. In IE I enter my server's IP as the proxy address,
and 3128 as the port. I also tried http_access allow all.

What am I doing incorrectly, or what step am I missing?


Re: [squid-users] age in the refresh_pattern

2009-07-02 Thread Amos Jeffries

Stand H wrote:

Hi,


FRESH if expires > now, else STALE
STALE if age > max
FRESH if lm-factor < percent, else STALE
FRESH if age < min
else STALE

Can someone show how to calculate the age here? Is it age = now - Last-Modified, 
or age = the time since the object was stored in the cache?



Age = time in seconds since the value of the Date: header received with the object.

Date: headers in the future are cropped to NOW.
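The quoted decision order can be sketched like this (an illustration of the documented rules plus Amos's definition of age, NOT the Squid source; parameter names are mine):

```python
def is_fresh(now, date, min_s, percent, max_s,
             expires=None, last_modified=None):
    """Return True if an object counts as FRESH under the quoted rules."""
    age = max(0, now - min(date, now))   # Date: in the future cropped to NOW
    if expires is not None:
        return expires > now             # FRESH if expires > now, else STALE
    if age > max_s:
        return False                     # STALE if age > max
    if last_modified is not None and date > last_modified:
        lm_factor = age / (date - last_modified)
        return lm_factor < percent       # FRESH if lm-factor < percent, else STALE
    if age < min_s:
        return True                      # FRESH if age < min
    return False                         # else STALE
```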

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE16
  Current Beta Squid 3.1.0.9


Re: [squid-users] Configuring Squid for use on a web browser - beginner question

2009-07-02 Thread Amos Jeffries

John Martin wrote:

I'm having trouble getting Squid set up. I installed squid on a Linux
server using yum, and in squid.conf uncommented the http_port 3128
line. Then I added the lines below:


If you needed to uncomment the http_port line your Squid is probably 
obsolete before you downloaded it.  Which distro are you using? and what 
Squid release did it give you?


Note; the currently old but supported releases are 2.6 and higher, and 
the current most-stable production releases are 2.7.STABLE6 and 3.0.STABLE16




# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
acl my_pc 123.123.123.123


There is an ACL above with a name and a value, but no type.
Try:
 acl my_pc src 123.123.123.123


http_access allow my_pc

(where 123.123.123.123 is my home PC's IP address)

After editing squid.conf I restarted squid, not sure if that's
necessary. squid -k parse returns no errors, and squidclient
"http://www.google.com" works properly. When I set up my browser though,
it doesn't work. In IE I enter my server's IP as the proxy address,
and 3128 as the port. I also tried http_access allow all.

What am I doing incorrectly, or what step am I missing?


Browser config sounds right. The only questionable things are the Squid 
release and the ACL I pointed to.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE16
  Current Beta Squid 3.1.0.9


Re: [squid-users] fetch page error

2009-07-02 Thread Tech W.



--- On Thu, 2/7/09, Amos Jeffries squ...@treenet.co.nz wrote:

 
 What a seemingly pointless page. Just for my interest; what
 is it doing?
 

Hi Amos,

That page index2.shtml is just a regular webpage (static).


 
  
  Most of the time I get it successfully, but sometimes it
 fails.
  Squid responded with this header:
  
  X-Squid-Error: ERR_CONNECT_FAIL 113
  
  
  Why this happened? Thanks.
 
 The browser requested the page and Squid was unable to open
 a TCP connection to the master Server.
 
 Nothing unusual about that. Usually network or web server
 load related.
 
 There should be other headers around that one which add
 more details; X-Cache: MISS is mentioned as one such.
 


These are the full headers:

(Status-Line)   HTTP/1.0 503 Service Unavailable
Connection      close
Content-Length  1864
Content-Type    text/html
Date            Fri, 03 Jul 2009 03:20:42 GMT
Expires         Fri, 03 Jul 2009 03:20:42 GMT
Mime-Version    1.0
Server          squid/3.0STABLE16
Via             1.0 localhost.localdomain (squid/3.0STABLE16)
X-Cache         MISS from localhost.localdomain
X-Squid-Error   ERR_CONNECT_FAIL 113


Please help again. Thanks.

Regards.



  
