[squid-users] Announcement: pacparser - a c library to parse proxy auto-config (pac) files

2007-12-17 Thread Manu Garg
Hi Folks,

I am very pleased to announce the release of pacparser - a C library
to parse proxy auto-config (PAC) scripts. Needless to say, PAC files
are now a widely accepted method for proxy configuration management
and almost all popular browsers support them. The idea behind
pacparser is to make it easy to add this PAC file parsing capability
to other programs. It comes as a shared C library with a clear API.
You can use it to make any C or Python (via ctypes) program
PAC-script aware.
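
A minimal C sketch of what using the library looks like (the function
names here -- pacparser_init, pacparser_parse_pac, pacparser_find_proxy,
pacparser_cleanup -- are as I recall them from the project docs, so
please check pacparser.h in the release before relying on them):

  #include <stdio.h>
  #include <pacparser.h>

  int main(void)
  {
      pacparser_init();                 /* start the embedded JavaScript engine */
      pacparser_parse_pac("wpad.dat");  /* load and compile the PAC file */

      /* ask the PAC script which proxy to use for a URL */
      char *proxy = pacparser_find_proxy("http://www.example.com/",
                                         "www.example.com");
      printf("Proxy: %s\n", proxy ? proxy : "(none)");

      pacparser_cleanup();              /* free the JavaScript engine */
      return 0;
  }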

For documentation and available packages, please visit project home page at:
http://code.google.com/p/pacparser

For those who prefer to start from the source code, here is a direct
download link:
http://pacparser.googlecode.com/files/pacparser-1.0.0.tar.gz.

Cheers :-),
Manu
-- 
Manu Garg
http://www.manugarg.com
Journey is the destination of the life.


Re: [squid-users] How to redirect http://gmail.com to https://gmail.com

2007-12-17 Thread Amos Jeffries

Amos Jeffries wrote:

Dear All,
I would like to redirect http://gmail.com to https://gmail.com
because http://gmail.com was banned by our ISP,
but they allow https://gmail.com.
Users only know http://gmail.com, so when they visit it, it is blocked.
Most of the time I have to go to the user and tell them to use https://gmail.com.

Any suggestions are appreciated



Two approaches:

1) automatic teaching clue-by-4 stick for your users:

acl badGmail dstdomain gmail.com
acl HTTP proto HTTP
deny_info http://yourhostname/gmail_is_broken.html badGmail
http_access deny HTTP badGmail


2) leave the peons ignorant and MAKE it work:

acl badGmail dstdomain gmail.com
acl HTTP proto HTTP

cache_peer gmail.com parent 443 0 no-query originserver


Sorry, I forgot to add that there may be ssl* options needed for SSL traffic
between squid and the gmail HTTPS server.



cache_peer_access gmail.com allow badGmail
cache_peer_access gmail.com deny !badGmail
never_direct allow HTTP badGmail
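
(For reference, the ssl* additions mentioned above might look something
like this -- a sketch only, assuming squid was built with --enable-ssl;
adjust or drop sslflags to suit your certificate-checking needs:

  cache_peer gmail.com parent 443 0 no-query originserver ssl sslflags=DONT_VERIFY_PEER
)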



Amos


Re: [squid-users] Authentication question

2007-12-17 Thread Amos Jeffries

Monah Baki wrote:

Hi All,

If users require authentication in squid before browsing, is there a way
to tell squid that, since the user has already authenticated in IE, it should
not ask for authentication again if the user opens Firefox while IE is still running?




Most web-things are possible.
Try automatic NTLM auth, see if you can get it authenticating in the 
background without either IE or Firefox needing to show the login box.


There are other ways, but, they open MAJOR security holes you REALLY do 
not want to open.


FWIW: if you have Firefox installed, why do you even let the users see IE 
as present on the PC? It's only needed for WindowsUpdate, and then only 
marginally. Removing IE from under temptation's fingers closes a lot of 
security holes in Windows (94% of the current SANS list) with one action.


Amos



[squid-users] Squid Cache and Load Balancing...

2007-12-17 Thread Andy McCall
Hi Folks,

I am looking at setting up two Linux-based caching servers for two
WebMarshal servers for around 120 schools.  The WebMarshal servers
perform all the content checking and Squid will do all the caching.

The two WebMarshal servers are using Windows load balancing, so I would
like to have a similar configuration for the Squid servers.  I
understand I can share the caches between the two Squid servers, and I
have read around Pen, L7SW, LVS and keepalived, but it's not clear to me
which bits I need for which parts of the setup.

Please can someone give me some pointers as to what I would need to get
load balanced / HA Squid cache servers without buying a dedicated piece
of load balancing hardware.

Thanks in advance for any replies,

Andy McCall
**
The information in this e-mail is confidential and may be legally privileged.
It is intended solely for the addressee. Access to this email by anyone else 
is unauthorised. If you have received it in error, please notify us immediately 
by replying to this e-mail and then delete it from your system.

This note confirms that this email message has been swept for the presence of
computer viruses; however, we advise that in keeping with good IT practice the
recipient should ensure that the e-mail together with any attachments is virus
free by running a virus scan themselves.  We cannot accept any responsibility
for any damage or loss caused by software viruses.

The Unity Partnership Ltd, registered in England at West Hall, Parvis Road, 
West Byfleet, Surrey UK KT14 6EZ. 
Registered No : 5916336.  VAT No : 903761336.
**


Re: [squid-users] Squid Cache and Load Balancing...

2007-12-17 Thread Amos Jeffries

Andy McCall wrote:

Hi Folks,

I am looking at setting up two Linux-based caching servers for two
WebMarshal servers for around 120 schools.  The WebMarshal servers
perform all the content checking and Squid will do all the caching.

The two WebMarshal servers are using Windows load balancing so I would
like to have a similar configuration for the Squid servers.  I
understand I can share the cache's between the two Squid servers and I


Where did you hear that?

If you set up each squid as a sibling peer, they can pull data from each 
other instead of from the network.
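
A minimal sketch of that (hostnames and ports here are only examples,
not anything from your setup):

  # on squid1; proxy-only = don't keep a second copy of objects fetched from the sibling
  cache_peer squid2.example.local sibling 3128 3130 proxy-only
  # ...plus the mirror-image line on squid2 pointing back at squid1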



have read around Pen, L7SW, LVS and keepalived but its not clear to me
which bits I need to do what in the setup.


keepalived should not be needed; since 2.6, squid has the RunCache 
functionality built in, which does the same job.




Please can someone give me some pointers as to what I would need to get
load balanced / HA Squid cache servers without buying a dedicated piece
of load balancing hardware.


cache_peer and the peering algorithms: carp, round-robin, 
weighted-round-robin, etc.


You usually want the WebMarshals set up as 'parents' of the squids, so that 
squid caches any denials they issue.
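
A sketch of that arrangement (hostnames and the port are examples only --
use whatever port your WebMarshal proxies actually listen on):

  cache_peer webmarshal1.example.local parent 8080 0 no-query round-robin
  cache_peer webmarshal2.example.local parent 8080 0 no-query round-robin
  # force requests through the parents rather than going direct
  never_direct allow all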


However, if load-balancing the squids is not practical, or bandwidth is not 
a huge issue, you could reverse that and have squid caching everything 
requested, with each WebMarshal assigned a squid as its source.


That would mean each squid receives only the traffic load its child 
WebMarshal sends out. No special explicit load balancing is needed for 
squid itself.




Thanks in advance for any replies,

Andy McCall


Amos

The information in this response is confidential and should be forgotten 
by all recipients before it is read. If you cannot do this or have 
received it in error, please stand on one leg and sing your national anthem 
immediately, then delete it from your computer.


This note confirms that this email message has been sent through an 
unspecified server on the Internet. However we advise that in keeping 
with good IT practice the recipient should connect their PC to the 
Internet so that communication is not delayed.


We cannot accept any responsibility for any embarrassment or unsightly 
noises caused by the wetware of your computer.




[squid-users] storeUpdateCopy errors

2007-12-17 Thread Tony Dodd
Running Squid-2.HEAD, I'm seeing lots of 'storeUpdateCopy: Error at ###
(-1)' in the cache.log.  I'm not sure if it has anything to do with the
ctx errors I'm also seeing:

2007/12/17 18:37:39| ctx: enter level  0:
'http://ws.audioscrobbler.com/1.0/user/goldiesogay/profile.xml?widget_id=59b43a5fbbd8fb5e5a1b94ac9de7f2d9'
2007/12/17 18:37:39| storeSetPublicKey: unable to determine vary_id for
'http://ws.audioscrobbler.com/1.0/user/goldiesogay/profile.xml?widget_id=59b43a5fbbd8fb5e5a1b94ac9de7f2d9'
2007/12/17 18:38:00| ctx: exit level  0 2007/12/17 18:38:00|
storeUpdateCopy: Error at 324 (-1)

Adrian thinks this is to do with the object re-validation Henrik's put
into 2.HEAD; has anyone else seen something like this?


-- 
Tony Dodd, Systems Administrator

Last.fm | http://www.last.fm
Karen House 1-11 Baches Street
London N1 6DL

check out my music taste at:
http://www.last.fm/user/hawkeviper


[squid-users] No great results after 2 weeks with squid

2007-12-17 Thread Carlos Lima
Hi List,

I've been testing and studying squid for almost two weeks now and I'm
getting no results. I already understand the problems related to HTTP
headers, where in most cases web server administrators or programmers
are creating more and more dynamic data, which is bad for caching. So,
I installed CentOS 5 along with 2.6.STABLE6 using yum install and set
only an ACL for my internal network. After that I also set
visible_hostname to localhost since squid was complaining about it.
Now, as I stated already, I have read a lot about squid, including
some tips on optimizing sda access or increasing the memory size
limit, but shouldn't squid be working great out-of-the-box?! Oh, I
forgot: my problem is that in mysar, which I installed in order to see
the performance, I only see 0% TRAFFIC CACHE PERCENT after having
visited almost 300 websites. On some occasions I see 10% or even
30/40%, but for almost 98% of websites I get 0%.

So my questions are:
- Should Squid be considered only for large environments
with hundreds or even thousands of people accessing the web?!
- These days, is a proxy like Squid for caching purposes more of a
nice-to-have or a must-have, when for almost every site proxies
are skipped and WAN access speeds are increasing every day?!

Thanks!

By the way:

I intend to use Squid for caching purposes only, since I already have
Cisco-based QoS and bandwidth management. My deployment site has at
most 5 people accessing the web simultaneously over an 8Mb DSL connection.
My current config is:

http_port 3128
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
cache_mem 64 MB
maximum_object_size 40 MB
access_log /var/log/squid/access.log squid
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl myNetwork src 10.10.1.0/255.255.255.0
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow myNetwork
http_access deny all
http_reply_access allow all
icp_access allow all
cache_effective_user squid
cache_effective_group squid
delay_pools 1
delay_class 1 1
delay_parameters 1 -1/-1
coredump_dir /var/spool/squid
visible_hostname localhost


Re: [squid-users] No great results after 2 weeks with squid

2007-12-17 Thread Dieter Bloms
Hi,


On Mon, Dec 17, Carlos Lima wrote:

 So my questions are:
 - Should Squid be taking only in consideration for large environments
 with hundreds or even thousands of people accessing web?!

No, it can also be used in small environments.

 - In these days a proxy like Squid for caching purposes is more a
 have to have or a must to have when for almost every site proxy's
 are skipped and the wan speed access are increasing every day now!?

You can configure user-, time-, source-, or destination-based ACLs, and you
have an application gateway (it is more than just a packet filter like a
Cisco firewall).

Btw.:
I think you should set cache_dir to some GB, so that more than 100MB
of data (the default) can be cached on disk.
Please update to the latest stable release; 2.6.STABLE6 is a little
outdated (from December last year).
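
For example (path and sizes are only an illustration -- the third number
is the cache size in MB):

  cache_dir ufs /var/spool/squid 8192 16 256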


-- 
Gruß

  Dieter

--
I do not get viruses because I do not use MS software.
If you use Outlook then please do not put my email address in your
address-book so that WHEN you get a virus it won't use my address in the
From field.




Re: [squid-users] Invoked sites by allowed websites.

2007-12-17 Thread Cody Jarrett
So this is what I have now. The way I see it, it says allow all
goodsites and sites that have referers, but it still doesn't appear to
work properly.


acl goodsites dstdom_regex /etc/squid/allowed-sites.squid
acl has_referer referer_regex .
http_access allow goodsites has_referer
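
(One reading of Henrik's advice quoted below is that the referer check
should match only the allowed sites themselves, not any referer at all.
A sketch of that, using a hypothetical /etc/squid/allowed-referers.squid
file containing patterns for those sites:

  acl goodsites dstdom_regex "/etc/squid/allowed-sites.squid"
  acl from_goodsites referer_regex "/etc/squid/allowed-referers.squid"
  http_access allow goodsites
  http_access allow from_goodsites
  http_access deny all
)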

On Dec 14, 2007, at 2:21 PM, Henrik Nordstrom wrote:


On fre, 2007-12-14 at 11:45 -0600, Cody Jarrett wrote:

I think I almost have it. I can access sites in my allowed file. But
when I access a site that isn't, it gives me the "Error: access denied
when trying to retrieve the url: http://google.com/" page, but if I click
the link, http://www.google.com, it takes me to the site, which isn't
wanted. I think there is something wrong with the order of the acls, or I
need to combine them on one line maybe.

#allow only the sites listed in the following file
acl goodsites dstdom_regex /etc/squid/allowed-sites.squid
acl has_referer referer_regex .
http_access allow goodsites
http_access allow has_referer


This says allow access to follow any link, no matter where that link is
or no matter where it was found.

You need to make patterns of the sites from where following links /
loading inlined content is allowed.

Regards
Henrik





[squid-users] Squid with auth NTLM

2007-12-17 Thread Leandro Ferrrari
I have configured squid 3.0 with NTLM, and the configuration in squid.conf is:

auth_param ntlm program /usr/local/bin/ntlm_auth
--helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 30
auth_param ntlm max_challenge_lifetime 2 minutes

auth_param basic program /usr/local/bin/ntlm_auth
--helper-protocol=squid-2.5-basic
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours

When I test the NTLM auth from an Internet Explorer client, with a user
authenticated against a Windows 2003 Domain Controller, both Explorer and
Firefox show the basic-auth popup.
How can I use NTLM auth for a user in the domain group without basic auth?

Sincerely,
Leandro Ferrari


[squid-users] TCP close problem

2007-12-17 Thread Janko Mivšek

Hello,


While tweaking a Swazoo Smalltalk web server to work through Squid I 
noticed that Squid wants to close a connection soon after sending a 
request to the web server, sometimes in the middle of the longer 
responses from the server.


I'm wondering if this is normal behavior, and if it is, what are the 
reasons for such a fast connection close?


Thanks very much for any hint

Janko

--
Janko Mivšek
AIDA/Web
Smalltalk Web Application Server
http://www.aidaweb.si


Re: [squid-users] No great results after 2 weeks with squid

2007-12-17 Thread Amos Jeffries
 Hi List,

 I've being testing and studying squid for almost two weeks now and I'm
 getting no results. I already understood the problems related to http
 headers where in most cases web servers administrators or programmers
 are creating more and more dynamic data which is bad for caching. So,
 I installed CentOS 5 along with 2.6.STABLE6 using yum install and set
 only an ACL for my internal network. After that I set also
 visible_hostname to localhost since quid was complaining about it.

Your DNS is slightly broken. Any web-service server should have a FQDN
for its hostname. Many programs like squid use the hostname in their
outward connections, and many validate all connecting hosts before
accepting data traffic.

 Now, as I a stated already I read a lot regarding to squid including
 some tips in order to optimize sda access or increasing memory size
 limit but shouldn't squid be working great out-of-the-box?! Oh, I

It does ... for a generic 1998-era server.
To work well these days, the configuration needs to be very site-specific.

 forgot my problem is that on mysar that I installed in order to see
 the performance I only see 0% of TRAFFIC CACHE PERCENT when already
 visited almost 300 websites. In some ocassions I see 10% or even
 30/40% but for almost of 98% of websites I get 0%.

Those would be the ones including '?' in the URI, methinks.


 So my questions are:
 - Should Squid be taking only in consideration for large environments
 with hundreds or even thousands of people accessing web?!
 - In these days a proxy like Squid for caching purposes is more a
 have to have or a must to have when for almost every site proxy's
 are skipped and the wan speed access are increasing every day now!?

 Thanks!

 By the way:

 I intend use Squid for caching purposes only since I already have
 Cisco based QOS and bandwidth management. My deploying site as only at
 most 5 people accessing web simultaneous under a 8Mb dsl connection.

Well then as said earlier, you need more than 100MB of data cache, and
probably more than 64MB of RAM cache.

 My current config is:

 http_port 3128
 hierarchy_stoplist cgi-bin ?
 acl QUERY urlpath_regex cgi-bin \?
 cache deny QUERY

Right here you are non-caching a LOT of websites, some of which are
actually cacheable.

We now recommend using 2.6.STABLE17 with a new refresh_pattern set instead:

  refresh_pattern cgi-bin 0 0% 0
  refresh_pattern \? 0 0% 0
  refresh_pattern ^ftp: 1440 20% 10080
  refresh_pattern ^gopher: 1440 0% 1440
  refresh_pattern . 0 20% 4320


 acl apache rep_header Server ^Apache
 broken_vary_encoding allow apache
 cache_mem 64 MB
 maximum_object_size 40 MB

You will get at most 3 objects of that size in the cache the way things are.
It will also skip most video and download content. To do any real bandwidth
saving you should have gigs of disk available, and the max object size should
be at least 720MB.
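
A sketch of the kind of sizes meant here (path and numbers are examples only):

  maximum_object_size 720 MB
  cache_dir ufs /var/spool/squid 20000 16 256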

 access_log /var/log/squid/access.log squid
 refresh_pattern ^ftp: 1440 20% 10080
 refresh_pattern ^gopher: 1440 0% 1440
 refresh_pattern . 0 20% 4320
 acl all src 0.0.0.0/0.0.0.0
 acl manager proto cache_object
 acl localhost src 127.0.0.1/255.255.255.255
 acl to_localhost dst 127.0.0.0/8
 acl SSL_ports port 443
 acl Safe_ports port 80 # http
 acl Safe_ports port 21 # ftp
 acl Safe_ports port 443 # https
 acl Safe_ports port 70 # gopher
 acl Safe_ports port 210 # wais
 acl Safe_ports port 1025-65535 # unregistered ports
 acl Safe_ports port 280 # http-mgmt
 acl Safe_ports port 488 # gss-http
 acl Safe_ports port 591 # filemaker
 acl Safe_ports port 777 # multiling http
 acl CONNECT method CONNECT
 acl myNetwork src 10.10.1.0/255.255.255.0
 http_access allow manager localhost
 http_access deny manager
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 http_access allow localhost
 http_access allow myNetwork
 http_access deny all
 http_reply_access allow all
 icp_access allow all

A standalone squid does not need ICP. Drop that.

 cache_effective_user squid
 cache_effective_group squid

These are better left to the OS. Slight misconfigurations here can really
screw up your system security.

 delay_pools 1
 delay_class 1 1
 delay_parameters 1 -1/-1

These are useless. The delay_parameters effectively say no pooling.
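
A sketch of a pool that actually limits something (numbers are examples
only; class 1 means one aggregate bucket):

  delay_pools 1
  delay_class 1 1
  # restore/max are in bytes; 64000/64000 is roughly a 512 kbit/s aggregate cap
  delay_parameters 1 64000/64000
  delay_access 1 allow myNetwork
  delay_access 1 deny all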

 coredump_dir /var/spool/squid
 visible_hostname localhost

This should be a publicly accessible FQDN. It is the name squid connects
outbound with. If the machine is a server (likely), its hostname should be
a FQDN to communicate well with the Internet.


Amos




Re: [squid-users] Squid and Windows Update

2007-12-17 Thread Amos Jeffries
 On Fri, 22 Jun 2007 13:53:57 +1200 (NZST) [EMAIL PROTECTED] wrote:

 I have just added a FAQ page
 (http://wiki.squid-cache.org/SquidFaq/WindowsUpdate) with the content of
 this thread.

 Can anyone please make a link to
 http://wiki.squid-cache.org/SquidFaq/WindowsUpdate
 in http://wiki.squid-cache.org/SquidFaq/ ?


Done. And the WU page has been updated with some more info found recently to
make it play nice with Vista and Win98.

Amos




Re: [squid-users] TCP close problem

2007-12-17 Thread Amos Jeffries
 Hello,


 While tweaking a Swazoo Smalltalk web server to work through Squid I
 noticed that Squid wants to close a connection soon after sending a
 request to the web server, sometimes in the middle of the longer
 responses from the server.

 I'm wondering if this is a normal behavior and if it is, what are the
 reasons for so fast connection close?

 Thanks very much for any hint

It could be any number of network issues.

If it's squid doing the premature close itself, then I think the most
common reason for that is the server sending squid size information
which is smaller than the amount of data actually sent. Squid will
detect that, and its virus self-protection code will kick in and shut down
the link.
cache.log will give you a trace of what's going on there.

Otherwise you should start looking at things like a firewall with short
NAT timeouts (ew), and problems with kernel TCP transmission settings.


Amos




[squid-users] cache_peer weighting

2007-12-17 Thread Tony Dodd

Hi Guys,

Wanted to double check I hadn't screwed up my config lines before 
dropping a bug report


I've got some of my parents configured with weights, as we're trying 
out some performance-optimizing code on perlbal... thing is, setting a 
weight in squid doesn't seem to make a difference to the number of 
requests that squid sends back to the parent.  This is Squid-2.6.STABLE17.


[EMAIL PROTECTED] ~]# grep perlbal1-80 /squidperlbal | wc -l
2
[EMAIL PROTECTED] ~]# grep perlbal2-80 /squidperlbal | wc -l
248
[EMAIL PROTECTED] ~]# grep -v perlbal[12]-80 /squidperlbal | wc -l
750

--I expected at least the inverse of the above, as unless my 
understanding of squid weighting is completely incorrect, a weight of 
100 should mean that peer is being used for 100 times more connections 
than the weight=1 (default weight) peers, no?


Squid is configured with the following:

###ws.audioscrobbler.com  mainsite
cache_peer 10.0.0.14 parent 80 0 no-query originserver no-digest 
no-netdb-exchange name=saruman2-80 round-robin

cache_peer_access saruman2-80 allow mainsite

cache_peer 10.0.0.14 parent 80 0 no-query originserver no-digest 
no-netdb-exchange name=saruman2-81 round-robin

cache_peer_access saruman2-81 allow mainsite

cache_peer 10.0.0.35 parent 80 0 no-query originserver no-digest 
no-netdb-exchange name=perlbal1-80 round-robin weight=100

cache_peer_access perlbal1-80 allow mainsite

cache_peer 10.0.0.35 parent 81 0 no-query originserver no-digest 
no-netdb-exchange name=perlbal1-81 round-robin

cache_peer_access perlbal1-81 allow mainsite

cache_peer 10.0.0.114 parent 80 0 no-query originserver no-digest 
no-netdb-exchange name=perlbal2-80 round-robin weight=100

cache_peer_access perlbal2-80 allow mainsite

cache_peer 10.0.0.114 parent 81 0 no-query originserver no-digest 
no-netdb-exchange name=perlbal2-81 round-robin

cache_peer_access perlbal2-81 allow mainsite

cache_peer 10.0.0.114 parent 82 0 no-query originserver no-digest 
no-netdb-exchange name=perlbal2-82 round-robin

cache_peer_access perlbal2-82 allow mainsite

cache_peer 10.0.0.114 parent 83 0 no-query originserver no-digest 
no-netdb-exchange name=perlbal2-83 round-robin

cache_peer_access perlbal2-83 allow mainsite
###ws.audioscrobbler.com  mainsite Ends

I also tried removing round-robin, in case that was screwing up the 
config; however, I found that this merely means all requests go to 
saruman2-80.


Thanks!
--
Tony Dodd, Systems Administrator

Last.fm | http://www.last.fm
Karen House 1-11 Baches Street
London N1 6DL

check out my music taste at:
http://www.last.fm/user/hawkeviper


Re: [squid-users] Squid with auth NTLM

2007-12-17 Thread Amos Jeffries
 I have configured squid 3.0 with NTLM, and this configuration in
 squid.conf is:

 auth_param ntlm program /usr/local/bin/ntlm_auth
 --helper-protocol=squid-2.5-ntlmssp
 auth_param ntlm children 30
 auth_param ntlm max_challenge_lifetime 2 minutes

 auth_param basic program /usr/local/bin/ntlm_auth
 --helper-protocol=squid-2.5-basic
 auth_param basic children 5
 auth_param basic realm Squid proxy-caching web server
 auth_param basic credentialsttl 2 hours

 When a test the ntlm auth, in the Explorer client with a user
 authenticate in Domain Controller Windows 2003, the explorer or
 firefox show popup of the basic auth.
 How to use the ntlm auth with an user of the domain group without auth
 basic?

Remove the basic configuration to not use it.
Your NTLM is broken by the sound of it, if it's always falling back on basic.
Although the login box does not necessarily mean basic is being used. It
could just be that the browser has no working credentials for the user to
log in via NTLM with.


Amos



RE: [squid-users] Squid with auth NTLM

2007-12-17 Thread Nick Duda
Have you joined your box to the domain? What does your krb5.conf file look like?
What does your smb.conf file look like? What is the output of something like
wbinfo -g or wbinfo -u?

I would troubleshoot your domain connectivity before you worry about squid.


-Original Message-
From: Amos Jeffries [mailto:[EMAIL PROTECTED]
Sent: Mon 12/17/2007 7:33 PM
To: Leandro Ferrrari
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid with auth NTLM
 
 I have configured squid 3.0 with NTLM, and this configuration in
 squid.conf is:

 auth_param ntlm program /usr/local/bin/ntlm_auth
 --helper-protocol=squid-2.5-ntlmssp
 auth_param ntlm children 30
 auth_param ntlm max_challenge_lifetime 2 minutes

 auth_param basic program /usr/local/bin/ntlm_auth
 --helper-protocol=squid-2.5-basic
 auth_param basic children 5
 auth_param basic realm Squid proxy-caching web server
 auth_param basic credentialsttl 2 hours

 When a test the ntlm auth, in the Explorer client with a user
 authenticate in Domain Controller Windows 2003, the explorer or
 firefox show popup of the basic auth.
 How to use the ntlm auth with an user of the domain group without auth
 basic?

Remove the basic configuration to not use it.
Your NTLM is broken by the sound of it, if it's always falling back on basic.
Although the login box does not necessarily mean basic is being used. It
could just be that the browser has no working credentials for the user to
log in via NTLM with.


Amos




Re: [squid-users] cache_peer weighting

2007-12-17 Thread Amos Jeffries
 Hi Guys,

 Wanted to double check I hadn't screwed up my config lines before
 dropping a bug report

Good choice. :-)

round-robin == round-robin: each server is tried in sequence until all have
been tried, then it repeats. No weighting there.

IIRC Squid 3.0 introduces weighted-round-robin for this purpose. Otherwise
there is CARP in 2.6.

Amos



 I've got some of my parent's configured with weight's, as we're trying
 out some performance optimizing code on perlbal... thing is, setting a
 weight in squid doesn't seem to make a difference to the number of
 requests that squid sends back to the parent.  This is Squid-2.6STABLE17.

 [EMAIL PROTECTED] ~]# grep perlbal1-80 /squidperlbal | wc -l
 2
 [EMAIL PROTECTED] ~]# grep perlbal2-80 /squidperlbal | wc -l
 248
 [EMAIL PROTECTED] ~]# grep -v perlbal[12]-80 /squidperlbal | wc -l
 750

 --I expected at least the inverse of the above, as unless my
 understanding of squid weighting is completely incorrect, a weight of
 100 should mean that peer is being used for 100 times more connections
 than the weight=1 (default weight) peers, no?

 Squid is configured with the following:

 ###ws.audioscrobbler.com  mainsite
 cache_peer 10.0.0.14 parent 80 0 no-query originserver no-digest
 no-netdb-exchange name=saruman2-80 round-robin
 cache_peer_access saruman2-80 allow mainsite

 cache_peer 10.0.0.14 parent 80 0 no-query originserver no-digest
 no-netdb-exchange name=saruman2-81 round-robin
 cache_peer_access saruman2-81 allow mainsite

 cache_peer 10.0.0.35 parent 80 0 no-query originserver no-digest
 no-netdb-exchange name=perlbal1-80 round-robin weight=100
 cache_peer_access perlbal1-80 allow mainsite

 cache_peer 10.0.0.35 parent 81 0 no-query originserver no-digest
 no-netdb-exchange name=perlbal1-81 round-robin
 cache_peer_access perlbal1-81 allow mainsite

 cache_peer 10.0.0.114 parent 80 0 no-query originserver no-digest
 no-netdb-exchange name=perlbal2-80 round-robin weight=100
 cache_peer_access perlbal2-80 allow mainsite

 cache_peer 10.0.0.114 parent 81 0 no-query originserver no-digest
 no-netdb-exchange name=perlbal2-81 round-robin
 cache_peer_access perlbal2-81 allow mainsite

 cache_peer 10.0.0.114 parent 82 0 no-query originserver no-digest
 no-netdb-exchange name=perlbal2-82 round-robin
 cache_peer_access perlbal2-82 allow mainsite

 cache_peer 10.0.0.114 parent 83 0 no-query originserver no-digest
 no-netdb-exchange name=perlbal2-83 round-robin
 cache_peer_access perlbal2-83 allow mainsite
 ###ws.audioscrobbler.com  mainsite Ends

 I also tried removing round-robin, in case that was screwing up the
 config, however, i found that this merely means all requests go to
 saruman2-80.

 Thanks!
 --
 Tony Dodd, Systems Administrator

 Last.fm | http://www.last.fm
 Karen House 1-11 Baches Street
 London N1 6DL

 check out my music taste at:
 http://www.last.fm/user/hawkeviper





Re: [squid-users] clustering squid

2007-12-17 Thread Amos Jeffries
 Hello,

 I am looking to utilize squid as a reverse proxy for a medium sized
 implementation that will need to scale to a lot of requests/sec (a lot
 is a relative/unknown term).  I found this very informative thread:
 http://www.squid-cache.org/mail-archive/squid-users/200704/0089.html

 However, is clustering the OS the only way to provide a high
 availability (active/active or active/standby) solution?   For
 example, with Red Hat Cluster Suite.  Here is a rough drawing of my
 logic:
 Client ---FW --- Squid  --- Load Balancer   --- Webservers

 They already have expensive load balancers in place so they aren't
 going anywhere.   Thanks for any insight!


IIRC there have been some large-scale sites set up using CARP in grids
between squid sibling accelerators. The problem we have here is that few of
the large-scale sites share their configurations back to the community.

If you are doing any sort of scalable deployment I'd suggest looking at the
ICP-multicast and CARP setup for bandwidth scaling.
Squid itself does not include any means of failover for connected clients
if an individual cache dies. That is up to the
FW/router/switch/load balancer between squid and clients. All squid can do
is restart itself quickly when something major occurs.
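
A sketch of the CARP side of that (hostnames and ports are examples only;
the carp option load-balances across parent peers by URL hash):

  cache_peer accel1.example.local parent 3128 0 carp no-query
  cache_peer accel2.example.local parent 3128 0 carp no-query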

Amos




Re: [squid-users] cache_peer weighting

2007-12-17 Thread Tony Dodd

Amos Jeffries wrote:

Hi Guys,

Wanted to double check I hadn't screwed up my config lines before
dropping a bug report


Good choice. :-)

round-robin == round-robin: each server is tried in sequence until all have
been tried, then it repeats. No weighting there.

IIRC Squid 3.0 introduces weighted-round-robin for this purpose. Otherwise
there is CARP in 2.6.

Amos



Hey Amos,

Hmmm, so the only way for weighting cache_peers in 2.6 is with CARP? 
The config manual seems to suggest otherwise:


cache_peer 172.16.1.123 sibling 3129 5500 weight=2

Or am I assuming too much here?  I could be getting the wrong end of the 
stick, but using cache_peer entries similar to the above, with a couple 
having weight=100, didn't seem to change the way squid was choosing the 
cache_peer to use.


Thanks!
--
Tony Dodd, Systems Administrator

Last.fm | http://www.last.fm
Karen House 1-11 Baches Street
London N1 6DL

check out my music taste at:
http://www.last.fm/user/hawkeviper


Re: [squid-users] only log *some* sites

2007-12-17 Thread Amos Jeffries

 Hi Group,

 I am finding my access.log fills up with some sites that I don't need to
 keep tabs on.

 Is there a way I can log everything except a few known sites, such as our
 company's website and our local Wiki site?

In recent squid 2.6 and 3.0 there are ACL capabilities on the access_log option.
http://www.squid-cache.org/Versions/v3/HEAD/cfgman/access_log.html
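
For example (the domain names here are placeholders for your own sites):

  acl dontlog dstdomain .mycompany.example wiki.example.local
  access_log /var/log/squid/access.log squid !dontlog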

Amos




Re: [squid-users] cache_peer weighting

2007-12-17 Thread Amos Jeffries
 Amos Jeffries wrote:
 Hi Guys,

 Wanted to double check I hadn't screwed up my config lines before
 dropping a bug report

 Good choice. :-)

 round-robin == round-robin: each server trued in sequence until all have
 bee tried then repeats. No weighting there.

 IIRC Squid3.0 introduces weighted-round-robin for this purpose.
 Otherwise
 there is CARP in 2.6.

 Amos


 Hey Amos,

 Hmmm, so the only way for weighting cache_peers in 2.6 is with CARP?

No, it's just the most modern, and one that's shown some promise in recent
benchmarking earlier this year by a large-scale user. Their exact results
are buried back in the mailing list somewhere.
There are other algorithms, with different properties that suit differing
situations.

 The config manual seems to suggest otherwise:

 cache_peer 172.16.1.123 sibling 3129 5500 weight=2

 Or am I assuming too much here?  I could be getting the wrong end of the
 stick; but it seemed like using a similar cache_peer entries to the
 above, but with a couple having the weight=100 didn't seem to change the
 way squid was choosing the cache_peer to use.

The different algorithms all work their own way, with different inputs.
The round-robin you were trying is an algorithm that ignores weight.
I think carp, closest-only, and multicast-responder (weighted using ttl=) are
weighted in 2.6.
All the closest-* ones use live network loading instead of a fixed weight.

I'm not sure which config manual you got that from. The Official
Authoritative one does not include that text.
http://www.squid-cache.org/Versions/v2/2.6/cfgman/cache_peer.html
http://www.squid-cache.org/Versions/v3/3.0/cfgman/cache_peer.html

Amos




Re: [squid-users] cache_peer weighting

2007-12-17 Thread Tony Dodd

Amos Jeffries wrote:

No, it's just the most modern, and one that's shown some promise in recent
benchmarking earlier this year by a large-scale user. Their exact results
are buried back in the mailing list somewhere.
There are other algorithms, with different properties that suit differing
situations.



I'll take a look at CARP, thanks =]


The config manual seems to suggest otherwise:

cache_peer 172.16.1.123 sibling 3129 5500 weight=2

Or am I assuming too much here?  I could be getting the wrong end of the
stick; but it seemed like using a similar cache_peer entries to the
above, but with a couple having the weight=100 didn't seem to change the
way squid was choosing the cache_peer to use.


I'm not sure which config manual you got that from. The Official
Authoritative one does not include that text.
http://www.squid-cache.org/Versions/v2/2.6/cfgman/cache_peer.html
http://www.squid-cache.org/Versions/v3/3.0/cfgman/cache_peer.html


ViSolve.. heh

Thanks again Amos!


--
Tony Dodd, Systems Administrator

Last.fm | http://www.last.fm
Karen House 1-11 Baches Street
London N1 6DL

check out my music taste at:
http://www.last.fm/user/hawkeviper


[squid-users] transparent squid and ustream.tv

2007-12-17 Thread squid
Hi, I am having trouble accessing http://www.ustream.tv videos when connected 
through my Squid. Is there a known fix for this problem?

I tried the always_direct command but with no success.

I am using squid-2.5.STABLE14

Could anybody please help me ?

Thanks

Samy







Re: [squid-users] transparent squid and ustream.tv

2007-12-17 Thread Manoj_Rajkarnikar

On Tue, 18 Dec 2007, [EMAIL PROTECTED] wrote:


Hi, I am having trouble accessing http://www.ustream.tv videos when connected
through my Squid, is there a known fix for this problem ?.

I tried the always_direct command but with no success.

I am using squid-2.5.STABLE14


Please upgrade to the latest stable version...

http://www.squid-cache.org/Versions/



Could anybody please help me ?

Thanks

Samy




--


Re: [squid-users] No great results after 2 weeks with squid

2007-12-17 Thread Manoj_Rajkarnikar

On Tue, 18 Dec 2007, Amos Jeffries wrote:


Hi List,

I've being testing and studying squid for almost two weeks now and I'm
getting no results. I already understood the problems related to http
headers where in most cases web servers administrators or programmers
are creating more and more dynamic data which is bad for caching. So,
I installed CentOS 5 along with 2.6.STABLE6 using yum install and set
only an ACL for my internal network. After that I set also
visible_hostname to localhost since quid was complaining about it.


Your DNS is slightly broken. Any web-service server should have a FQDN
for its hostname. Many programs like squid use the hostname in their
outward connections, and many validate all connecting hosts before
accepting data traffic.


Now, as I a stated already I read a lot regarding to squid including
some tips in order to optimize sda access or increasing memory size
limit but shouldn't squid be working great out-of-the-box?! Oh, I


It does ... for a generic 1998-era server.
To work these days the configuration is very site-specific.


forgot my problem is that on mysar that I installed in order to see
the performance I only see 0% of TRAFFIC CACHE PERCENT when already
visited almost 300 websites. In some ocassions I see 10% or even
30/40% but for almost of 98% of websites I get 0%.


Those would be the ones including '?' in the URI, methinks.



So my questions are:
- Should Squid be taking only in consideration for large environments
with hundreds or even thousands of people accessing web?!
- In these days a proxy like Squid for caching purposes is more a
have to have or a must to have when for almost every site proxy's
are skipped and the wan speed access are increasing every day now!?

Thanks!

By the way:

I intend use Squid for caching purposes only since I already have
Cisco based QOS and bandwidth management. My deploying site as only at
most 5 people accessing web simultaneous under a 8Mb dsl connection.


Well then as said earlier, you need more than 100MB of data cache, and
probably more than 64MB of RAM cache.


My current config is:

http_port 3128
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY


Right here you are non-caching a LOT of websites, some of which are
actually cachable.

We now recommend using 2.6STABLE17 with some new refresh_pattern set instead.

 refresh_pattern cgi-bin 0 0% 0
 refresh_pattern \? 0 0% 0
 refresh_pattern ^ftp: 1440 20% 10080
 refresh_pattern ^gopher: 1440 0% 1440


Also add these refresh_pattern lines here and see if they help...

refresh_pattern -i \.exe$   10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.zip$   10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.tar\.gz$ 10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.tgz$   10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.mp3$   10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.ram$   10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.jpeg$  10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.gif$   10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.wav$   10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.avi$   10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.mpeg$  10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.mpg$   10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.pdf$   10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.ps$    10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.Z$     10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.doc$   10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.ppt$   10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.tiff$  10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.snd$   10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.jpe$   10080 90% 99 reload-into-ims ignore-no-cache override-expire ignore-private
refresh_pattern -i \.midi$  10080   90% 

[squid-users] Some puzzle with squid!

2007-12-17 Thread auser
Hi all:
I have a puzzle involving apache 2.2 and squid. I have three apache
servers with the same configuration, running only the apache service, and
squid servers in front of the apache servers to cache my websites.
(By the way, I used apache 1.3 all the time before and it worked well.)
I use apache to control the cache time, and I set 15 days for *.js types. When I
check the HTTP response, I find the times are all set correctly:
HTTP/1.1 200 OK
Date: Tue, 18 Dec 2007 03:32:09 GMT
Server: Apache/2.2.6 (Unix)
Last-Modified: Wed, 14 Nov 2007 08:34:31 GMT
ETag: 178505d-16a2-43edf6f31b3c0
Accept-Ranges: bytes
Cache-Control: max-age=1296000
Expires: Wed, 02 Jan 2008 03:32:09 GMT
Vary: Accept-Encoding,User-Agent
Content-Encoding: gzip
Content-Length: 1556
Content-Type: application/javascript
X-Zip-Status: request=gzip:this; respond=vary
Age: 649
But when I check the access_log, I find too many requests for *.js files
coming from the squid servers, at a very high frequency.
10.0.121.12 - - [18/Dec/2007:11:40:07 +0800] GET /chinabank.js HTTP/1.1 200 841
10.0.121.3 - - [18/Dec/2007:11:40:07 +0800] GET /newjs/hexunjs.js HTTP/1.1 200 1556
10.0.251.206 - - [18/Dec/2007:11:40:07 +0800] GET /newjs/hexunjs.js HTTP/1.1 304 -
10.0.251.206 - - [18/Dec/2007:11:40:08 +0800] GET /newjs/hexunjs.js HTTP/1.1 304 -
10.0.251.206 - - [18/Dec/2007:11:40:09 +0800] GET /newjs/hexunjs.js HTTP/1.1 304 -
10.0.251.206 - - [18/Dec/2007:11:40:09 +0800] GET /newjs/hexunjs.js HTTP/1.1 304 -
10.0.251.206 - - [18/Dec/2007:11:40:10 +0800] GET /newjs/hexunjs.js HTTP/1.1 304 -
10.0.251.206 - - [18/Dec/2007:11:40:10 +0800] GET /newjs/hexunjs.js HTTP/1.1 304 -
10.0.121.3 - - [18/Dec/2007:11:40:11 +0800] GET /newjs/hexunjs.js HTTP/1.1 304 -
10.0.251.206 - - [18/Dec/2007:11:40:11 +0800] GET /newjs/hexunjs.js HTTP/1.1 304 -
10.0.251.206 - - [18/Dec/2007:11:40:12 +0800] GET /newjs/hexunjs.js HTTP/1.1 304 -
10.0.251.206 - - [18/Dec/2007:11:40:12 +0800] GET /newjs/hexunjs.js HTTP/1.1 304 -
10.0.251.206 - - [18/Dec/2007:11:40:14 +0800] GET /newjs/hexunjs.js HTTP/1.1 304 -
10.0.251.206 - - [18/Dec/2007:11:40:15 +0800] GET /newjs/hexunjs.js HTTP/1.1 304 -
10.0.251.206 - - [18/Dec/2007:11:40:16 +0800] GET /newjs/hexunjs.js HTTP/1.1 304 -
10.0.250.217 - - [18/Dec/2007:11:40:16 +0800] GET /newjs/hexunjs.js HTTP/1.1 304 -
10.0.121.12 - - [18/Dec/2007:11:40:17 +0800] GET /newjs/hexunjs.js HTTP/1.1 304 -
10.0.251.206 - - [18/Dec/2007:11:40:17 +0800] GET /newjs/hexunjs.js HTTP/1.1 200 5794
10.0.251.206 - - [18/Dec/2007:11:40:18 +0800] GET /newjs/hexunjs.js HTTP/1.1 304 -
10.0.251.206 - - [18/Dec/2007:11:40:19 +0800] GET /newjs/hexunjs.js HTTP/1.1 304 -

I really don't know how to deal with this. Thank you for your help!
Is it a problem with apache or squid?