Re: [squid-users] Squid 'Waiting For...' Hanging

2009-10-05 Thread Morphine.



Matthew Morgan-3 wrote:
> 
> Henrik Nordstrom wrote:
>> Mon 2009-10-05 at 02:05 -0700, Morphine. wrote:
>>   
>>> Recently I've observed squid hanging.
>>> I've only noticed this on some forum websites such as
>>> http://forums.overclockers.com.au
>>> The page loads 100% (as far as I can observe) but still the page
>>> appears to be loading, displaying the messages "Waiting for url" or
>>> "Transferring data from" which never finish. (Hanging)
>>> 
>>> 
>>
>> Loads fine for me using Firefox and Squid. Admittedly in normal proxy
>> operation and not transparent interception but that should not make much
>> difference...
>>
>> Regards
>> Henrik
> 

My apologies, I've had (and resolved) hanging problems with squid before, so
I immediately suspected squid as the cause of this problem.
However, clearing my Firefox cache did the trick.

Thanks for your help. 

-- 
View this message in context: 
http://www.nabble.com/Squid-%27Waiting-For...%27-Hanging-tp25747379p25762875.html
Sent from the Squid - Users mailing list archive at Nabble.com.



[squid-users] What does --enable-ntlm-fail-open do?

2009-10-05 Thread Daniel Rose
Hello!

I've been hunting, but I can't find any extra info on the  
--enable-ntlm-fail-open configure argument.

What needs to be set up in squid.conf to enable this behaviour?

This has been asked before:
http://www.squid-cache.org/mail-archive/squid-users/200512/0328.html

But there was no answer.

./configure says

"A helper that fails one of the Authentication steps can allow squid to 
still authenticate the user."

Does this mean that Squid will try NTLM auth, but even if it fails it will
let the user through anyway?

I too desire this behaviour!


--
Daniel Rose
National Library of Australia


Re: [squid-users] Squid and PDF

2009-10-05 Thread Amos Jeffries
On Mon, 5 Oct 2009 10:23:20 -0700, Randall Fidler 
wrote:
> Hello,
> 
> I have squid up and running and the one issue which is causing
> headaches is the viewing of PDF files.  From sites which are in my
> 'approved' list, if I click on a PDF link, my browser (Firefox) will
> just hang and I eventually have to kill it.  If I do the same action
> without going through squid (same machine, same browser, same site,
> etc.) then the Acrobat plugin fires up and I can view the pdf without
> issue - so to me it's a squid related problem.
> 
> Is there some port which I need to allow?  If so I would think that
> Squid would give me a 'denied' error, not just cause the browser to
> hang up.
> 
> Ideas?

Adobe is known to send HTTP range requests out of order.
Squid does not handle these itself but forwards the request to the origin
server instead. With range_offset_limit set to fetch the whole file you might
experience hangs on large PDF files, which need to be re-downloaded multiple
times in order to fetch the various range sets Adobe requests.
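
For reference, a minimal squid.conf sketch of the directive involved (an
illustration, not taken from this thread; -1 removes the limit so Squid
fetches the whole object when a ranged request misses):

  range_offset_limit -1
  # Keep fetching aborted transfers so the full object can be cached:
  quick_abort_min -1 KB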

Amos



Re: [squid-users] ssl_bump and certificate for client

2009-10-05 Thread Amos Jeffries
On Mon, 05 Oct 2009 10:59:49 -0400, "Carsten Lührs" 
wrote:
> Hi,
> I configured ssl_bump as follows:
> 
> sslproxy_version 1
> ssl_bump allow all
> sslproxy_cert_error deny all
> always_direct allow all
> 
> http_port 3128 sslBump cert=/usr/local/squid/etc/cert.pem
> 
> My problem is that the client receives a certificate issued for the 
> squid, not
> for the original server (using the squid CA) - how could I solve this?
> 
> Thanks
> Carsten

This is how SSL works. It encrypts the channel between two IP addresses
(Client -> Server).

When you place Squid in the middle (Client->Squid->Server) the SSL
authentication must change so that it authenticates/encrypts the two
different IP connections separately (Client->Squid) and (Squid->Server).

SslBump does that, which is why even using it will not allow you to forge
HTTPS requests. To use SslBump you need control of the clients so they
accept the Squid CA. The solution you seek is to push out the CA that signed
the Squid certificate to the client browsers.
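
As an illustration only (not from this thread), one way to pre-trust such a
CA in a browser's NSS certificate store uses certutil; the nickname, trust
flags, database path and file name here are all assumptions:

  certutil -A -d "$HOME/.pki/nssdb" -n "Squid proxy CA" -t "C,," -i squidCA.pem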

Amos



Re: [squid-users] POST NONE://

2009-10-05 Thread Amos Jeffries
On Mon, 05 Oct 2009 14:30:06 +0200, Henrik Nordstrom
 wrote:
> Mon 2009-10-05 at 22:56 +1300, Amos Jeffries wrote:
> 
>> I'm not sure if that applies to this situation since it requires 
>> intermediate proxies to upgrade as well.
> 
> Of course.
> 
>> For the record, Chunked coding is in all current 3.x releases since 
>> 3.0.STABLE16.
> 
> That's just responses right?
> 
> This thread is about POST requests...
> 
> Regards
> Henrik

Both. Request support arrived just in time to get it for 3.0 during the
back-port
http://www.squid-cache.org/Versions/v3/3.0/changesets/b9025.patch

Amos


[squid-users] squid_ldap_group concurrency

2009-10-05 Thread vincent.blondel

Hello all,

Has anybody already got some experience with squid_ldap_group on squid
2.7.X? I am trying to find some info on what reasonable values I can
define for concurrency, and whether concurrency can also be used with
children ... let us say something like this:

external_acl_type name children=?? concurrency=?? ...

many thks
Vincent.





Re: [squid-users] Querying cache

2009-10-05 Thread Amos Jeffries
On Mon, 5 Oct 2009 16:33:10 -0400, Miguel Cruz  wrote:
> Hello all,
> 
> I would like to know if there is a way to query squid for the total
> amount of files that it has in its cache.
> 
> Reason is we are using squid in httpd_accel mode and if I do a wget on
> "/" I can get a listing of all the files that reside on the docroot
> and all the directories that are there but not of the files inside the
> directories.  So if I was to "massage" the index.html to clear the
> http tags I could get a list that I can count and use to do another
> wget into the directories I find but this seems "over-engineered".
> Instead of getting the required data in 1 connection I would have to
> connect multiple times.
> 
> This is part of my squid.conf file:
> 
> httpd_accel_host 10.x.x.x
> httpd_accel_port 80
> httpd_accel_single_host on
> httpd_accel_with_proxy off
> httpd_accel_uses_host_header off
> 
> Thanks in advance
> Miguel

To get the number of files in Squid's cache use SNMP, or the cachemgr
interface (squidclient mgr:info or cachemgr.cgi).
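
For example (a sketch; the exact label can vary between Squid versions),
the cachemgr info page reports the object count on its StoreEntries line:

  squidclient mgr:info | grep -i storeentries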

However, the problem you describe with the "/" URL providing a list of files
is not related to Squid at all. The "/" URL is requested from the web server
like any other. It's the same URL browsers fetch when only a domain name is
entered in the address bar.

The results you get back are created by the web server. You describe a
directory listing of files stored on the web server. It might be that none of
the files listed there are cacheable and stored in squid at all.


Please seriously consider upgrading your squid though. You will find any
release 2.6 or higher to be much better for reverse-proxy usage, in speed,
capability, and ease of configuration.


Amos


Re: [squid-users] Problem with options tproxy in squid 3.0

2009-10-05 Thread Amos Jeffries
On Mon, 5 Oct 2009 21:51:15 +0200, "Roman"  wrote:
> I use Debian 5.0 with kernel 2.6.31  compiled with tproxy
>  dmesg |grep TPROXY
> 
> I downloaded and installed iptables from git.balabit.hu/bazsi
> I use current squid (version squid-3.HEAD-20090929) with options
> '--enable-linux-netfilter'
> 
> I can't open web page from client.
> 
> The following error was encountered while trying to retrieve the URL: 
> http://www.whatismyip.com/
>  Connection to 72.233.89.199 failed.
>  The system returned: (110) Connection timed out
>  The remote host or network may be down. Please try the request again.
>  in squid log i see
> 
> 2009/10/02 01:39:32.709| PconnPool::key(www.whatismyip.com,80,(no 
> domain),xxx.xxx.xxx.xxx) is {www.whatismyip.com:80-xxx.xxx.xxx.xxx}
> 2009/10/02 01:39:32.709| PconnPool::pop: lookup for key 
> {www.whatismyip.com:80-xxx.xxx.xxx.xxx} failed.
> 2009/10/02 01:39:32.709| FilledChecklist.cc(162) ~ACLFilledChecklist: 
> ACLFilledChecklist destroyed 0xbfaf5d38
> 2009/10/02 01:39:32.709| ACLChecklist::~ACLChecklist: destroyed 0xbfaf5d38
> 
> What is the problem? Is it a problem in the kernel, iptables, or squid?
> Please help!


1) Why are you bringing up code issues in squid-users instead of squid-dev?
- particularly for alpha code releases.

2) Exactly why are you pulling from Balabit?
- they only have experimental code available.
- their current code is _very_ untested with new IPv6 support only having 2
testers so far.

3) have you followed up on all the possibilities mentioned at
http://wiki.squid-cache.org/Features/Tproxy4#Troubleshooting ?


4) Failure to find an existing persistent connection inside squid is not
unusual. It just means no connections are open yet.


Amos


Re: [squid-users] Querying cache

2009-10-05 Thread Ralf Hildebrandt
* Miguel Cruz :
> Hello all,
> 
> I would like to know if there is a way to query squid for the total
> amount of files that it has in its cache.

Yes, via SNMP

-- 
Ralf Hildebrandt
  Geschäftsbereich IT | Abteilung Netzwerk
  Charité - Universitätsmedizin Berlin
  Campus Benjamin Franklin
  Hindenburgdamm 30 | D-12203 Berlin
  Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
  ralf.hildebra...@charite.de | http://www.charite.de



[squid-users] Querying cache

2009-10-05 Thread Miguel Cruz
Hello all,

I would like to know if there is a way to query squid for the total
amount of files that it has in its cache.

Reason is we are using squid in httpd_accel mode and if I do a wget on
"/" I can get a listing of all the files that reside on the docroot
and all the directories that are there but not of the files inside the
directories.  So if I was to "massage" the index.html to clear the
http tags I could get a list that I can count and use to do another
wget into the directories I find but this seems "over-engineered".
Instead of getting the required data in 1 connection I would have to
connect multiple times.

This is part of my squid.conf file:

httpd_accel_host 10.x.x.x
httpd_accel_port 80
httpd_accel_single_host on
httpd_accel_with_proxy off
httpd_accel_uses_host_header off

Thanks in advance
Miguel


[squid-users] Problem with options tproxy in squid 3.0

2009-10-05 Thread Roman
I use Debian 5.0 with kernel 2.6.31  compiled with tproxy
 dmesg |grep TPROXY

I downloaded and installed iptables from git.balabit.hu/bazsi
I use current squid (version squid-3.HEAD-20090929) with options
'--enable-linux-netfilter'

I can't open web page from client.

The following error was encountered while trying to retrieve the URL: 
http://www.whatismyip.com/
 Connection to 72.233.89.199 failed.
 The system returned: (110) Connection timed out
 The remote host or network may be down. Please try the request again.
 in squid log i see

2009/10/02 01:39:32.709| PconnPool::key(www.whatismyip.com,80,(no 
domain),xxx.xxx.xxx.xxx) is {www.whatismyip.com:80-xxx.xxx.xxx.xxx}
2009/10/02 01:39:32.709| PconnPool::pop: lookup for key 
{www.whatismyip.com:80-xxx.xxx.xxx.xxx} failed.
2009/10/02 01:39:32.709| FilledChecklist.cc(162) ~ACLFilledChecklist: 
ACLFilledChecklist destroyed 0xbfaf5d38
2009/10/02 01:39:32.709| ACLChecklist::~ACLChecklist: destroyed 0xbfaf5d38

What is the problem? Is it a problem in the kernel, iptables, or squid? Please help!

Thanks
Roman


 






Re: [squid-users] reverse proxy - sporadic TCP_MISS/403

2009-10-05 Thread Michael Grimm

Dear Amos,

thank you for your fast help. The config works perfectly.

Kind regards
Michael


Amos Jeffries wrote:

You have a big problem.

You wanted a reverse proxy. But you configured something else very weird
instead.

Also, the bug in Squid-3 which allowed this configuration to work at all
has just been fixed.
  
You need to reconfigure your squid properly as a reverse proxy.


http://wiki.squid-cache.org/ConfigExamples/Reverse/BasicAccelerator

Note the comment at the top of the squid configuration section "This
configuration MUST appear at the top ..."

Assuming that the above was your whole config... Erase the contents of
squid.conf and replace with only this:

cache_mgr i...@mycompany.com
access_log /var/log/squid/access.log squid

https_port 443 accel
cert=/etc/ssl/reverse_proxy/customer.mycompany.com.cert
key=/etc/ssl/reverse_proxy/customer.mycompany.com.key
defaultsite=customer.mycompany.com options=NO_SSLv2

cache_peer 192.168.1.50 parent 8080 0 originserver no-query 
	name=tomcatapplication forcedomain=customer.mycompany.com


acl reverse_tomcatapplication dstdomain customer.mycompany.com

http_access allow reverse_tomcatapplication
http_access deny all

cache_peer_access tomcatapplication allow reverse_tomcatapplication
cache_peer_access tomcatapplication deny all

never_direct allow all


Amos

  




Re: [squid-users] range_offset_limit per domain

2009-10-05 Thread Matthew Morgan

Henrik Nordstrom wrote:

Mon 2009-09-28 at 17:55 -0400, Matthew Morgan wrote:
  

Is it possible to set range_offset_limit per domain?



Not today, but should not be too hard to add in the code.

If you know a little C programming then you are very welcome to give it
a try. Just join squid-dev list and ask for hints on where to start.

If you don't feel equipped for looking into the source then find someone
to do it for you. Or start a new feature page in the wiki hoping that
someone else will pick it up..

Regards
Henrik


  
I didn't think it sounded like it would be very difficult.  As long as 
no one minds it taking me a little while, I'd really like to give it a 
try.  I'll pop in on squid-dev in the next day or two and see what they say!


Thanks, Henrik.


[squid-users] Squid and PDF

2009-10-05 Thread Randall Fidler
Hello,

I have squid up and running and the one issue which is causing
headaches is the viewing of PDF files.  From sites which are in my
'approved' list, if I click on a PDF link, my browser (Firefox) will
just hang and I eventually have to kill it.  If I do the same action
without going through squid (same machine, same browser, same site,
etc.) then the Acrobat plugin fires up and I can view the pdf without
issue - so to me it's a squid related problem.

Is there some port which I need to allow?  If so I would think that
Squid would give me a 'denied' error, not just cause the browser to
hang up.

Ideas?

Regards,

Randall


[squid-users] squid counters appear to be wrapping on squid v2.6.18 (old I know)

2009-10-05 Thread Gavin McCullagh
Hi,

we're seeing something odd on squid v2.6.18-1ubuntu3.  I know this is an
old version and not recommended but I just thought I'd point it out to make
sure this has been fixed in a more recent version.

After some time running, a couple of squid's counters appear to be
wrapping, like signed 32-bit integers.  In particular:

  client_http.kbytes_out = -2112947050
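
For illustration (assuming a 32-bit signed counter that has wrapped once),
the real value can be recovered by adding 2^32:

  -2112947050 + 4294967296 = 2182020246 KB, i.e. roughly 2 TiB served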

We noticed this as we use munin, which queries the counters in this way and
ignores negative values.  The select_loops value is also negative.  If this
is fixed in v2.7 that's fair enough but I thought I'd mention it here in
case it isn't.

Gavin


gavi...@watcher:~$ nc localhost 8080
GET cache_object://localhost/counters HTTP/1.0
Accept: */*

HTTP/1.0 200 OK
Server: squid/2.6.STABLE18
Date: Mon, 05 Oct 2009 15:44:17 GMT
Content-Type: text/plain
Expires: Mon, 05 Oct 2009 15:44:17 GMT
Last-Modified: Mon, 05 Oct 2009 15:44:17 GMT
X-Cache: MISS from watcher.gcd.ie
X-Cache-Lookup: MISS from watcher.gcd.ie:8080
Via: 1.0 watcher.gcd.ie:8080 (squid/2.6.STABLE18)
Proxy-Connection: close

sample_time = 1254757456.12734 (Mon, 05 Oct 2009 15:44:16 GMT)
client_http.requests = 63518961
client_http.hits = 27155728
client_http.errors = 5191
client_http.kbytes_in = 77340031
client_http.kbytes_out = -2112947050
client_http.hit_kbytes_out = 261721929
server.all.requests = 36663719
server.all.errors = 0
server.all.kbytes_in = 1908075341
server.all.kbytes_out = 61829714
server.http.requests = 36233326
server.http.errors = 0
server.http.kbytes_in = 1901156005
server.http.kbytes_out = 59068791
server.ftp.requests = 28
server.ftp.errors = 0
server.ftp.kbytes_in = 941732
server.ftp.kbytes_out = 4
server.other.requests = 430365
server.other.errors = 0
server.other.kbytes_in = 5977603
server.other.kbytes_out = 2760918
icp.pkts_sent = 0
icp.pkts_recv = 0
icp.queries_sent = 0
icp.replies_sent = 0
icp.queries_recv = 0
icp.replies_recv = 0
icp.query_timeouts = 0
icp.replies_queued = 0
icp.kbytes_sent = 0
icp.kbytes_recv = 0
icp.q_kbytes_sent = 0
icp.r_kbytes_sent = 0
icp.q_kbytes_recv = 0
icp.r_kbytes_recv = 0
icp.times_used = 0
cd.times_used = 0
cd.msgs_sent = 36  
cd.msgs_recv = 36
cd.memory = 0
cd.local_memory = 6481
cd.kbytes_sent = 3
cd.kbytes_recv = 48
unlink.requests = 0
page_faults = 6
select_loops = -1656175576
cpu_time = 78423.85
wall_time = 1.478176
swap.outs = 12350417
swap.ins = 39255680
swap.files_cleaned = 1158
aborted_requests = 1300408



Re: [squid-users] Squid 'Waiting For...' Hanging

2009-10-05 Thread Matthew Morgan

Henrik Nordstrom wrote:

Mon 2009-10-05 at 02:05 -0700, Morphine. wrote:
  

Recently I've observed squid hanging.
I've only noticed this on some forum websites such as
http://forums.overclockers.com.au
The page loads 100% (as far as I can observe) but still the page appears to
be loading, displaying the messages "Waiting for url" or "Transferring data
from" which never finish. (Hanging)



Loads fine for me using Firefox and Squid. Admittedly in normal proxy
operation and not transparent interception but that should not make much
difference...

Regards
Henrik


  
I can confirm it works fine in transparent mode also with Firefox.  I 
browsed around the site for a few minutes, and every page load status 
went to "done".


[squid-users] Re: Reverse Proxy, sporadic TCP_MISS

2009-10-05 Thread tookers



Henrik Nordstrom-5 wrote:
> 
> Tue 2009-09-29 at 02:41 -0700, tookers wrote:
>> Hello all,
>> 
>> I'm running several Squid boxes as reverse proxies; the problem I'm seeing
>> is when there are a high number of connections in the region of 80,000
>> per
>> Squid at peak I'm getting 1,000's of TCP_MISS for the same URL hitting
>> the
>> back end servers, things do eventually sort themselves out. Is there any
>> way
>> to prevent such behaviour? I assumed with 'collapsed_forwarding on' it
>> would
>> only send a single request to the backend for new content? 
> 
> It does, but if that response is not cachable for some reason then all
> waiting clients will storm the server all at once..
> 
> 
> Regards
> Henrik
> 

Hi Henrik,
Thanks for your reply. I'm getting TCP_MISS/200 for these particular
requests so the file exists on the back-end, Squid seems unable to store the
object in cache (quite possible due to a lack of free fd's), or possibly due
to the high traffic volume. I've increased the number of fd's (from 100k to
150k), increased cache_mem from 512MB to 768MB and enabled cache_log to
check requests during busy peaks.
Is there any way to control the 'storm' of requests? I.e. Possibly force the
object to cache (regardless of pragma:no-cache etc) or have some sort of
timer / sleeper function to allow only a small number of requests, for a
particular request, to goto the backend?

Many thanks,
tookers
-- 
View this message in context: 
http://www.nabble.com/Reverse-Proxy%2C-sporadic-TCP_MISS-tp25659879p25752579.html
Sent from the Squid - Users mailing list archive at Nabble.com.



[squid-users] ssl_bump and certificate for client

2009-10-05 Thread Carsten Lührs

Hi,
I configured ssl_bump as follows:

sslproxy_version 1
ssl_bump allow all
sslproxy_cert_error deny all
always_direct allow all

http_port 3128 sslBump cert=/usr/local/squid/etc/cert.pem

My problem is that the client receives a certificate issued for the squid,
not for the original server (using the squid CA) - how could I solve this?

Thanks
Carsten


Re: [squid-users] not caching enough

2009-10-05 Thread ant2ne

Squid version 2.6. This is the apt-get version for Ubuntu 8.04. I think you
are right about the ignore-reload.

Here is my squid.conf that I will put into production at 3pm today.

http_port 3128
acl QUERY urlpath_regex cgi-bin \?
cache_mem 512 MB  # May need to set lower if I run low on RAM
maximum_object_size_in_memory 2048 KB  # May need to set lower if I run low on RAM
maximum_object_size 1 GB
cache_dir aufs /cache 50 256 256
redirect_rewrites_host_header off
cache_replacement_policy lru
acl all src all
acl localnet src 10.60.0.0/255.255.0.0
acl localhost src 127.0.0.1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/8
acl Safe_ports port 80 443 210 119 70 21 1025-65535
acl SSL_Ports port 443
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_Ports
http_access allow localnet
http_access allow localhost
http_access deny all 
icp_port 0
refresh_pattern \.jpg$ 3600 50% 60 #ignore-reload
refresh_pattern \.gif$ 3600 50% 60 #ignore-reload
refresh_pattern \.css$ 3600 50% 60 #ignore-reload
refresh_pattern \.js$ 3600 50% 60 #ignore-reload
refresh_pattern \.html$ 300 50% 10 #ignore-reload
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
visible_hostname AHSPX01





-- 
View this message in context: 
http://www.nabble.com/not-caching-enough-tp25530445p25752421.html
Sent from the Squid - Users mailing list archive at Nabble.com.



RE: [squid-users] SSL Reverse Proxy testing With Invalid Certificate, can it be done.

2009-10-05 Thread Dean Weimer
> -Original Message-
> From: Henrik Nordstrom [mailto:hen...@henriknordstrom.net]
> Sent: Monday, October 05, 2009 4:48 AM
> To: Dean Weimer
> Cc: squid-users@squid-cache.org
> Subject: Re: [squid-users] SSL Reverse Proxy testing With Invalid
> Certificate, can it be done.
> 
> Fri 2009-09-25 at 10:57 -0500, Dean Weimer wrote:
> 
> > 2009/09/25 11:38:07| SSL unknown certificate error 18 in...
> > 2009/09/25 11:38:07| fwdNegotiateSSL: Error negotiating SSL connection
> > on FD 15: error:14090086:SSL
> > routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed (1/-1/0)
> 
> This is your Squid trying to use SSL to connect to the requested
> server.
> Not related to the http_port certificate settings.
> 
> validation requirements on peer certificates are set in cache_peer.
> 
> Regards
> Henrik

I was running Squid 3.0.STABLE19 on the test system.  Here are the
configuration lines from the original test. At one point I had added
cert lines on the cache_peer before realizing that those were only for
use when certificate authentication was needed on the parent.  I can't
remember for sure if the log was copied from when I had those options on
or not; I still had an invalid certificate error after removing them, but
it may have been a different error number.

https_port 443 accel cert=/usr/local/squid/etc/certs/server.crt
key=/usr/local/squid/etc/certs/server.key defaultsite=mysite vhost

cache_peer 1.2.3.4 parent 443 0 no-query originserver ssl
sslflags=DONT_VERIFY_PEER,DONT_VERIFY_DOMAIN name=secure_mysite

My production server is a couple revisions behind, currently running
STABLE17, it will be updated to 19 this coming weekend.  I did not test
it with the fake certificate.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co


Re: [squid-users] logrotate in squid

2009-10-05 Thread espoire20



espoire20 wrote:
> 
> I use sarg to generate the reports, but my disk is getting full now, so
> I need to set up logrotate. That way I can rotate my logs every day and
> automatically delete the oldest logs. I should configure it to rotate my
> squid logs every week and keep the logs for more than four weeks, since
> I want to be able to produce monthly reports. Of course, I will adjust
> this according to the traffic on my proxy, so that I keep free disk
> space on my server.
> 
> Please, do you know how I can create the logrotate configuration?
> 
> many thanks 
> 

Thank you. Please, if you can, give me more detail, because I am not strong
in squid.

this my crontab

# run-parts
01 * * * * root run-parts /etc/cron.hourly
02 4 * * * root run-parts /etc/cron.daily
22 4 * * 0 root run-parts /etc/cron.weekly
42 4 1 * * root run-parts /etc/cron.monthly

many thanks 
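
For reference only (a minimal sketch, not from this thread; paths, schedule
and rotation count are assumptions to adapt), a logrotate stanza for Squid
could look like:

  /var/log/squid/*.log {
      weekly
      rotate 5
      compress
      missingok
      notifempty
      postrotate
          /usr/sbin/squid -k rotate
      endscript
  }
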
-- 
View this message in context: 
http://www.nabble.com/logrotate-in-squid-tp25728886p25750483.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] POST NONE://

2009-10-05 Thread Henrik Nordstrom
Mon 2009-10-05 at 22:56 +1300, Amos Jeffries wrote:

> I'm not sure if that applies to this situation since it requires 
> intermediate proxies to upgrade as well.

Of course.

> For the record, Chunked coding is in all current 3.x releases since 
> 3.0.STABLE16.

That's just responses right?

This thread is about POST requests...

Regards
Henrik



Re: [squid-users] Squid 'Waiting For...' Hanging

2009-10-05 Thread Henrik Nordstrom
Mon 2009-10-05 at 02:05 -0700, Morphine. wrote:
> Recently I've observed squid hanging.
> I've only noticed this on some forum websites such as
> http://forums.overclockers.com.au
> The page loads 100% (as far as I can observe) but still the page appears to
> be loading, displaying the messages "Waiting for url" or "Transferring data
> from" which never finish. (Hanging)

Loads fine for me using Firefox and Squid. Admittedly in normal proxy
operation and not transparent interception but that should not make much
difference...

Regards
Henrik



Re: [squid-users] problems

2009-10-05 Thread Henrik Nordstrom
Fri 2009-10-02 at 14:56 -0500, Al - Image Hosting Services wrote:

> This is where I ran into problems. Both https and ftp are filtered fine 
> when configured in the browser, but don't work when just pushed to the 
> proxy though the software. Since the software runs on the end users 
> computers, it seems like I should be able to make ftp and https work. Does 
> anyone have any suggestions on how to do this?

How does that software push the traffic to the proxy?

It needs to be done by forcing the proxy settings in the browsers. Doable
via a domain policy for IE at least...

Regards
Henrik



Re: [squid-users] secured authentication

2009-10-05 Thread Henrik Nordstrom
Tue 2009-09-29 at 21:28 -0500, David Boyer wrote:
> I've been using squid_ldap_auth (Squid 2.7, SLES 11) for basic
> authentication, and it wasn't terribly difficult to set up. What
> concerns me is the passing of credentials from the browser to Squid in
> plain text. When we use basic authentication anywhere else, the web
> site usually requires HTTPS. I'm not seeing an easy way to do that
> with Squid.

Squid can, via its https_port directive, but there are no known browsers
supporting SSL-encrypted proxy connections.

> We have a full Active Directory environment, and everyone using Squid has a 
> domain account. Our users use a combination of Firefox 3.x, IE, and Safari.

Then NTLM or Kerberos/Negotiate authentication should be a viable option
for you.

The other available option, Digest authentication, unfortunately cannot
integrate with Active Directory that easily...
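
For illustration (a minimal sketch, not from this thread; the helper path
and child count are assumptions), Negotiate authentication is wired up in
squid.conf along these lines:

  auth_param negotiate program /usr/lib/squid/squid_kerb_auth
  auth_param negotiate children 10
  acl authenticated proxy_auth REQUIRED
  http_access allow authenticated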

> What options are there for using authentication with Squid while also
> ensuring the credentials passed between the browser and Squid are
> encrypted? The stunnel approach would not be an option for us.

And neither is pushing the browser vendors to have support for SSL
encrypted proxy connections I suppose?

Regards
Henrik



Re: [squid-users] External Script for checks

2009-10-05 Thread Henrik Nordstrom
Fri 2009-10-02 at 11:42 +0200, Stefan Dengscherz wrote:

> i'm using 'external_acl_type' with a homebrew script to lookup remote
> user ids via the windows registry at the moment because NTLM and
> Kerberos did not work well in my environment.

Interesting. Can you provide more information on the script you wrote?

Regards
Henrik



Re: [squid-users] Purge tool in 'related software' not downloadable

2009-10-05 Thread Henrik Nordstrom
Fri 2009-10-02 at 14:09 +1300, Amos Jeffries wrote:

> > Is this still usable with squid 3.x?
> 
> I believe so. There have been no problem reports here to my knowledge.

There is a small patch required for 2.6 or later at
http://www.henriknordstrom.net/code/

> The cache storage systems have not changed format very much. Only an 
> addition to make large files the default. I would assume the tool had 
> that support already for people selecting the optional support in 2.x

Had to patch it a bit to work with 2.6 or later (LFS changes, and other
metadata additions).. have not tested with 3.x yet.

Regards
Henrik



Re: [squid-users] Squid and Intranet

2009-10-05 Thread Henrik Nordstrom
Thu 2009-10-01 at 23:01 +0200, - leer - wrote:
> Dear guys,
> 
> I have running Squid 2.7 under SUSE.
> And it works fine with a parent Squid in another network.
> But when I use the IP of my webserver, for example 192.168.0.1,
> I can't get the page, because Squid is trying to resolve the IP
> via the parent proxy.
> 
> How can I disable that for the local IP range?

See the always_direct directive.
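
For example (a sketch; the network below is an assumption matching the
poster's addressing, adapt as needed):

  acl local_servers dst 192.168.0.0/24
  always_direct allow local_servers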

Regards
Henrik



Re: [squid-users] External Script for checks

2009-10-05 Thread Henrik Nordstrom
Thu 2009-10-01 at 07:45 -0400, mic...@casa.co.cu wrote:

> I would like to make a script for my squid server that checks against  
> mysql whether the user is connected, compares against a file whether the  
> user exists in that list, takes the IP address that I assign via freeradius  
> (stored in mysql), and then squid allows Internet access.

This is done via external_acl_type in squid.conf.

Requirements:

You need a script that queries your mysql DB (or data extracted from
there) based on IP and returns the username authenticated at that IP.
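
A minimal sketch of the squid.conf side (the helper name and TTL are
assumptions; the helper reads the client IP and must answer "OK user=name"
or "ERR" per the external ACL helper protocol):

  external_acl_type radius_user ttl=60 %SRC /usr/local/bin/lookup_radius_user
  acl radius_users external radius_user
  http_access allow radius_users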

Regards
Henrik



Re: [squid-users] Caching is growing faster than releasing objects

2009-10-05 Thread Henrik Nordstrom
Wed 2009-09-30 at 08:28 -0500, Luis Daniel Lucio Quiroz wrote:
> Hi all,
> 
> Well, after implementing cache in a heavy environment (with about 5k users) 
> I'm seeing that our squid is not freeing enough objects; our 100GB disk 
> cache fills in 5 days.  I wonder if I misunderstood the refresh_pattern options.

No. refresh_pattern is not about freeing cache.

How is your cache_dir configured?

Regards
Henrik



Re: [squid-users] How can i leave this mail list

2009-10-05 Thread Amos Jeffries

Leonel Florín Selles wrote:

Could anybody tell me how I can leave this mailing list.



http://www.squid-cache.org/Support/mailing-lists.dyn

"To unsubscribe ..."

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE19
  Current Beta Squid 3.1.0.14


[squid-users] How can i leave this mail list

2009-10-05 Thread Leonel Florín Selles
Could anybody tell me how I can leave this mailing list.



Re: [squid-users] problems

2009-10-05 Thread Amos Jeffries

Al - Image Hosting Services wrote:

Hi,

I seem to have created a lot of problems for myself. We are using squid 
with custom written software to filter web content. Because the server 
is in one location and my users are in other locations and because of 
the large number of hours spent helping people setup their computers to 
use the proxy, I had software written to push everything on port 80, 
443, and 21 to the squid servers and to prevent people from changing the 
settings. This is where I ran into problems. Both https and ftp are 
filtered fine when configured in the browser, but don't work when just 
pushed to the proxy though the software. Since the software runs on the 
end users computers, it seems like I should be able to make ftp and 
https work. Does anyone have any suggestions on how to do this?


Best Regards,
Al


The problem you face is that both FTP and HTTPS are not HTTP. They 
require special wrapping protocol actions to take place in order to 
transfer them over HTTP.


FTP requires that the destination URL from the browser address bar be 
sent unhandled to the proxy. Unless the browser is explicitly configured 
to know about the proxy it will attempt to open native FTP connections 
itself.  To catch those you require an FTP proxy such as frox.


HTTPS requires a special CONNECT method to open a tunnel through the proxy, 
after which the native SSL wrappers can be sent down it. It is very tricky 
to do without affecting the SSL transport, but you might be able to catch 
the HTTPS and do the wrapping yourself.
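
For illustration, the tunnel setup a browser sends to an HTTP proxy looks
like this on the wire (host and port are example values):

  CONNECT www.example.com:443 HTTP/1.1
  Host: www.example.com:443

The proxy answers with a 200 "Connection established" response and from then
on simply relays the encrypted bytes.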



Or... you could use the WPAD/PAC requests sent by the browsers when they 
start up. That way you can send back a PAC file automatically configuring 
the browsers to use the proxy.
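
A minimal PAC file sketch (the proxy host and port are assumptions):

  function FindProxyForURL(url, host) {
      return "PROXY proxy.example.lan:3128; DIRECT";
  }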


Worst case, you might need to catch the browser WPAD requests, 
which fortunately are HTTP, and maybe control DHCP.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE19
  Current Beta Squid 3.1.0.14


Re: [squid-users] Re: Appending multiple domains for non-FQDN DNS resolution

2009-10-05 Thread Henrik Nordstrom
Wed 2009-09-30 at 15:35 +1200, dmor...@tycoflow.co.nz wrote:

> I have now built another Squid server based on 3.0 STABLE19 but am
> experiencing the same results.
> I can resolve all non-FQDN addresses perfectly (across our three internal
> domains) from the server command line yet Squid refuses to query DNS based
> on the multiple search domains specified in resolv.conf. I have yet to
> disable Squid’s internal DNS as I read it's far from the preferred setup in
> a modern install.
> 
> Any ideas ?

Have you enabled dns_defnames in squid.conf?
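
That is, in squid.conf:

  dns_defnames on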

Regards
Henrik



Re: [squid-users] Truncated requests in cache.log

2009-10-05 Thread Henrik Nordstrom
Tue 2009-09-29 at 09:52 -0700, dtinazzi wrote:

> Anyway, the only way to resolve this problem seems to be to update Squid,
> right?

Worth a try, but I doubt it will make a difference. What needs to be done
is to identify why the client and Squid get out of sync. Either the client
is sending bad data to Squid, or Squid is getting confused and reading
it wrongly.

Regards
Henrik



Re: [squid-users] Truncated requests in cache.log

2009-10-05 Thread Henrik Nordstrom
Thu 2009-09-24 at 09:30 -0700, dtinazzi wrote:

> You can see the request has its starting part truncated (all final 
> characters are mine...), which is probably why I get the "unsupported
> method" error and then "Invalid request", but I have these problems only
> for certain pages and not for others, so I can't understand the reason.

If you have identified when this happens then please collect a packet
trace of the request traffic so it can be analyzed in detail

  tcpdump -s 0 -w bad_request.pcap host ip.of.server and host 
ip.of.testing.client

Then send that bad_request.pcap trace to me at
ftp://ftp.squid-cache.se/incoming/ and drop me an email notifying me
it's there, or open it in wireshark and try to analyze the http traffic
to see if you can understand what goes wrong..

Regards
Henrik



Re: [squid-users] squid.conf and Squid 2.6 vs. Squid 2.7

2009-10-05 Thread Henrik Nordstrom
Tue 2009-09-29 at 00:10 -0400, Michael Lenaghan wrote:

> I've had a very difficult time finding good docs for vhost, vport and
> defaultsite. I've looked and I've searched in many places, but I
> haven't found anything that would help me explain *why* this change
> worked. Indeed, the bits I did find made me think that perhaps you
> don't need defaultsite when you're using vhost--but I'm not even sure
> about that!

vport is a little overloaded and serves two purposes. It controls the
port number of the reconstructed URL, and it may be used to enable IP-based
host reconstruction as a fallback when there is no Host header (or when
vhost is not enabled).

You need at least one of vhost, vport or defaultsite enabled.

Basically

1. If vhost is enabled then use the Host header.

2. If vhost is not enabled or if there is no host header then use
defaultsite if set.

3. If none of the above and vport is set without argument then use the
local IP address as hostname.

4. If the above do not contain a port number then vport=NN is used as
port number, or if vport is not set to a static port number then the
local http_port number.
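
For illustration (example hostnames and ports, not from this thread), the
three accelerator variants look like:

  http_port 80 accel vhost                          # URL host from Host header
  http_port 80 accel defaultsite=www.example.com    # fixed fallback site
  http_port 8080 accel vhost vport                  # also reconstruct the port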

> (The 2.7 change notes say that for http_port "Accelerator mode options
> cleaned up (accel, defaultsite, vport, vhost and combinations
> thereof)". Is the difference in behaviour here related to that
> clean-up?)

Yes. Was even more odd before..

Regards
Henrik



Re: [squid-users] POST NONE://

2009-10-05 Thread Amos Jeffries

Henrik Nordstrom wrote:

Mon 2009-09-28 at 12:23 +0400, Mario Remy Almeida wrote:

Hi Amos,

Thanks for that, My problem is solved.

Is there any way to bypass such problems? I mean, for a known source IP, if
the HTTP headers are not set, can it still be passed through?


There is preliminary support for chunked encoding of requests in 3.1 I
think. (The other alternative to sending Content-Length in requests with
a request body.)

Regards
Henrik



I'm not sure if that applies to this situation since it requires 
intermediate proxies to upgrade as well.


For the record, Chunked coding is in all current 3.x releases since 
3.0.STABLE16.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE19
  Current Beta Squid 3.1.0.14


RE: [squid-users] SSL Reverse Proxy testing With Invalid Certificate, can it be done.

2009-10-05 Thread Henrik Nordstrom
Tue 2009-09-29 at 07:54 -0500, Dean Weimer wrote:

> I didn't see that one, though I have the real certificate now and
> everything is working with it.  I figure the sslflags on the cache peer
> settings should accomplish the same thing, but they didn't seem to make
> a difference whether I included them or not.

It should.

Which versions of Squid are you running?

Regards
Henrik



Re: [squid-users] SSL Reverse Proxy testing With Invalid Certificate, can it be done.

2009-10-05 Thread Henrik Nordstrom
Fri 2009-09-25 at 10:57 -0500, Dean Weimer wrote:

> 2009/09/25 11:38:07| SSL unknown certificate error 18 in...
> 2009/09/25 11:38:07| fwdNegotiateSSL: Error negotiating SSL connection on FD 
> 15: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate 
> verify failed (1/-1/0)

This is your Squid trying to use SSL to connect to the requested server.
Not related to the http_port certificate settings.

Validation requirements on peer certificates are set in cache_peer.

Regards
Henrik



Re: [squid-users] Reverse Proxy, sporadic TCP_MISS

2009-10-05 Thread Henrik Nordstrom
Tue 2009-09-29 at 02:41 -0700, tookers wrote:
> Hello all,
> 
> I'm running several Squid boxes as reverse proxies; the problem I'm seeing
> is when there are a high number of connections in the region of 80,000 per
> Squid at peak I'm getting 1,000's of TCP_MISS for the same URL hitting the
> back end servers, things do eventually sort themselves out. Is there any way
> to prevent such behaviour? I assumed with 'collapsed_forwarding on' it would
> only send a single request to the backend for new content? 

It does, but if that response is not cachable for some reason then all
waiting clients will storm the server all at once..


Regards
Henrik



Re: [squid-users] squid vport

2009-10-05 Thread Henrik Nordstrom
Tue 2009-09-29 at 15:41 +0800, wangwen wrote:

> alter HTTP_Port as follow:
> http_port 192.168.0.164:88 accel vhost defaultsite=192.168.24.198
> When Clients access http://192.168.0.164:88/rdims/index.jsp
> HTTP request header which Squid sent to backend server is:
> 
> GET /rdims/index.jsp HTTP/1.0
> Accept: */*
> Accept-Language: zh-cn
> UA-CPU: x86
> Accept-Encoding: gzip, deflate
> User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; SLCC1;.NETCLR
> 2.0.50727; InfoPath.1; .NET CLR 3.5.30729; .NET CLR 3.0.30618)
> Host: 192.168.24.198
> Via: 1.1 szrd.com:88 (squid/2.6.STABLE21)
> X-Forwarded-For: 192.168.12.48
>  Cache-Control: max-age=259200
> Connection: keep-alive
> 
> Now I did not use vport.
> Above you said that Squid checks the URL. Finds /rdims/index.jsp.
> ... checks the Host: header. Finds 192.168.0.164:88, making
> URL=http://192.168.0.164:88/rdims/index.jsp

Port management in accelerator mode is a little odd.. use of vport
(without arguments) is recommended if you need to use ports other than
the default (80/443).
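
Applied to the poster's setup, that recommendation would read (a sketch
based on the config quoted above):

  http_port 192.168.0.164:88 accel vhost vport defaultsite=192.168.24.198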

Regards
Henrik



Re: [squid-users] range_offset_limit per domain

2009-10-05 Thread Henrik Nordstrom
Mon 2009-09-28 at 17:55 -0400, Matthew Morgan wrote:
> Is it possible to set range_offset_limit per domain?

Not today, but should not be too hard to add in the code.

If you know a little C programming then you are very welcome to give it
a try. Just join squid-dev list and ask for hints on where to start.

If you don't feel equipped for looking into the source then find someone
to do it for you. Or start a new feature page in the wiki hoping that
someone else will pick it up..

Regards
Henrik



Re: [squid-users] POST NONE://

2009-10-05 Thread Henrik Nordstrom
Mon 2009-09-28 at 12:23 +0400, Mario Remy Almeida wrote:
> Hi Amos,
> 
> Thanks for that, My problem is solved.
> 
> Is there any way to bypass such problems? I mean, for a known source IP, if
> the HTTP headers are not set, can it still be passed through?

There is preliminary support for chunked encoding of requests in 3.1 I
think. (The other alternative to sending Content-Length in requests with
a request body.)

Regards
Henrik





Re: [squid-users] Too many ldap tries

2009-10-05 Thread Henrik Nordstrom
Fri 2009-09-25 at 17:40 -0500, Luis Daniel Lucio Quiroz wrote:
> I don't know the usernames users try.  I just wonder if there is a way to 
> tell squid to ignore usernames that don't exist.
> 
> Maybe an external ACL with a 2-day cache?

Unfortunately not. Authentication has to be passed before the ACLs are
used.

Regards
Henrik



Re: [squid-users] Too many ldap tries

2009-10-05 Thread Henrik Nordstrom
Fri 2009-09-25 at 17:40 -0500, Luis Daniel Lucio Quiroz wrote:

> I don't know the usernames users try.  I just wonder if there is a way to 
> tell squid to ignore usernames that don't exist.

access.log should contain the user info. Look for TCP_DENIED/407
responses with a username.

Regards
Henrik



Re: [squid-users] Strange parent-children disconnection

2009-10-05 Thread Henrik Nordstrom
Fri 2009-09-25 at 16:30 -0500, Luis Daniel Lucio Quiroz wrote:
> Hi,
> 
> I have a squid with some parents.  Suddenly I'm having:
> 2009/09/25 16:09:03| TCP connection to 10.10.50.233/3228 failed   
>
> 2009/09/25 16:09:03| TCP connection to 10.10.50.234/3228 failed   
>
> 2009/09/25 16:09:03| TCP connection to 10.10.50.235/3228 failed

Looks like a networking issue of some kind.

- Router/switch failing
- Local connection table full
- Out of local sockets

Squid uses persistent connections by default, but even with persistent
connections the connections have to be reopened fairly frequently.

Regards
Henrik



[squid-users] Https traffic

2009-10-05 Thread Ivan . Galli
Hi, 
my company is going to buy the Websense web security suite. 
It seems to be able to decrypt and check contents in the SSL tunnel. 
Is it really important to do this to prevent malicious code or dangerous 
threats?

Thanks and regards.

Ivan

On Wed, 30 Sep 2009 14:58:08 +0200, Ivan.Galli_at_aciglobal.it wrote: 
> Hi, I have a question about https traffic content. 
> Is there some way to check what passes through the ssl tunnel? 
> Can squidguard or any other programs help me? 

The 'S' in HTTPS means Secure or SSL encrypted. 
Why do you want to do this? 

It depends on the type of service environment you are working with... 

* ISP-like, where 'random' people use the proxy? 
  - Don't bother. This is a one-way road to serious trouble. 

* Reverse-proxy, where you own or manage the HTTPS website itself? 
  - Use https_port and decrypt as things enter Squid. Re-encrypt to the 
    peer if needed. 

* Enterprise setup, where you have full control of the workstation 
  configuration? 
  - Use Squid-3.1 and SslBump. Push out settings to all workstations to 
    trust the local proxy keys (required). 

Amos 

Ivan 


[squid-users] Squid 'Waiting For...' Hanging

2009-10-05 Thread Morphine.

Recently I've observed squid hanging.
I've only noticed this on some forum websites such as
http://forums.overclockers.com.au
The page loads 100% (as far as I can observe) but still the page appears to
be loading, displaying the messages "Waiting for url" or "Transferring data
from" which never finish. (Hanging)


This is my squid.conf:

http_port 192.168.1.10:3128 transparent
forwarded_for off

access_log /var/log/squid/access.log squid
hosts_file /etc/hosts
coredump_dir /var/spool/squid

acl all src all
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl SSL_ports port 443 # https
acl SSL_ports port 563 # snews
acl SSL_ports port 873 # rsync
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 631 # cups
acl Safe_ports port 873 # rsync
acl Safe_ports port 901 # SWAT
acl purge method PURGE
acl CONNECT method CONNECT
acl apache rep_header Server ^Apache
acl Gleeson.Lan src 192.168.1.0/255.255.255.0

http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow Gleeson.Lan
http_access deny all
broken_vary_encoding allow apache


server_http11 on
icp_access allow localnet
icp_access deny all
hierarchy_stoplist cgi-bin ?

cache_mem 64 MB
maximum_object_size_in_memory 512 KB
maximum_object_size 64 MB

refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern (Release|Package(.gz)*)$  0    20%     2880
refresh_pattern .               0       20%     4320



 
-- 
View this message in context: 
http://www.nabble.com/Squid-%27Waiting-For...%27-Hanging-tp25747379p25747379.html
Sent from the Squid - Users mailing list archive at Nabble.com.



[squid-users] 'Waiting For...'

2009-10-05 Thread Morphine.


-- 
View this message in context: 
http://www.nabble.com/%27Waiting-For...%27-tp25747264p25747264.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] squid_kerb_auth Backup-Auth server?

2009-10-05 Thread Mrvka Andreas
Thanks for response.

I will try it.

But as Markus mentioned before, authentication doesn't need any configured 
KDCs because it looks into AD - that didn't help for me, maybe because of 
the single kdc entry in the realm section you mentioned below.

I hope I find time to test both scenarios.

Regards
Andrew


Am Freitag, 2. Oktober 2009 22:26:19 schrieb andrew:
> Mrvka Andreas wrote:
> > Hi list,
> >
> > does anybody know if there is any change to define a backup kerberos
> > authentication server?
> >
> > Do I have to set anything in krb5.conf to support more than one AD
> > server?
> >
> > If I want to reboot the kerberos server squid should still be able to
> > authenticate.
> >
> > Are there any hints?
> >
> > Regards
> > Andrew
> 
> Try several "kdc" lines in the /etc/krb5.conf file.
> Like this
> 
> [realms]
> DOMAIN.BLA = {
> kdc = kerbserver1.domain.bla
> kdc = kerbserver2.domain.bla
> 
> }
> 
> 
> HTH,
> 
> Andrew
> 


Re: [squid-users] Not able to access Thunderbird from a linux client through squid

2009-10-05 Thread Matus UHLAR - fantomas
> >> I am using squid 2.6.STABLE18 on Ubuntu 8.04 server.
> >> I have configured squid as a very basic proxy and my squid.conf is below.
> >> I am not able to access thunderbird email through this proxy
> >> configuration. I am using thunderbird from an Ubuntu client; I am
> >> able to access the internet using the Mozilla Firefox browser, but not
> >> thunderbird. How can I get this working?
> >>
> >> The thunderbird client uses ports 110 and 25 to access emails and I
> >> have enabled them here.

> On Fri, Sep 18, 2009 at 12:17 PM, Amos Jeffries  wrote:
> > Thunderbird does not use HTTP proxy for email fetching.
> >
> > It will only use HTTP proxy settings for fetching HTML content images,
> > videos and virus code which are embeded.
> >
> > The emails themselves are fetched via native POP3 or IMAP protocols.

On 29.09.09 12:22, Avinash Rao wrote:
> I understand, but why isn't it working? If the machine has a direct
> connection to the internet (modem connected to the machine) thunderbird
> works, but if it has to go through the proxy it doesn't work.

He just said it. Squid is an HTTP proxy and cannot be used as a proxy for
POP/IMAP/SMTP protocols. You must connect to those services directly, not
through the proxy. 
-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Due to unexpected conditions Windows 2000 will be released
in first quarter of year 1901


Re: [squid-users] Managing clusters of siblings (squid2.7)

2009-10-05 Thread Matus UHLAR - fantomas
On 28.09.09 15:04, Chris Hostetter wrote:
> My company currently runs several "clusters" of application servers 
> behind load balancers, which are each in turn sitting behind a "cluster" 
> of squid machines configured as accelerators. each squid cluster is then 
> sitting behind a load balancer that is hit by our clients.
...
> Our operations team is pretty adamant about software/configs deployed to  
> boxes in a clustering needing to be the same for every box in the 
> cluster. The goal is understandable: they don't want to need custom 
> install steps for every individual machine.  So while my dev setup of a 5 
> machine squid cluster each with 4 distinct "cache_peer ... sibling" lines 
> works great so far, i can't deploy a unique squid.conf for each machine 
> in a cluster.
...
> is there any easy way to reuse the same cache_peer config options on  
> multiple instances, but keep squid smart enough that it doesn't bother  
> trying to peer with itself?

We have a similar problem for forward proxies. We use an /etc/hosts table
that contains local and remote IPs, different on each host (not only for
squid), so for squid I could just set up:

http_port proxy.example.com:3128
cache_peer sibling1.example.com
cache_peer sibling2.example.com
visible_hostname proxy.example.com

The only problem was unique_hostname, which is (de facto) taken from
visible_hostname, so I've filed a bug report:
http://www.squid-cache.org/bugs/show_bug.cgi?id=2654

For now we include a small file containing only the unique_hostname setting.

-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
WinError #9: Out of error messages.