RE: [squid-users] any work arounds for bug 2176

2009-12-07 Thread Bill Allison
As the reporter of this bug, apologies Amos for not responding promptly, and 
thanks Brett for doing so - my excuse is pressure of work. It's particularly on 
my conscience because we so badly need this fix. Today I have a test instance I 
can use, and I guess the next step, and my contribution, is to apply the patch, 
tcpdump the result, and report back (unless Brett gets there first ;-) )

Bill A.

-----Original Message-----
From: Brett Lymn [mailto:bl...@baesystems.com.au] 
Sent: 07 December 2009 01:39
To: Amos Jeffries
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] any work arounds for bug 2176

On Wed, Dec 02, 2009 at 07:22:57PM +1300, Amos Jeffries wrote:
 
 Sorry. I attached it to the bug report.
 

I manually applied the patch - I couldn't be bothered with patch(1) for a
simple #if removal.  The symptoms have changed.  We no longer get an
auth pop-up, but at the end of the upload the browser displays:

Bad Request (Invalid Verb)

and the document is not shown in the list.  So, the patch made a
difference, but something else is still amiss.

-- 
Brett Lymn



[squid-users] TCP_DENIED when requesting IP as URL over SSL using squid proxy server.

2009-12-07 Thread kevin band
Hi,

I'm hoping somebody can help me here, because I'm at a loss about what
to do next.

Basically we have squid running as a proxy server to restrict access
to just those sites which we've included in our ACLs.
I have noticed recently that it isn't handling HTTPS requests properly
if the URL contains an IP address instead of a domain name.

The reason this is a particular problem is that although the users can
connect to the page using the domain name, something within that
domain is then forwarding requests to the same web-server using its IP
address.
I'm sure I have my ACLs set up correctly, because squid will forward
the request using either URL if I send the requests using HTTP.  It
then times out on the web-server, because it only allows https, but at
least the request is being forwarded to the web-server rather than
being denied in squid.

Here's an extract from the logs that might explain it better:

158.41.4.44 - - [04/Dec/2009:15:56:47 +0000] GET
http://stpaccess.marksandspencer.com/ HTTP/1.1 504 1024 TCP_MISS:NONE
158.41.4.44 - - [04/Dec/2009:15:57:02 +0000] CONNECT
stpaccess.marksandspencer.com:443 HTTP/1.0 200 7783 TCP_MISS:DIRECT
158.41.4.44 - - [04/Dec/2009:16:01:53 +0000] GET
http://63.130.82.113/Citrix/MetaFrameXP/default/login.asp HTTP/1.1
504 1064 TCP_MISS:NONE
158.41.4.44 - - [04/Dec/2009:16:03:13 +0000] CONNECT
63.130.82.113:443 HTTP/1.0 403 980 TCP_DENIED:NONE


And config extracts:

acl SSL_ports port 443 563 444
acl Safe_ports port 80 8002 23142 5481 5181 5281 5381 5481 5581
5400 5500   # http
acl Safe_ports port 23142   # OPEL project
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 444 563 # https, snews

acl CONNECT method CONNECT

acl regex_ms dstdom_regex   -i /home/security/regex_marksandspencer.txt
acl urlregex_mands url_regex -i
/home/security/regex_marksandspencer_ip.txt
acl mands_allowed_nets  src  /home/security/mands_allowed_nets.txt

http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

http_access allow regex_ms  mands_allowed_nets
http_access allow urlregex_mands mands_allowed_nets
http_access deny all

There are actually a lot more ACLs than this, but these are the only
ones I think are relevant.

relevant extracts from files linked to ACLs:
  regex_marksandspencer.txt
  .*marksandspencer.*com

  regex_marksandspencer_ip.txt
  .*.63.130.82.113


Thanks for any help.

Kevin,


Re: [squid-users] any work arounds for bug 2176

2009-12-07 Thread Amos Jeffries

Brett Lymn wrote:

On Wed, Dec 02, 2009 at 07:22:57PM +1300, Amos Jeffries wrote:

Sorry. I attached it to the bug report.



I manually applied the patch - I couldn't be bothered with patch for a
simple #if removal.  The symptoms have changed.  We no longer get an
auth pop up but at the end of the upload the browser displays:

Bad Request (Invalid Verb)

and the document is not shown in the list.  So, the patch made a
difference but something else is amiss still.



Strange. I've never seen that one before.

I think another trace of the request-reply sequence is needed, to see if 
there is anything different now and what.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE20
  Current Beta Squid 3.1.0.15


Re: [squid-users] TCP_DENIED when requesting IP as URL over SSL using squid proxy server.

2009-12-07 Thread Amos Jeffries

kevin band wrote:

Hi,

I'm hoping somebody can help me here, because I'm at a loss about what
to do next.

Basically we have squid running as a proxy server to restrict access
to just those sites which we've included in our ACLs.
I have noticed recently that it isn't handling HTTPS requests properly
if the URL contains an IP address instead of a domain name.

The reason this is a particular problem is that although the users can
connect to the page using the domain name, something within that
domain is then forwarding requests to the same web-server using its IP
address.
I'm sure I have my ACLs set up correctly, because squid will forward
the request using either URL if I send the requests using HTTP.  It
then times out on the web-server, because it only allows https, but at
least the request is being forwarded to the web-server rather than
being denied in squid.


The remote web server(s) is rejecting the connections. Probably because 
the SSL certificates require a domain name as part of their 
authentication validation.


It's probably a broken client browser, or maybe the website itself 
sending funky page URLs with the raw-IP inside. If you care, you need to 
find out which and complain to whoever made the broken bits. Squid is 
just an innocent middleman here.




Here's an extract from the logs that might explain it better:

158.41.4.44 - - [04/Dec/2009:15:56:47 +0000] GET
http://stpaccess.marksandspencer.com/ HTTP/1.1 504 1024 TCP_MISS:NONE
158.41.4.44 - - [04/Dec/2009:15:57:02 +0000] CONNECT
stpaccess.marksandspencer.com:443 HTTP/1.0 200 7783 TCP_MISS:DIRECT
158.41.4.44 - - [04/Dec/2009:16:01:53 +0000] GET
http://63.130.82.113/Citrix/MetaFrameXP/default/login.asp HTTP/1.1
504 1064 TCP_MISS:NONE
158.41.4.44 - - [04/Dec/2009:16:03:13 +0000] CONNECT
63.130.82.113:443 HTTP/1.0 403 980 TCP_DENIED:NONE


And config extracts:

acl SSL_ports port 443 563 444
acl Safe_ports port 80 8002 23142 5481 5181 5281 5381 5481 5581
5400 5500   # http
acl Safe_ports port 23142   # OPEL project
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 444 563 # https, snews

acl CONNECT method CONNECT

acl regex_ms dstdom_regex   -i /home/security/regex_marksandspencer.txt
acl urlregex_mands url_regex -i
/home/security/regex_marksandspencer_ip.txt
acl mands_allowed_nets  src  /home/security/mands_allowed_nets.txt

http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

http_access allow regex_ms  mands_allowed_nets
http_access allow urlregex_mands mands_allowed_nets
http_access deny all

There are actually a lot more ACLs than this, but these are the only
ones I think are relevant.

relevant extracts from files linked to ACLs:
  regex_marksandspencer.txt
  .*marksandspencer.*com

  regex_marksandspencer_ip.txt
  .*.63.130.82.113


Thanks for any help.

Kevin,


Kevin, meet dstdomain:

  acl markandspencer dstdomain .marksandspencer.com 63.130.82.113
  http_access allow markandspencer mands_allowed_nets

10x or more faster than regex. It matches marksandspencer.com, all 
sub-domains, and the raw-IP address form.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE20
  Current Beta Squid 3.1.0.15


Re: [squid-users] Squid at 100% CPU with 10 minutes period

2009-12-07 Thread Amos Jeffries

fedorischev wrote:

In a message of Friday 04 December 2009 23:51:28, Guy Bashkansky wrote:

Hi,

The problem: on a certain origin content, Squid reaches 100% CPU
periodically, every 10 minutes, so the cache service suffers.

Any clues where to look?  Maybe this problem and its solution are known?

The CPU load pattern is something like this, minute-by-minute:
40%, 40%, 60%, 80%, 99%, 99%, 99%, 80%, 60%, 40%, repeat.

Thanks,
Guy


Several days ago I found that Squid 3.0.STABLE16 eats CPU time on HTTP CONNECT 
requests while the request is hitting a delay pool. I'm looking for the spare time 
to post more debugging info on the list.


WBR.


Please try 3.0.STABLE20 and confirm that the problem still exists.
There are a few hang and infinite loop errors resolved since *16. Some 
rather nasty remote DDoS security issues as well.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE20
  Current Beta Squid 3.1.0.15


Re: [squid-users] Squid at 100% CPU with 10 minutes period

2009-12-07 Thread fedorischev
In a message of Monday 07 December 2009 13:00:22, Amos Jeffries wrote:
 fedorischev wrote:
  Several days ago I found that Squid 3.0.STABLE16 eats CPU time on HTTP
  CONNECT requests while the request is hitting a delay pool. I'm looking for
  the spare time to post more debugging info on the list.
 
  WBR.

 Please try 3.0.STABLE20 and confirm that the problem still exists.
 There are a few hang and infinite loop errors resolved since *16. Some
 rather nasty remote DDoS security issues as well.

 Amos

OK, thank you for the tip. I'll try 3.0.STABLE20 and post the results on the 
list immediately. Now I'm looking for Linux HTTP downloader software 
that can use CONNECT - for squid testing purposes. Unfortunately - no 
results as yet  :)

WBR.


[squid-users] Could this be a potential problem? Squid stops working and requires restart to work

2009-12-07 Thread Asim Ahmed @ Folio3

Hi,

I found this in cache.log when I restarted squid after a halt!

CPU Usage: 79.074 seconds = 48.851 user + 30.223 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
   total space in arena:    7452 KB
   Ordinary blocks:         7363 KB    285 blks
   Small blocks:               0 KB      1 blks
   Holding blocks:         14752 KB     94 blks
   Free Small blocks:          0 KB
   Free Ordinary blocks:      88 KB
   Total in use:           22115 KB  297%
   Total free:                88 KB    1%

--

Regards,

Asim Ahmed Khan
IT Manager,
Folio3 (Pvt.) Ltd. www.folio3.com
Direct: 92-21-4323721-4 Ext 110
Email: aah...@folio3.com



Re: [squid-users] Squid at 100% CPU with 10 minutes period

2009-12-07 Thread Amos Jeffries

fedorischev wrote:

In a message of Monday 07 December 2009 13:00:22, Amos Jeffries wrote:

fedorischev wrote:

Several days ago I found that Squid 3.0.STABLE16 eats CPU time on HTTP
CONNECT requests while the request is hitting a delay pool. I'm looking for the
spare time to post more debugging info on the list.

WBR.

Please try 3.0.STABLE20 and confirm that the problem still exists.
There are a few hang and infinite loop errors resolved since *16. Some
rather nasty remote DDoS security issues as well.

Amos


OK, thank you for the tip. I'll try 3.0.STABLE20 and post the results on the 
list immediately. Now I'm looking for Linux HTTP downloader software 
that can use CONNECT - for squid testing purposes. Unfortunately - no 
results as yet  :)


WBR.


I have not tested this, but from a quick check of the code it should work:

  squidclient -m CONNECT -P file host:port

If you create a fake HTTP request and store the request headers in 
"file", the above should send a CONNECT request for host:port to Squid, 
then pass the contents of "file" through the tunnel the same as a 
downloader would.


The only difference from a regular CONNECT is that squidclient will add 
a Content-Length: header, which does not usually go on CONNECT requests due 
to their unpredictable nature.
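
For example, a sketch only - the file name and target host here are made 
up, and note that most configurations deny CONNECT to non-SSL ports 
(http_access deny CONNECT !SSL_ports), so testing against port 80 may 
need a temporary ACL exception:

  # fake.txt -- a plain HTTP request to push through the tunnel;
  # the blank line terminating the headers is required
  GET / HTTP/1.1
  Host: example.com
  Connection: close

  # ask the proxy for CONNECT example.com:80, then relay fake.txt
  squidclient -h 127.0.0.1 -p 3128 -m CONNECT -P fake.txt example.com:80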


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE20
  Current Beta Squid 3.1.0.15


[squid-users] problem attachments

2009-12-07 Thread espoire20

Hi

I have a small problem. I have a proxy server installed, and I configured squid;
it's working well.
The issue I have now is that when I try to add an attachment file to an email
on Gmail or Yahoo, I can't add it. When I take the proxy out of the browser,
I can add the attachment file.

If someone can help me I will be thankful.

many thanks



[squid-users] Squid Service on windows Starts and stops automatically

2009-12-07 Thread Preetham N.
Hi,
I have installed 2.7 on a Windows 2008 (x86) server. It installed fine,
and I guess the conf is fine too. But when I try to start the service,
it says that the service started and stopped automatically; it also
says that a service with no job stops automatically.
Can someone help?

--
Regards,
Preetham N.


Re: [squid-users] Squid Service on windows Starts and stops automatically

2009-12-07 Thread Kinkie
On Mon, Dec 7, 2009 at 1:51 PM, Preetham N. preetha...@gmail.com wrote:
 Hi,
 I have installed 2.7 on a Windows 2008 (x86) server. It installed fine,
 and I guess the conf is fine too. But when I try to start the service,
 it says that the service started and stopped automatically; it also
 says that a service with no job stops automatically.
 Can someone help?

What's in the cache.log file?


-- 
/kinkie


Re: [squid-users] COSS cache_dir size exceeds largest offset at maximum file size

2009-12-07 Thread Jason Healy
On Dec 7, 2009, at 1:37 AM, Amos Jeffries wrote:

 For some reason the safety check that is catching you uses < instead of <=.
 I'm not sure why. If you want to experiment you could change it manually and 
 rebuild. Around line 864 of src/fs/coss/store_dir_coss.c.

It looks like I'm off by more than just a single unit:

2009/12/05 12:16:00| COSS largest file offset = 4194296 KB
2009/12/05 12:16:00| COSS cache_dir size = 134217728 KB

The largest file I'm able to use is 4095 MB, instead of the 131072 MB 
requested.  Am I smacking up against some architecture-specific constant?

Possibly related: I'm just using the standard Debian build process, so maybe it 
isn't guessing everything correctly.  I want a 32-bit build with a maximum 
address size of 4GB and largefile support.  Should I be explicitly passing in 
something like:

  --with-build-environment=POSIX_V6_ILP32_OFFBIG

Thanks,

Jason

--
Jason Healy|jhe...@logn.net|   http://www.logn.net/






Re: [squid-users] Squid as reverse proxy for Microsoft Office Communications Server 2007 R2 ?

2009-12-07 Thread Claudio Prono
No one with any hint? Is it possible? :(

Claudio Prono wrote:
 Hi to all,

 Has anyone used squid as a reverse proxy for Office
 Communications Server 2007 R2?

 I need to know if it works well, or if it doesn't.

 I must use it for a company of 160+ employees, so the solution must be
 stable, with excellent performance and without problems. Is it possible?
 The other solution could be ISA Server, but if I can, I'd rather use an
 open-source solution.

 Thank you in advance, and have a nice weekend.

 Cordially,
 Claudio Prono.

   

-- 

Claudio Prono
Systems Development @ PSS Srl, Divisione Implementazione Sistemi
Via San Bernardino, 17 - 10137 Torino (TO) - IT
Tel +39-011.32.72.100  Fax +39-011.32.46.497
PGP Fingerprint: 75C2 4049 E23D 2FBF A65F  40DB EA5C 11AC C2B0 3647
Disclaimer: http://atpss.net/disclaimer
 



[squid-users] reverse proxy

2009-12-07 Thread Ludovit Koren
Hi,

I have Debian Linux and Squid Version 2.7.STABLE3. As I understand
from the documentation, there were some changes in this version, and I did
not find relevant information on the net.

I have the following scenario:

client - https - squid - https - server1
client - https - squid - http - server2


This is what I added to the squid.conf

http_port 80 accel defaultsite=dflt1.domain.sk vhost
https_port 443 cert=/etc/squid/ssl.crt key=/etc/squid/ssl.key 
defaultsite=dflt1.domain.sk vhost

acl webmail dstdomain webmail.domain.sk

cache_peer dflt1.domain.sk parent 80 0 no-query originserver
cache_peer dflt1.domain.sk parent 443 0 no-query ssl sslflags=DONT_VERIFY_PEER 
front-end-https 
name=dflt1
cache_peer webmail.domain.sk parent 80 0 no-query originserver name=dflt2


cache_peer_access dflt2 allow webmail



According to the log, the redirection is all the time either http or https
(if I add protocol=http to the configuration above):

1260203474.257116 Y.Y.Y.Y TCP_MISS/502 1439 GET https://webmail.domain.sk/ 
- DIRECT/
X.X.X.X text/html



How can I configure squid as an https reverse proxy so that one page redirects to
the https backend server and the second page redirects to the http
backend server?

Any hint appreciated.

Thank you very much.

Regards,

lk

[squid-users] Query String in External ACLs

2009-12-07 Thread Lu, Roy
Hi,

I understand that Squid 3.0 supports %PATH for the path info of a URL in
external_acl. But is it possible to access the query string portion of
the URL? If not, should I use the redirector or url_rewrite mechanism?

Thanks.
Roy



[squid-users] RAID stripes sizes...

2009-12-07 Thread John Doe
Hi,

I was just wondering if squid has a preferred RAID stripe size; as in, the 
smaller/bigger the better...
I did not find it in the squid RAID wiki...
If I understand it correctly, the smaller the RAID stripe, the more disks are 
involved in a file read/write (the file is spread over more disks).
I see the pros (more disks, more bandwidth) and the cons (too many concurrent 
seeks).
So, from experience, in a reverse proxy setup, which one is the best option?

Thx,
JD


  


[squid-users] Re: How to configure squid for ftp traffic.

2009-12-07 Thread Ali Ahsan

Hi All

I have one task at hand. I want to use my squid proxy server to pass
FTP traffic, so that users can use an FTP client like FileZilla, CuteFTP,
etc., for file uploading and downloading. Can anyone guide me on how
I can achieve that? I did Google but didn't find anything useful, so I
have high hopes.

Squid version

squid-3.0.STABLE20-2


Thanks
Ali


Re: [squid-users] Squid as reverse proxy for Microsoft Office Communications Server 2007 R2 ?

2009-12-07 Thread Serge Fonville
 Has anyone used squid as a reverse proxy for Office
 Communications Server 2007 R2?

 I need to know if it works well, or if it doesn't.

 I must use it for a company of 160+ employees, so the solution must be
 stable, with excellent performance and without problems. Is it possible?
 The other solution could be ISA Server, but if I can, I'd rather use an
 open-source solution.

Why do you believe squid would add value here?
Squid is a proxy server.

What extra checks do you want squid to perform, such that you need it?


HTH

Regards,

Serge Fonville

-- 
http://www.sergefonville.nl

Convince Google!!
They need to support Adsense over SSL
https://www.google.com/adsense/support/bin/answer.py?hl=en&answer=10528
http://www.google.com/support/forum/p/AdSense/thread?tid=1884bc9310d9f923&hl=en


Re: [squid-users] RAID stripes sizes...

2009-12-07 Thread Marcus Kool

Stripes need to be larger than the average object size to have
concurrent access to more than one object at the same time.
The *average* object size is 13 KB, so to be on the safe side
I would use a stripe size of 32K or more.

The optimal size also depends on the file system type that you use.

Marcus


John Doe wrote:

Hi,

I was just wondering if squid has a preferred RAID stripe size; as in, the 
smaller/bigger the better...
I did not find it in the squid RAID wiki...
If I understand it correctly, the smaller the RAID stripe, the more disks are 
involved in a file read/write (the file is spread over more disks).
I see the pros (more disks, more bandwidth) and the cons (too many concurrent 
seeks).
So, from experience, in a reverse proxy setup, which one is the best option?

Thx,
JD


  





Re: [squid-users] Squid Service on windows Starts and stops automatically

2009-12-07 Thread Kinkie
On Mon, Dec 7, 2009 at 2:30 PM, Preetham N. preetha...@gmail.com wrote:
 hi,
 below is the snippet from the log

Strange... it seems to start and then voluntarily stop, with no mention of errors.
Maybe Guido can help you more than I can.

-- 
/kinkie


Re: [squid-users] COSS cache_dir size exceeds largest offset at maximum file size

2009-12-07 Thread Amos Jeffries
On Mon, 7 Dec 2009 10:58:18 -0500, Jason Healy jhe...@logn.net wrote:
 On Dec 7, 2009, at 1:37 AM, Amos Jeffries wrote:
 
 For some reason the safety check that is catching you uses < instead of <=.
 I'm not sure why. If you want to experiment you could change it manually
 and rebuild. Around line 864 of src/fs/coss/store_dir_coss.c.
 
 It looks like I'm off by more than just a single unit:
 
 2009/12/05 12:16:00| COSS largest file offset = 4194296 KB
 2009/12/05 12:16:00| COSS cache_dir size = 134217728 KB
 
 The largest file I'm able to use is 4095 MB, instead of the 131072 MB
 requested.  Am I smacking up against some architecture-specific
constant?

Yes, not sure what I was thinking yesterday (multiplying by block size
twice, sheesh). The constant is inside Squid. Maximum of 2^25-1 files per
cache == largest file offset.
  cache size < largest file offset * block size.

Plus the default 10 COSS in-memory stripes (10MB in your case AFAICT) are
counted as part of the total cache file space, but not mentioned in that
mini report.

 
 Possibly related: I'm just using the standard Debian build process, so
 maybe it isn't guessing everything correctly.  I want a 32-bit build
with a
 maximum address size of 4GB and largefile support.  Should I be
explicitly
 passing in something like:
 
   --with-build-environment=POSIX_V6_ILP32_OFFBIG
 

Should be automatic. If in doubt add it. COSS does need it.

Amos


Re: [squid-users] RAID stripes sizes...

2009-12-07 Thread Amos Jeffries
On Mon, 7 Dec 2009 09:29:16 -0800 (PST), John Doe jd...@yahoo.com wrote:
 Hi,
 
 I was just wondering if squid has a prefered RAID stripe size; as in the
 smaller/bigger the better...
 I did not find it in the squid RAID wiki...
 If I understand it correctly, the smaller the RAID stripe, the more
disks
 involved in a file read/write (file is spread on more disks).
 I see the pros (more disks, more bandwidth) and the cons (too many
 concurrent seeks).
 So, from experience, in a reverse proxy setup, which one is the best
 option?

For cache storage, Squid both reads and writes in fixed, discrete 4096-byte
chunks, to random files in the cache in random sequence. COSS is the only
exception so far; it reads/writes in MB-sized chunks.

JBOD is best, on fast disks.

Amos



Re: [squid-users] any work arounds for bug 2176

2009-12-07 Thread Brett Lymn
On Mon, Dec 07, 2009 at 10:36:52PM +1300, Amos Jeffries wrote:
 
 I think another trace of the request-reply sequence is needed to see if 
 there is anything different now and what.
 

I do have a trace from snoop.  I don't want to post it to the list
due to it containing details of the site we are trying to upload to.
Can I mail it to you off list?

-- 
Brett Lymn




Re: [squid-users] squid ceasing to function when interface goes down

2009-12-07 Thread Chris Robertson

ty...@marieval.com wrote:

 Original Message 
Subject: Re: [squid-users] squid ceasing to function when interface
goes down
From: Jose Ildefonso Camargo Tolosa ildefonso.cama...@gmail.com
Date: Sun, December 06, 2009 10:44 am
To: ty...@marieval.com
Cc: squid-users@squid-cache.org


Hi!

This could be a little off-topic, but: which VPN are you using? If
you happen to be using OpenVPN, try adding the option persist-tun.
Also, there is a chance that you can make squid actually restart when
the VPN goes down and up again (by using the down and up options,
which call down and up scripts).

Anyway, please keep us informed!


All of those are already done.

I've reset the cache:  rm -rf /var/cache/squid/* and let it rebuild. 
The cache seemed to be the only difference between this installation and two
other ones with the same configs that remain stable...  If that doesn't
work, I may need to replace the server.
  


Have you changed your never_direct as per 
http://wiki.squid-cache.org/SquidFaq/ConfiguringSquid#How_do_I_configure_Squid_to_work_behind_a_firewall.3F


Chris




Re: [squid-users] reverse proxy

2009-12-07 Thread Amos Jeffries
On Mon, 07 Dec 2009 17:59:22 +0100, Ludovit Koren
ludovit_ko...@tempest.sk wrote:
 Hi,
 
 I have Debian Linux and Squid Version 2.7.STABLE3. As I understand
 from the documentation, there was some change in the version and I did
 not find relevant information on the net.

NP: Please use the latest Squid version available; 2.7.STABLE7 is
available in backports if you need it.

 
 I have the following scenario:
 
 client - https - squid - https - server1
 client - https - squid - http - server2
 

Use this for reference:
  http://wiki.squid-cache.org/ConfigExamples/Reverse/VirtualHosting

 
 This is what I added to the squid.conf
 
 http_port 80 accel defaultsite=dflt1.domain.sk vhost

This configures:

 Client - HTTP - Squid.

Which I note is missing from your specs. If your specs were right then
drop this and only use the https_port directive below.


 https_port 443 cert=/etc/squid/ssl.crt key=/etc/squid/ssl.key
 defaultsite=dflt1.domain.sk vhost
 
 acl webmail dstdomain webmail.domain.sk
 
 cache_peer dflt1.domain.sk parent 80 0 no-query originserver

Missing:  name=dflt1


 cache_peer dflt1.domain.sk parent 443 0 no-query ssl
 sslflags=DONT_VERIFY_PEER front-end-https 
 name=dflt1

 cache_peer webmail.domain.sk parent 80 0 no-query originserver
name=dflt2
 
 
 cache_peer_access dflt2 allow webmail

Missing:
   cache_peer_access dflt2 deny all

   cache_peer_access dflt1 allow !webmail

Also missing:
  * list of domains to be passed to dflt1
  * http_access lines to permit valid domain traffic to enter Squid.

 
 According to log the redirection is either all the time http or https
 (if i add protocol=http to the configuration above):
 
 1260203474.257116 Y.Y.Y.Y TCP_MISS/502 1439 GET
 https://webmail.domain.sk/ - DIRECT/
 X.X.X.X text/html
 
 
 
 How can I configure squid as https reverse proxy and one page redirect
to
 the https backend server and the second page redirect to the http
 backend server?

What you had configured above is a reverse proxy which accepts both HTTP
and HTTPS connections. Then passes all requests to dflt1.domain.sk:80.

If dflt1.domain.sk:80 became unavailable or overloaded, the webmail.domain.sk
traffic would be pushed to dflt1.domain.sk:443 and the non-webmail.*
traffic would be dropped with an error.
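
Putting those corrections together, a minimal sketch of the intended 
squid.conf - the ACL and http_access lines are illustrative only, not a 
drop-in config:

  https_port 443 cert=/etc/squid/ssl.crt key=/etc/squid/ssl.key \
      defaultsite=dflt1.domain.sk vhost

  acl webmail dstdomain webmail.domain.sk
  acl oursites dstdomain dflt1.domain.sk webmail.domain.sk

  # HTTPS to the backend for everything except webmail
  cache_peer dflt1.domain.sk parent 443 0 no-query originserver ssl \
      sslflags=DONT_VERIFY_PEER front-end-https name=dflt1
  # plain HTTP to the webmail backend
  cache_peer webmail.domain.sk parent 80 0 no-query originserver name=dflt2

  cache_peer_access dflt2 allow webmail
  cache_peer_access dflt2 deny all
  cache_peer_access dflt1 allow !webmail
  cache_peer_access dflt1 deny all

  http_access allow oursites
  http_access deny all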

Amos


Re: [squid-users] Re: How to configure squid for ftp traffic.

2009-12-07 Thread Amos Jeffries
On Mon, 07 Dec 2009 23:29:45 +0500, Ali Ahsan ali.ah...@prog.awpdc.com
wrote:
 Hi All
 
 I have one task at hand. I want to use my squid proxy server to pass
 FTP traffic, so that users can use an FTP client like FileZilla, CuteFTP,
 etc., for file uploading and downloading. Can anyone guide me on how
 I can achieve that? I did Google but didn't find anything useful, so I
 have high hopes.

You cannot. Squid is an HTTP (port 80) proxy, not an FTP (port 21) proxy.
Look at frox instead.

Amos


Re: [squid-users] any work arounds for bug 2176

2009-12-07 Thread Amos Jeffries
On Tue, 8 Dec 2009 08:49:00 +1030, Brett Lymn bl...@baesystems.com.au
wrote:
 On Mon, Dec 07, 2009 at 10:36:52PM +1300, Amos Jeffries wrote:
 
 I think another trace of the request-reply sequence is needed to see if

 there is anything different now and what.
 
 
 I do have a trace from snoop.  I don't want to post it to the list
 due to it containing details of the site we are trying to upload to.
 Can I mail it to you off list?

Sure.

Amos


Re: [squid-users] Query String in External ACLs

2009-12-07 Thread Amos Jeffries
On Mon, 7 Dec 2009 09:12:04 -0800, Lu, Roy r...@facorelogic.com wrote:
 Hi,
 
 I understand that Squid 3.0 supports %PATH for the path info of a URL in
 external_acl. But is it possible to access the query string portion of
 the URL? If not, should I use the redirector or url_rewrite mechanism?
 
 Thanks.
 Roy

%URI is supposed to be the full URL/URI from 3.0 onwards.
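
For concreteness, a minimal sketch - the helper path and ACL names are 
made up for illustration:

  # hand the full URL, query string included, to an external helper
  external_acl_type urlcheck %URI /usr/local/bin/urlcheck-helper
  acl url_ok external urlcheck
  http_access deny !url_ok

The helper reads one URL per line on stdin and answers OK or ERR, per 
the standard external ACL helper protocol.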

Amos


Re: [squid-users] Squid does not work after origin's server DirectoryIndex change

2009-12-07 Thread Chris Robertson

Adam Squids wrote:

I had index.html redirecting to http://www.domain.com/online/1.html in
my origin server. Now I changed Apache configs and set DirectoryIndex
to be /online/1.html.


Did you PURGE "http://www.domain.com/index.html"?  Is it, perhaps, still 
cached in Squid, redirecting to "http://www.domain.com/online/1.html" 
(which of course Apache, with the new DirectoryIndex, would take as a 
request for "http://www.domain.com/online/1.html/online/1.html")?
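
For reference, a sketch of issuing such a PURGE with squidclient, 
assuming the usual pattern of permitting the PURGE method from 
localhost only:

  acl Purge method PURGE
  http_access allow localhost Purge
  http_access deny Purge

  squidclient -m PURGE http://www.domain.com/index.html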


On an aside, I have never heard of using a subdirectory in a 
DirectoryIndex definition.



 When I browse straight to my origin, it works
fine. When I browse via my squid server I get ERR_READ_ERROR 104

I assume that in this case, 'connection reset by peer', peer == Apache
origin. How come I can access it when I browse straight to it? And it
even works perfectly?
What should I look for in cache.log? I did not find anything especially odd :)
  


I'd be more likely to check your Apache logs.  It's rejecting the 
connection, after all.



Many thanks,

Adam
  


Chris



Re: [squid-users] Exception error:corrupted chunk size

2009-12-07 Thread Mark Nottingham
Seems to be, at a glance...


 HTTP/1.1 200 OK
 Server: Caribou/5.0
 Date: Mon, 07 Dec 2009 23:37:27 GMT
 Content-type:  text/html; charset=utf-8
 Transfer-Encoding: chunked
 Transfer-encoding: chunked
 Vary: Accept-Encoding
 
 2000
 <HTML>
 <HEAD>
 <TITLE></TITLE>
 


On 03/12/2009, at 10:33 PM, Amos Jeffries wrote:

 Mark Nottingham wrote:
 It looks like they're sending Transfer-Encoding: chunked twice;
 http://redbot.org/?uri=http%3A%2F%2Ft.news-accorhotels.com%2Fnl%2Fjsp%2Fm.jsp%3Fc%3Da6f0cf19b1c0782ef7
 Cheers,
 
 Maybe. But I'm inclined to believe it's because ...
 
 (...drumroll please...)
 
 ... the body content in fact NOT being chunked encoded at all?
 
 Amos
 
 On 02/12/2009, at 10:10 AM, Amos Jeffries wrote:
 On Tue, 1 Dec 2009 17:10:27 +0800, Wong wongb...@telkom.net wrote:
 Dear All,
 
 I got problem to access 
 http://t.news-accorhotels.com/nl/jsp/m.jsp?c=a6f0cf19b1c0782ef7 through 
 proxy (site can be accessed without proxy).
 
 After checking cache.log, I found message below.
 
 Need advise why squid generate that message and stop the browsing
 activity. How can I fix it?
 
 Thanks a lot for your kind help.
 
 Wong
 
 ---snip---
 
 2009/12/01 16:55:45| Exception error:corrupted chunk size
 2009/12/01 16:56:13| Exception error:corrupted chunk size
 2009/12/01 17:04:14| Exception error:corrupted chunk size
 Make sure you have the latest Squid-2.7 or Squid-3.x release to handle the
 chunking properly.
 
 If it still remains, the web server is badly broken. Complain to the
 website administrator.
 
 Amos
 
 --
 Mark Nottingham   m...@yahoo-inc.com
 
 
 -- 
 Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE20
  Current Beta Squid 3.1.0.15

--
Mark Nottingham   m...@yahoo-inc.com




Re: [squid-users] Could this be a potential problem? Squid stops working and requires restart to work

2009-12-07 Thread Chris Robertson

Asim Ahmed @ Folio3 wrote:

Hi,

I found this in cache.log when I restarted squid after a halt!

CPU Usage: 79.074 seconds = 48.851 user + 30.223 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
   total space in arena:    7452 KB
   Ordinary blocks:         7363 KB    285 blks
   Small blocks:               0 KB      1 blks
   Holding blocks:         14752 KB     94 blks
   Free Small blocks:          0 KB
   Free Ordinary blocks:      88 KB
   Total in use:           22115 KB  297%
   Total free:                88 KB    1%


This is not likely the source of your trouble...

http://www.squid-cache.org/mail-archive/squid-users/200904/0535.html

Chris




Re: [squid-users] Squid delay pool question

2009-12-07 Thread Chris Robertson

mikewest09 wrote:

Hi Amos,

Thanks a lot for your detailed explanation; I believe that I had a big
misunderstanding of how Classes work.

Having said that, I am not sure if class 4 will be the best one for me
because of two important reasons:

A. All of our users log in with the 'same' exact login name/password, as it
is embedded in the desktop application exe file. So what we have here is the
same login name/password and a different IP for each user.
  


Then Class 4 is out.  You would have one pool per username (so one pool 
for the aggregate, one per-subnet, one per-ip and one username pool, 
acting the same as the aggregate).  Keep reading for a description of 
bucket types.



B. As mentioned before, the server has 100 Mbps. My thought ('at first')
was that I wanted each user to get, for example, maximum speed; then 'all of
them' would have the same 10 Mbps. But I never imagined that the connection
speed (100 or 10) would be divided by the number of users logged in, meaning
I couldn't imagine that when I drop the speed to 10 Mbps for user A then all
users would have this speed divided by the number of users logged into the
server (and this is of course due to my ignorance of network basics :( )
  


With a Class 2 pool, there are two types of bucket.  One type is an 
aggregate bucket (there is only one instance of this bucket, and 
everyone's traffic is withdrawn from it).  The recommendation in your 
case is to leave that bucket at unlimited.  The other type is 
individual.  There will be one instance of this type of bucket for 
however many distinct IPs Squid sees connecting to it (192.168.32.18 is 
assigned one instance, 192.168.32.83 is assigned another, etc.*).  Each 
IP will be able to try to saturate the 100mbit link until their 
individual bucket is empty, at which point they will not be allowed to 
transfer any more data until their bucket refills some.  With the 
recommendations above, the bucket is 15MB.  If I download a 14MB file, I 
will not be rate-limited at all.  If I download a 20MB file, the first 
15MB** will not be rate limited, but the next 5 will (this limit will 
just affect my traffic; you have your own bucket to deplete (or not) at 
your leisure).



So my question now is... is it possible in the first place that 'each user'
will get the same 10Mbps regardless of the number of users connected to the
server (please excuse my network ignorance here if what I say seems
impossible)?
  


If you set the bucket size to a fairly small value (say 1024) then the 
rate limiting will take effect almost immediately (the initial value of 
the bucket gets depleted at up to 100mbit/sec; then the refill rate is 
the max (per-IP) download speed, with an overall limit of your 100mbit 
connection).
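
For concreteness, a minimal squid.conf sketch of such a class 2 setup 
(the numbers are illustrative only, not recommendations from this thread):

  # one pool, class 2: one aggregate bucket plus one bucket per client IP
  delay_pools 1
  delay_class 1 2
  # apply the pool to all traffic (substitute your own ACLs)
  delay_access 1 allow all
  # aggregate unlimited; per-IP restore rate 1310720 bytes/s (~10mbit/s)
  # with a bucket of the same size (roughly one second of burst)
  delay_parameters 1 -1/-1 1310720/1310720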




Now, if this is not possible, is it possible to simply limit the
usage of the server to browsing html files only, and exclude any
downloads (exe, mp3, etc.)?


You can make ACLs that match file extensions, and ACLs that match 
MimeType responses, but it's hard to get right (and fairly easy to 
circumvent, with cooperation on the far end).


See http://www.squid-cache.org/mail-archive/squid-users/200904/0307.html 
and http://www.squid-cache.org/mail-archive/squid-users/200904/0432.html 
for one example.  The mailing list archives have other examples.



 without putting any limitation on speed? If I
can do this then there might be no need to do the delay pools limitation in
the first place!


Thanks in advance for your time and efforts 


Chris

* For what it's worth, A Class 2 individual pool only accounts for the 
final octet of the IP:  192.168.42.118 would draw from the same pool as 
1.2.3.118.  Class 3 (and 4) individual pools use the final 2 octets: 
192.168.42.118 would use a different pool from 1.2.3.118, but 
192.168.42.118 would share a pool with 1.2.42.118.


** Not technically accurate, as the bucket would be filling (at 
10mbit/sec) while the download runs, so if the download is limited on 
the far end to less than 10mbit/sec, Squid's delay pool will never come 
into effect.  If the download is only running at 12mbit/sec it likely 
won't come into effect either (I'm too tired to do the math, but 
hopefully you get the idea).  If I'm downloading other objects at the 
same time, they will all count against my individual bucket.




Re: [squid-users] Exception error:corrupted chunk size

2009-12-07 Thread Amos Jeffries
On Tue, 8 Dec 2009 10:38:33 +1100, Mark Nottingham m...@yahoo-inc.com
wrote:
 Seems to be, at a glance...
 
 
 HTTP/1.1 200 OK
 Server: Caribou/5.0
 Date: Mon, 07 Dec 2009 23:37:27 GMT
 Content-type:  text/html; charset=utf-8
 Transfer-Encoding: chunked
 Transfer-encoding: chunked
 Vary: Accept-Encoding
 
  2000
  <HTML>
  <HEAD>
  <TITLE></TITLE>
 
 
 

Hmm...

Aha! Darn ISP filtering again.

Okay, so my ISP runs some proxy for domestic traffic.

What I was getting back had the first chunked header stripped off and the
chunks decoded. As one would expect, the remaining Transfer-Encoding header
with no remaining chunks goes badly.

Amos


 On 03/12/2009, at 10:33 PM, Amos Jeffries wrote:
 
 Mark Nottingham wrote:
 It looks like they're sending Transfer-Encoding: chunked twice;

http://redbot.org/?uri=http%3A%2F%2Ft.news-accorhotels.com%2Fnl%2Fjsp%2Fm.jsp%3Fc%3Da6f0cf19b1c0782ef7
 Cheers,
 
 Maybe. But I'm inclined to believe it's because ...
 
 (...drumroll please...)
 
 ... the body content in fact NOT being chunked encoded at all?
 
 Amos
 
 On 02/12/2009, at 10:10 AM, Amos Jeffries wrote:
 On Tue, 1 Dec 2009 17:10:27 +0800, Wong wongb...@telkom.net
wrote:
 Dear All,
 
 I got problem to access
 http://t.news-accorhotels.com/nl/jsp/m.jsp?c=a6f0cf19b1c0782ef7
 through proxy (site can be accessed without proxy).
 
 After checking cache.log, I found message below.
 
 Need advise why squid generate that message and stop the browsing
 activity. How can I fix it?
 
 Thanks a lot for your kind help.
 
 Wong
 
 ---snip---
 
 2009/12/01 16:55:45| Exception error:corrupted chunk size
 2009/12/01 16:56:13| Exception error:corrupted chunk size
 2009/12/01 17:04:14| Exception error:corrupted chunk size
 Make sure you have the latest Squid-2.7 or Squid-3.x release to
handle
 the
 chunking properly.
 
 If it still remains, the web server is badly broken. Complain to the
 website administrator.
 
 Amos
 
 --
 Mark Nottingham   m...@yahoo-inc.com
 
 
 -- 
 Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE20
  Current Beta Squid 3.1.0.15
 
 --
 Mark Nottingham   m...@yahoo-inc.com


Re: [squid-users] Could this be a potential problem? Squid stops working and requires restart to work

2009-12-07 Thread Amos Jeffries
On Mon, 07 Dec 2009 14:47:22 -0900, Chris Robertson crobert...@gci.net
wrote:
 Asim Ahmed @ Folio3 wrote:
 Hi,

  I found this in cache.log when I restarted squid after a halt!

  CPU Usage: 79.074 seconds = 48.851 user + 30.223 sys
  Maximum Resident Size: 0 KB
  Page faults with physical i/o: 0
  Memory usage for squid via mallinfo():
     total space in arena:    7452 KB
     Ordinary blocks:         7363 KB    285 blks
     Small blocks:               0 KB      1 blks
     Holding blocks:         14752 KB     94 blks
     Free Small blocks:          0 KB
     Free Ordinary blocks:      88 KB
     Total in use:           22115 KB  297%
     Total free:                88 KB    1%
 
 This is not likely the source of your trouble...
 
 http://www.squid-cache.org/mail-archive/squid-users/200904/0535.html
 
 Chris

That would be right if they were negatives, or enough to wrap 32-bit back
to positive.

Since it's only ~300% I'm more inclined to think it's a weird issue with
the squid memory cache objects.

The bug of this week seems to be a few people now seeing multiple-100%
memory usage in Squid on FreeBSD 7+ 64-bit OS. Due to Squid memory-cache
objects being very slightly larger than the malloc page size, causing 2x
pages per node instead of just one. And our use of fork() allocating N times
the virtual memory, which mallinfo might report.

Asim Ahmed: does that match your OS?


Amos


Re: [squid-users] COSS cache_dir size exceeds largest offset at maximum file size

2009-12-07 Thread Jason Healy

On Dec 7, 2009, at 5:05 PM, Amos Jeffries wrote:

 Yes, not sure what I was thinking yesterday (multiplying by block size
 twice, sheesh). The constant is inside Squid. Maximum of 2^25-1 files per
 cache == largest file offset.
  cache size < largest file offset * block size.
 
 Plus the default 10 COSS in-memory stripes (10MB in your case AFAICT) are
 counted as part of the total cache file space, but not mentioned in that
 mini report.

Ah, thank you.  I would have been tearing my hair out wondering why those last 
10 megs didn't work...

I rebuilt using ' --with-build-environment=POSIX_V6_ILP32_OFFBIG' explicitly in 
the debian rules, and that seems to have brought me back up to the full 128 
gigs (I'll find out for sure once dd finishes paving out the COSS store, but 
for now `squid -k check` seems OK with it).

The working line is:

  cache_dir coss /mumble/cosstest 131061 block-size=8192

Thanks,

Jason

--
Jason Healy|jhe...@logn.net|   http://www.logn.net/






[squid-users] AVG Updates not being cached with squid 2.6?

2009-12-07 Thread Richard Chapman
I have a more-or-less default-configured squid 2.6 proxy on a CentOS 5.4 
server.
I have configured AVG 9 Network Edition (virus scanner) to use the squid 
proxy (as opposed to the AVG proxy) - and it appears to be doing so.
However - checking the usage logs - it appears that different client 
machines download identical update (.bin) files within a few hours of 
each other - but do not appear to get a cache hit.


Can anyone suggest why these update files are not being cached (or at 
least not getting cache hits) - and whether there is anything I can do 
to encourage them to be cached?


I have checked the Squid FAQ and searched the archive - and found a 
similar request from 2005. The suggestion there was that the AVG server 
might be using the


Pragma: no-cache HTTP header

And at that time there was no suggestion on how to override this. 
Can anyone confirm that this is the reason for the apparently 
unnecessary cache misses - and if so - is there anything new in squid 
to allow us to override it?



Thanks
Richard.



Re: [squid-users] Exception error:corrupted chunk size

2009-12-07 Thread Mark Nottingham
Yup. 

At first I thought Caribou was http://www.cariboucms.com/, but now I'm not 
so sure; AFAICT that's a PHP/MySQL CMS, and doesn't have its own server (as 
well as there being a 'jsp' in the URL). Any other candidates?

That URL also seems to be 500'ing now...


On 08/12/2009, at 12:09 PM, Amos Jeffries wrote:

 On Tue, 8 Dec 2009 10:38:33 +1100, Mark Nottingham m...@yahoo-inc.com
 wrote:
 Seems to be, at a glance...
 
 
 HTTP/1.1 200 OK
 Server: Caribou/5.0
 Date: Mon, 07 Dec 2009 23:37:27 GMT
 Content-type:  text/html; charset=utf-8
 Transfer-Encoding: chunked
 Transfer-encoding: chunked
 Vary: Accept-Encoding
 
  2000
  <HTML>
  <HEAD>
  <TITLE></TITLE>
 
 
 
 
  Hmm...
  
  Aha! Darn ISP filtering again.
  
  Okay, so my ISP runs some proxy for domestic traffic.
  
  What I was getting back had the first chunked header stripped off and the
  chunks decoded. As one would expect, the remaining Transfer-Encoding header
  with no remaining chunks goes badly.
 
 Amos
 
 
 On 03/12/2009, at 10:33 PM, Amos Jeffries wrote:
 
 Mark Nottingham wrote:
 It looks like they're sending Transfer-Encoding: chunked twice;
 
 http://redbot.org/?uri=http%3A%2F%2Ft.news-accorhotels.com%2Fnl%2Fjsp%2Fm.jsp%3Fc%3Da6f0cf19b1c0782ef7
 Cheers,
 
  Maybe. But I'm inclined to believe it's because ...
 
 (...drumroll please...)
 
 ... the body content in fact NOT being chunked encoded at all?
 
 Amos
 
 On 02/12/2009, at 10:10 AM, Amos Jeffries wrote:
 On Tue, 1 Dec 2009 17:10:27 +0800, Wong wongb...@telkom.net
 wrote:
 Dear All,
 
 I got problem to access
 http://t.news-accorhotels.com/nl/jsp/m.jsp?c=a6f0cf19b1c0782ef7
 through proxy (site can be accessed without proxy).
 
 After checking cache.log, I found message below.
 
 Need advise why squid generate that message and stop the browsing
 activity. How can I fix it?
 
 Thanks a lot for your kind help.
 
 Wong
 
 ---snip---
 
 2009/12/01 16:55:45| Exception error:corrupted chunk size
 2009/12/01 16:56:13| Exception error:corrupted chunk size
 2009/12/01 17:04:14| Exception error:corrupted chunk size
 Make sure you have the latest Squid-2.7 or Squid-3.x release to
 handle
 the
 chunking properly.
 
 If it still remains, the web server is badly broken. Complain to the
 website administrator.
 
 Amos
 
 --
 Mark Nottingham   m...@yahoo-inc.com
 
 
 -- 
 Please be using
 Current Stable Squid 2.7.STABLE7 or 3.0.STABLE20
 Current Beta Squid 3.1.0.15
 
 --
 Mark Nottingham   m...@yahoo-inc.com

--
Mark Nottingham   m...@yahoo-inc.com




Re: [squid-users] Could this be a potential problem? Squid stops working and requires restart to work

2009-12-07 Thread Asim Ahmed @ Folio3



Asim Ahmed @ Folio3 wrote:
 I am using Red Hat Enterprise Linux Server release 5.3 (Tikanga) with 
shorewall 4.4.4-2 and Squid 3.0.STABLE20-1. My problem is kind of weird: 
Squid stops working after about a day, and I need to restart it to let 
users browse or use the internet. Any parameters to look for? Out of 2 GB RAM, 
only 200 MB is left free when I find squid halted (before restarting it).


 One more question I have is: in just two days my squid cache has 
grown to 500 MB. I've set cache_dir as 10GB ... I believe it will not 
take long before it reaches this limit! What happens then? Does it 
start discarding old cache objects, or what?


 Amos Jeffries wrote:
 On Mon, 07 Dec 2009 14:47:22 -0900, Chris Robertson crobert...@gci.net
 wrote:
   
 Asim Ahmed @ Folio3 wrote:
 
 Hi,


  I found this in cache.log when I restarted squid after a halt!

  CPU Usage: 79.074 seconds = 48.851 user + 30.223 sys
  Maximum Resident Size: 0 KB
  Page faults with physical i/o: 0
  Memory usage for squid via mallinfo():
     total space in arena:    7452 KB
     Ordinary blocks:         7363 KB    285 blks
     Small blocks:               0 KB      1 blks
     Holding blocks:         14752 KB     94 blks
     Free Small blocks:          0 KB
     Free Ordinary blocks:      88 KB
     Total in use:           22115 KB  297%
     Total free:                88 KB    1%
   
 This is not likely the source of your trouble...


 http://www.squid-cache.org/mail-archive/squid-users/200904/0535.html

 Chris
 


  That would be right if they were negatives, or enough to wrap 32-bit back
  to positive.

  Since it's only ~300% I'm more inclined to think it's a weird issue with
  the squid memory cache objects.

  The bug of this week seems to be a few people now seeing multiple-100%
  memory usage in Squid on FreeBSD 7+ 64-bit OS. Due to Squid memory-cache
  objects being very slightly larger than the malloc page size, causing 2x
  pages per node instead of just one. And our use of fork() allocating
  N times the virtual memory, which mallinfo might report.

 Asim Ahmed: does that match your OS?


 Amos

   


 --

 Regards,



Re: [squid-users] AVG Updates not being cached with squid 2.6?

2009-12-07 Thread Amos Jeffries

Richard Chapman wrote:
I have a more-or-less default-configured squid 2.6 proxy on a CentOS 5.4 
server.
I have configured AVG 9 Network Edition (virus scanner) to use the squid 
proxy (as opposed to the AVG proxy) - and it appears to be doing so.
However - checking the usage logs - it appears that different client 
machines download identical update (.bin) files within a few hours of 
each other - but do not appear to get a cache hit.


Can anyone suggest why these update files are not being cached (or at 
least not getting cache hits) - and whether there is anything I can do 
to encourage them to be cached?


I have checked the Squid FAQ and searched the archive - and found a 
similar request from 2005. The suggestion there was that the AVG server 
might be using the


Pragma: no-cache HTTP header


To be sure, take the URL that should be a HIT and enter it at redbot.org.
Any problems should be easily visible there.



And at that time there was no suggestion on how to override this. 
Can anyone confirm that this is the reason for the apparently 
unnecessary cache misses - and if so - is there anything new in squid to 
allow us to override it?




Squid versions which do not ignore "Pragma: no-cache" treat it the same as 
"Cache-Control: no-cache".
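
If the replies turn out to be cacheable apart from those headers, the 
refresh_pattern overrides in Squid 2.7 can force caching. A sketch only 
- the .bin pattern is a guess at the AVG update URLs, and ignore-no-cache 
is not available in 2.6:

  # force-cache matching update files for up to a week despite no-cache
  refresh_pattern -i \.bin$ 1440 50% 10080 ignore-no-cache ignore-reload override-expire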


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE20
  Current Beta Squid 3.1.0.15


Re: [squid-users] Re: How to configure squid for ftp traffic.

2009-12-07 Thread Jeff Pang

Ali Ahsan:

Hi All

I have one task at hand. I want to use my squid proxy server to pass
FTP traffic, so that users can use an FTP client like FileZilla, CuteFTP,
etc., for file uploading and downloading.


No, Squid is an HTTP proxy; it handles only FTP over HTTP.
If you want to pass standard FTP traffic, try using iptables.
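
For example, a hypothetical NAT setup that lets LAN clients reach FTP 
servers directly, bypassing Squid entirely - the interface name and 
module name are assumptions (older kernels call it ip_nat_ftp):

  # load the FTP NAT helper so active/passive data connections work
  modprobe nf_nat_ftp
  # masquerade outbound FTP control connections from the LAN
  iptables -t nat -A POSTROUTING -o eth0 -p tcp --dport 21 -j MASQUERADE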

--
Jeff Pang
http://home.arcor.de/pangj/