Re: [squid-users] Invalid response during POST request

2008-12-23 Thread Kinkie
On Tue, Dec 23, 2008 at 4:19 AM, howard chen howac...@gmail.com wrote:
 I am using Squid as a reverse proxy to a web server.

 Sometimes (not always), when a client POSTs something to my server, an
 error is shown:

 =
 ERROR

 The requested URL could not be retrieved

 * Invalid Response
 =

 Full Screen cap : http://howachen.googlepages.com/squid-error.gif

 Any idea for this error?

Hi Howard,
  the screenshot is missing the details about the actual response.
A cursory glance at the request seems fine.
Obtaining the broken response is going to be tricky, and will probably
require extensive logging or network sniffing.
Could you try running with debug_options ALL,1 11,4?
You'll then have to sort through the cache.log and see if any further
details emerge from there.
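
For reference, a minimal squid.conf sketch of that suggestion (the
blanket ALL,1 level goes first so it doesn't override the
section-specific setting; section 11 is Squid's HTTP handling code):

debug_options ALL,1 11,4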


-- 
/kinkie


[squid-users] No scanning at upload (squid+c-icap+clamav) ?

2008-12-23 Thread Alexandre Fouché

Hi,

Is it normal that squid+c-icap+clamav does not trigger an alert when
I upload a virus?
I am using squid as a reverse proxy to accelerate a dynamic,
user-generated-content website, and so it does not prevent malware
from being uploaded to the website, only from being downloaded.
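
A likely cause: if only a RESPMOD (response-side) ICAP service is
configured, uploads never pass through the scanner; scanning uploads
requires a REQMOD service as well. A minimal squid.conf sketch in
squid3 syntax, assuming c-icap's ClamAV service is named srv_clamav
and listens on the default port:

icap_enable on
# scan request bodies (uploads) before they are forwarded upstream
icap_service svc_req reqmod_precache 0 icap://127.0.0.1:1344/srv_clamav
icap_class class_req svc_req
icap_access class_req allow all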




[squid-users] squid3 + latest c-icap + clamav unreliable ?

2008-12-23 Thread Alexandre Fouché


Hi,

After setting up squid3+c-icap+clamav in a reverse proxy
configuration, I found it quite unreliable from a user-experience
point of view: about one page in ten would seem unreachable and
report an error, whereas squid alone worked great without
c-icap+clamav.


Does anyone have similar experience? This is sad, as I would rather
use c-icap than HAVP.


For information, I am running OpenSUSE 11 64-bit with squid 3.0 and
clamav 0.94.2 from the official binary repositories, plus the latest
c_icap-060708rc1 compiled on this same machine.






Re: [squid-users] storeurl_rewrite and ICP

2008-12-23 Thread Imri Zvik
On Sunday 21 December 2008 10:52:42 Imri Zvik wrote:
 Hi,

 On Thursday 18 December 2008 21:57:22 Adrian Chadd wrote:
  Nope, I don't think the storeurl-rewriter stuff was ever integrated into
  ICP.
 
  I think someone posted a patch to the squid bugzilla to implement this.

 If you can point me to said patch, I'd be happy to test it under load.

  I'm happy to commit whatever people sensibly code up and deploy. :)
 
 
 
  Adrian
 
  2008/12/18 Imri Zvik im...@bsd.org.il:
   Hi,
  
   I'm using the storeurl_rewrite feature to store content with changing
   attributes.
  
   As my traffic grows, I want to be able to add cache_peers to share the
   load.
  
   After configuring the peers, I've found that all my ICP queries
   result in misses.
   It seems the storeurl_rewrite logic is not implemented for ICP
   queries - i.e., neither the ICP client nor the server passes the URL
   through the storeurl_rewrite process before checking whether the
   requested content is cached.
  
   Am I missing something?
  
  
  
   Thank you in advance,

 Thanks!


I've found the said patch in Squid's Bugzilla - it seems to be working,
but I'm going to test it under load (~700 Mbit) and report back.




RE: [squid-users] Zeros in cachemgr output

2008-12-23 Thread Jevos, Peter
 On 17.12.08 20:14, Jevos, Peter wrote:
  Is this OK, with all these zeros in my output?
 
  Cache information for squid:
  Request Hit Ratios: 5min: 0.0%, 60min: 0.0%
  Byte Hit Ratios:5min: 87.9%, 60min: 42.4%
  Request Memory Hit Ratios:  5min: 0.0%, 60min: 0.0%
  Request Disk Hit Ratios:5min: 0.0%, 60min: 0.0%
  Storage Swap size:  1843144 KB
  Storage Mem size:   104 KB
  Mean Object Size:   16.74 KB
  Requests given to unlinkd:  42884
  Median Service Times (seconds)  5 min    60 min:
  HTTP Requests (All):   1.00114  1.05672
  Cache Misses:  1.00114  1.05672
  Cache Hits:0.0  0.0
  Near Hits: 0.0  0.0
  Not-Modified Replies:  0.0  0.0
  DNS Lookups:   0.0  0.00295
  ICP Queries:   0.0  0.0
 
 --
 Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
 Warning: I wish NOT to receive e-mail advertising to this address.
 - Holmes, what kind of school did you study to be a detective?
 - Elementary, Watson.

Sorry, did you write something?

I seem to have missed your advice.

Thanks in advance

Br

pet


[squid-users] load balancing

2008-12-23 Thread Mario Remy Almeida
Hi All,

any links on how to configure load balancing of squid


Regards,
Mario



Re: [squid-users] load balancing

2008-12-23 Thread Ken Peng





Hi All,

any links on how to configure load balancing of squid




See the default squid.conf, :)
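
If "load balancing" means spreading requests across several parent
proxies, a minimal squid.conf sketch (the peer hostnames are
placeholders):

# distribute requests evenly over two parents
cache_peer parent1.example.com parent 3128 0 no-query round-robin
cache_peer parent2.example.com parent 3128 0 no-query round-robin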


Re: [squid-users] expires header

2008-12-23 Thread Ken Peng



Hi,

I have two image servers behind a squid.

My issue is that my image servers are not sending any Expires headers,
but I would like to attach one from the squid, so by the time the
image reaches the browser it has an Expires header in it.



If there are neither Expires nor max-age headers, Squid won't cache
them, will it?
You may use Apache's mod_expires to add that value, if you can control
the real servers.
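
For instance, a minimal mod_expires sketch for the Apache real servers
(assuming the module is loaded; the lifetimes are only illustrative):

# httpd.conf: attach Expires/Cache-Control headers to images
ExpiresActive On
ExpiresByType image/jpeg "access plus 1 week"
ExpiresByType image/png "access plus 1 week"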


Re: [squid-users] How important is harddisk performance?

2008-12-23 Thread Ken Peng





Hi there.

I'm planning to build a new dedicated Squid-box, with amd64 and 4 gigs
of RAM, with two cache_dir's on two separate harddisks and Squid-3
doing application-level striping, all servicing around 6k users. Will
two recent IDE disks of 7200 rpm suffice, or am I better off getting
two 15000 rpm SCSI disks on a dedicated controller board?


15000 rpm SCSI is surely much better than 7200 rpm IDE.
We use 15K rpm SAS disks, which I would expect to be better still.


RE: [squid-users] expires header

2008-12-23 Thread Alin Bugeag

Squid is adding the max-age header but not the Expires, so it caches
them.

I was looking at the methods that are available, and I think I will
just modify the code to add a hardcoded Expires header ... and then
compile the whole thing ...


 Alin Bugeag

 Tel +1 905 761 5301 ext 231
 Home   +1 416 623 9253

-Original Message-
From: Ken Peng [mailto:kenp...@rambler.ru] 
Sent: Tuesday, December 23, 2008 9:43 AM
To: Alin Bugeag
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] expires header


 Hi,

 I have two image servers behind a squid.

 My issue is that my image servers are not sending any Expires headers,
 but I would like to attach one from the squid, so by the time the
 image reaches the browser it has an Expires header in it.


If there are neither Expires nor max-age headers, Squid won't cache
them, will it?
You may use Apache's mod_expires to add that value, if you can control
the real servers.


RE: [squid-users] expires header

2008-12-23 Thread Ken Peng






Squid is adding the max-age header but not the Expires, so it caches
them.



Are you sure? I remember Squid adds an Age header, not a max-age
header. But maybe I'm wrong.


RE: [squid-users] expires header

2008-12-23 Thread Alin Bugeag
Yes, you are right, it's the Age header ... :)
But I did some tests, and it does cache them ...

 Alin Bugeag

 Tel +1 905 761 5301 ext 231
 Home   +1 416 623 9253

-Original Message-
From: Ken Peng [mailto:kenp...@rambler.ru] 
Sent: Tuesday, December 23, 2008 9:59 AM
To: Alin Bugeag
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] expires header





 Squid is adding the max-age header but not the Expires, so it caches
 them.


Are you sure? I remember Squid adds an Age header, not a max-age
header. But maybe I'm wrong.


RE: [squid-users] How important is harddisk performance?

2008-12-23 Thread Ritter, Nicholas
To a degree I agree with Matus in that the type of load is important.
It is also important to keep in mind how you plan to set up cache dirs
and cache replacement. If you configure squid to cache most stuff to
RAM, then disks are not as important as RAM ... although RAM is really
always the most important, because it is faster, and why would you
want to cache stuff to a slower medium when you can cache it to a
faster one?

If you can afford the faster disks, get them ... although I would
suggest you be sure to get an I2O-capable card like an Adaptec,
because you can further improve performance by offloading disk I/O
operations (to an extent, anyway) from the kernel to the controller.
I have no idea whether, much less how much, Squid itself would improve
its performance from this, but I2O-capable cards are affordable.

I was having a discussion with some of my coworkers about SATA II
versus SCSI ... some felt that one was worth more than the other given
costs and ease of management.

In general, identify how your users will be using it, and plan the
cache replacement policy and setup accordingly. Are your users going
to be downloading files, or just web content? What size files will you
cache to disk versus cache to RAM ... etc.
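
As a concrete starting point, a squid.conf sketch along those lines
(all sizes are illustrative, not recommendations):

# keep small, hot objects in RAM; larger objects go to disk
cache_mem 2048 MB
maximum_object_size_in_memory 64 KB
cache_dir aufs /cache1 20000 16 256
cache_dir aufs /cache2 20000 16 256
maximum_object_size 100 MB
# LFUDA favors byte hit ratio on disk; GDSF favors request hits in RAM
cache_replacement_policy heap LFUDA
memory_replacement_policy heap GDSF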
 
Nick



From: rihad [mailto:ri...@mail.ru]
Sent: Tue 12/23/2008 12:44 AM
To: squid-users@squid-cache.org
Subject: [squid-users] How important is harddisk performance?



Hi there.

I'm planning to build a new dedicated Squid-box, with amd64 and 4 gigs
of RAM, with two cache_dir's on two separate harddisks and Squid-3
doing application-level striping, all servicing around 6k users. Will
two recent IDE disks of 7200 rpm suffice, or am I better off getting
two 15000 rpm SCSI disks on a dedicated controller board? Just not
sure if the performance gains would be noticeable to an average user,
given enough RAM. I read this too:
http://wiki.squid-cache.org/BestOsForSquid
Just double-checking.

Thanks for any tips.





Re: [squid-users] How important is harddisk performance?

2008-12-23 Thread rihad

Ken Peng wrote:





Hi there.

I'm planning to build a new dedicated Squid-box, with amd64 and 4 gigs
of RAM, with two cache_dir's on two separate harddisks and Squid-3
doing application-level striping, all servicing around 6k users. Will
two recent IDE disks of 7200 rpm suffice, or am I better off getting
two 15000 rpm SCSI disks on a dedicated controller board?


15000 rpm SCSI is surely much better than 7200 rpm IDE.


I couldn't argue with that! Here's what I think: as fetching a cached
copy of some HTTP resource from a 7200 RPM IDE disk is _much_ faster
than fetching it over, say, a 10-Gig network link, passing the traffic
through Squid is going to feel faster to an end user, never slower.
Will the jump from 7200 rpm to 15000 rpm be _that_ important, given
enough RAM to cache the most frequently accessed disk blocks (4 gigs
for ~6k simultaneous surfers)? I doubt it.


[squid-users] How to block multimedia content... efficiently?

2008-12-23 Thread Jason Voorhees
Hi there:

I'm running Squid to block multimedia online using something like this:

acl multimedia-online rep_mime_type -i "/etc/squid/acl/multimedia.txt"
http_reply_access deny multimedia-online

/etc/squid/acl/multimedia.txt has these lines inside:

^application/vnd.ms.wms-hdr.asfv1$
^application/x-mms-framed$
^audio/x-pn-realaudio$
^audio/mid$
^audio/mpeg$
^video/flv$
^video/x-flv$
^video/x-ms-asf$
^video/x-ms-wma$
^video/x-ms-wmv$
^video/x-msvideo$
^video/x-shockwave-flash$
^application/x-shockwave-flash$

These rules work fine. Websites like www.enladisco.com or www.atevip.net
are displayed normally, except for the multimedia content (a Flash
music player), which is correctly blocked.
My problem comes here: there are too many websites (I don't know how
many exactly, maybe 10, 100 or thousands) that serve valid content
(not online video or online music) with the
application/x-shockwave-flash MIME type, so they get blocked, and end
users aren't happy with that.

I started to make exceptions to those websites using something like this:

acl multimedia-exceptions dstdomain "/etc/squid/webs.txt"
http_reply_access allow multimedia-exceptions
http_reply_access deny multimedia-online

This works OK, but it is unmanageable! I can't keep adding exceptions
forever just because of the application/x-shockwave-flash MIME type!
www.enladisco.com uses this MIME type and should be blocked because it
offers music, but www.xtrema.com.pe doesn't offer online music and is
getting blocked when I don't want to block it.

Is anybody here having similar trouble with this? Is there any way to
block music/video served with this MIME type?

Thanks everyone.

P.S.: I'm sorry about my poor English


[squid-users] Delay pools bucket refill

2008-12-23 Thread Johannes Buchner
Hi! 

I have a question about delay_pools: if I make a time-based ACL with a
delay pool, does the bucket refill while the ACL is inactive, or does
the amount stop and resume when the ACL becomes active again?

For example, if I have a pool ACL active from 9:00 till 20:00 with a
size of 3 GB and a rate of 1200 B/s, and a client runs the bucket low
by 20:00, what will he be able to download at 9:00 the next day?

Also, if I defined one bucket for 9:00 till 20:00 and another for
20:00 till 9:00, with different sizes and rates, would they share
their amount? Probably not, I guess.

Regards,
Johannes

-- 
Emails can be altered, forged, and read by others. Sign or encrypt
your mail with GPG.
http://web.student.tuwien.ac.at/~e0625457/pgp.html





[squid-users] Handling websites that switch between http https

2008-12-23 Thread Joseph L. Casale
How does one deal with this scenario? It seems that when we encounter
websites that toggle between http and https, the connection is broken.
I can see why this logically happens, but I am unable to work out a
solution for it. Does anyone have experience with a scenario such as
this?

Thanks!
jlc


[squid-users] forcing squid to use a parent for a domain?

2008-12-23 Thread Carl Brewer



Hello,
I've got a squid 3.0 proxy that I'm trying to force to use an upstream 
proxy for a specific domain to get around a path MTU problem that's 
proving difficult to fix.


I have the following in my squid.conf :

cache_peer  proxy.xxx.yyy.zz  parent  8080 7  no-query 
no-netdb-exchange no-digest allow-miss

cache_peer_domain   proxy.xxx.yyy.zz  clients.brokenmtu.com.au
cache_peer_domain   proxy.xxx.yyy.zz  .brokenmtu.com.au

It worked once :

grep -i brokenmtu access.log
1230018675.341310 10.0.0.2 TCP_MISS/200 9480 GET 
http://clients.brokenmtu.com.au/ - FIRST_UP_PARENT/proxy.xxx.yyy.zz 
text/html


And then went direct and has been going direct ever since, and thus not 
working :


1230019575.025 899821 10.0.0.2 TCP_MISS/504 2538 GET 
http://clients.brokenmtu.com.au/lib/func_js.js - DIRECT/x.x.x.x text/html


I've misunderstood the documentation for cache_peer, I think. How can
I force my squid to always go via this other proxy for this domain?


thankyou

Carl




Re: [squid-users] forcing squid to use a parent for a domain?

2008-12-23 Thread Chris Robertson

Carl Brewer wrote:



Hello,
I've got a squid 3.0 proxy that I'm trying to force to use an upstream 
proxy for a specific domain to get around a path MTU problem that's 
proving difficult to fix.


I have the following in my squid.conf :

cache_peer  proxy.xxx.yyy.zz  parent  8080 7  no-query 
no-netdb-exchange no-digest allow-miss

cache_peer_domain   proxy.xxx.yyy.zz  clients.brokenmtu.com.au
cache_peer_domain   proxy.xxx.yyy.zz  .brokenmtu.com.au


acl brokenMTU dstdomain .brokenmtu.com.au
never_direct allow brokenMTU



It worked once :

grep -i brokenmtu access.log
1230018675.341310 10.0.0.2 TCP_MISS/200 9480 GET 
http://clients.brokenmtu.com.au/ - FIRST_UP_PARENT/proxy.xxx.yyy.zz 
text/html


And then went direct and has been going direct ever since, and thus 
not working :


1230019575.025 899821 10.0.0.2 TCP_MISS/504 2538 GET 
http://clients.brokenmtu.com.au/lib/func_js.js - DIRECT/x.x.x.x text/html


I've misunderstood the documentation for cache_peer, I think. How can
I force my squid to always go via this other proxy for this domain?


thankyou

Carl




Chris


RE: [squid-users] expires header

2008-12-23 Thread Ken Peng





Yes, you are right, it's the Age header ... :)
But I did some tests, and it does cache them ...



That's because the images have a Last-Modified header; Squid
calculates freshness heuristically based on it.
You can't force squid to insert max-age or Expires headers into the
response.
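
If the goal is simply to have Squid cache such images for longer, the
heuristic can be tuned with refresh_pattern (the values here are
illustrative):

# min 1 day; 20% of the object's Last-Modified age; max 1 week
refresh_pattern -i \.(gif|png|jpe?g)$ 1440 20% 10080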


Re: [squid-users] Delay pools bucket refill

2008-12-23 Thread Amos Jeffries

Johannes Buchner wrote:
Hi! 


I have a question about delay_pools: if I make a time-based ACL with a
delay pool, does the bucket refill while the ACL is inactive, or does
the amount stop and resume when the ACL becomes active again?


Pools refill at a constant rate unless they are full or reconfigured.
Client usage is not taken into consideration when filling, only when
emptying.




For example, if I have a pool ACL active from 9:00 till 20:00 with a
size of 3 GB and a rate of 1200 B/s, and a client runs the bucket low
by 20:00, what will he be able to download at 9:00 the next day?


Also, if I defined one bucket for 9:00 till 20:00 and another for
20:00 till 9:00, with different sizes and rates, would they share
their amount? Probably not, I guess.


Correct. No. They are different pools.
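
A sketch of such a setup with illustrative numbers (note that a Squid
time ACL cannot cross midnight, so the night range is split in two):

acl daytime time 09:00-20:00
acl evening time 20:00-23:59
acl night time 00:00-09:00
delay_pools 2
delay_class 1 1
delay_class 2 1
# delay_parameters <pool> <restore bytes/sec>/<bucket size bytes>
delay_parameters 1 1200/1073741824
delay_parameters 2 2400/1073741824
delay_access 1 allow daytime
delay_access 1 deny all
delay_access 2 allow evening
delay_access 2 allow night
delay_access 2 deny all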

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE11
  Current Beta Squid 3.1.0.3


Re: [squid-users] Handling websites that switch between http https

2008-12-23 Thread Amos Jeffries

Joseph L. Casale wrote:

How does one deal with this scenario? It seems that when we encounter
websites that toggle between http and https, the connection is broken.
I can see why this logically happens, but I am unable to work out a
solution for it. Does anyone have experience with a scenario such as
this?


Define 'connection'. I suspect what you think of as a connection is not 
related to HTTP connections.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.3 or 3.0.STABLE11-RC1


[squid-users] about read_timeout

2008-12-23 Thread Ken Peng

I saw this in squid.conf:

#  TAG: read_timeout  time-units
#   The read_timeout is applied on server-side connections.  After
#   each successful read(), the timeout will be extended by this
#   amount.  If no data is read again after this amount of time,
#   the request is aborted and logged with ERR_READ_TIMEOUT.  The
#   default is 15 minutes.
#
#Default:
# read_timeout 15 minutes


What are "server-side connections"? Does that mean squid to the origin
server (or peers), or clients to squid?

Why does this timeout have such a high value? Thanks.
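
In Squid's terminology the "server side" is Squid's own connection to
the origin server or a cache peer; client-to-Squid connections are
governed by other timeouts such as request_timeout. If 15 minutes is
too generous for a given setup, the value can simply be lowered:

# abort origin-server reads that stall for more than 5 minutes
read_timeout 5 minutes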


RE: [squid-users] Handling websites that switch between http https

2008-12-23 Thread Joseph L. Casale
Define 'connection'. I suspect what you think of as a connection is not 
related to HTTP connections.

Amos,
Appreciate your help here. The reason I theorized "connection" was
what happens when an SSL session is started versus a simple HTTP
session. This is all related to our users getting Yahoo mail: the
session toggles back and forth, and I suspect that is what is causing
them to be logged out of the mail interface when attempting to
download an attachment. I was thinking it had something to do with the
proxy handling the HTTP versus the proxy passing the HTTPS through.

Could I possibly tell squid to always do something with .yahoo.com so
that a session, whether HTTP or HTTPS, looks the same from a server
connection point of view?

Thanks!
jlc


Re: [squid-users] Handling websites that switch between http https

2008-12-23 Thread Amos Jeffries

Joseph L. Casale wrote:
Define 'connection'. I suspect what you think of as a connection is not 
related to HTTP connections.


Amos,
Appreciate your help here. The reason I theorized "connection" was
what happens when an SSL session is started versus a simple HTTP
session. This is all related to our users getting Yahoo mail: the
session toggles back and forth, and I suspect that is what is causing
them to be logged out of the mail interface when attempting to
download an attachment. I was thinking it had something to do with the
proxy handling the HTTP versus the proxy passing the HTTPS through.

Could I possibly tell squid to always do something with .yahoo.com so
that a session, whether HTTP or HTTPS, looks the same from a server
connection point of view?


This is closely related to the Keep-Alive: and Connection: headers of
HTTP. You can check that by following the headers sent and received by
Squid and the Yahoo server.


To get around this you would need Squid to maintain two simultaneous 
persistent connections (one for HTTP and one for HTTPS requests). I'm 
not too sure about Squid behavior in the area of duplicate connections.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.3 or 3.0.STABLE11-RC1


[squid-users] Squid 3.0 STABLE11 is available

2008-12-23 Thread Amos Jeffries

The Squid HTTP Proxy team is pleased to announce the
availability of the Squid-3.0.STABLE11 release!

The previous RC release has now completed its mandatory 14 days without 
new bugs being detected, or bad reports against the tested patches. As 
such the 3.0 code is once again considered stable enough to drop the RC 
label.


All users of Squid 3.0.STABLE10 and 3.0.STABLE11-RC1 are advised to 
migrate to this release as soon as possible.


Fixes for several bugs found since the last release are also included:

  Bug 2424: file descriptors being left unnecessarily open
  Bug 2545: fault passing ICAP-filtered traffic to peers
  Bug 2227: segfaults in MemBuf::reset during idnsSendQuery


Please refer to the release notes at
http://www.squid-cache.org/Versions/v3/3.0/RELEASENOTES.html
if and when you are ready to make the switch to Squid-3.

This new release can be downloaded from our HTTP or FTP servers

 http://www.squid-cache.org/Versions/v3/3.0/
 ftp://ftp.squid-cache.org/pub/squid-3/STABLE/

or the mirrors. For a list of mirror sites see

 http://www.squid-cache.org/Download/http-mirrors.dyn
 http://www.squid-cache.org/Download/mirrors.dyn

If you encounter any issues with this release please file a bug report.
 http://bugs.squid-cache.org/


Amos Jeffries


[squid-users] Login problem

2008-12-23 Thread Ismail OZATAY

Hi ,

I have an interesting problem when logging in to a web site. I am
using squid 2.6 on OpenBSD 4.3-stable. The web site opens without any
problem, but when I enter the username and password it waits for some
seconds and then gets a timeout error from the remote server. I have
looked at the squid logs but do not see any errors. If I log in
without squid there is no problem. Any experiences?


Regards

ismail