Re: [squid-users] how to simulate big file workload test for squid

2010-09-20 Thread Amos Jeffries
On Tue, 21 Sep 2010 10:27:12 +0800, du du  wrote:
> Hi
> I want to run a workload test for squid. I want to use the tool
> web-polygraph, which has some standard workload models. But in these
> workloads, the content size is too small for me.
> 
> Does anyone have a modified workload with big content sizes, or another
> method to simulate big-file transactions?
> 
> My file size is 1-5MB

Your own access.log history is probably the best source of such info.
The actual file contents do not matter to Squid, so they can be synthesized
to fit the logged transfer sizes, with the URLs themselves rewritten to
match for the testing.
(I have not done this on a large scale or with web-polygraph myself.)
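
If it helps, a rough sketch for mining that history (assuming the default
native "squid" access.log format, where field 5 is the reply size and
field 7 is the URL):

  # pull URL / logged-size pairs out of an existing access.log
  awk '{print $7, $5}' /var/log/squid/access.log > url-sizes.txt

Those pairs can then drive whatever generates the synthetic content.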

Amos



Re: [squid-users] Understanding the use of must-revalidate/proxy-revalidate

2010-09-20 Thread Amos Jeffries
On Tue, 21 Sep 2010 10:53:37 +0800, Sean SPALDING 
wrote:
> Hi all,
> 
> I'm struggling with the configuration of a PHP-based content management
> system served from Apache behind a squid 2.6 reverse proxy. Specifically,
> it's serving out stale content, i.e. responses that are past their
> "Expires" time.
> 
> I've added "must-revalidate" to the "Cache-Control" header but squid is
> still caching the (old) response. I thought it would re-cache the URL
> after the "Expires" time. Is that not the case?
> 
> Also note, the application is configured to send a "304 Not Modified"
> Status-Code where appropriate.
> 
> This request was made at Mon, 20 Sep 2010 05:05:45 GMT but is still
> being served by squid hours after becoming "stale".

Looks like bug #7 biting. You will need to upgrade to the latest Squid 2.7
or 3.1 which fix this in various ways.

Amos


[squid-users] Understanding the use of must-revalidate/proxy-revalidate

2010-09-20 Thread Sean SPALDING
Hi all,

I'm struggling with the configuration of a PHP-based content management system 
served from Apache behind a squid 2.6 reverse proxy. Specifically, it's serving 
out stale content, i.e. responses that are past their "Expires" time.

I've added "must-revalidate" to the "Cache-Control" header but squid is still 
caching the (old) response. I thought it would re-cache the URL after the 
"Expires" time. Is that not the case?

Also note, the application is configured to send a "304 Not Modified" 
Status-Code where appropriate.

This request was made at Mon, 20 Sep 2010 05:05:45 GMT but is still being 
served by squid hours after becoming "stale".

HTTP/1.0 200 OK
Date: Sun, 19 Sep 2010 03:42:44 GMT
Server: Apache/2.2.3 (Red Hat)
X-Powered-By: PHP/5.1.6
Expires: Sun, 19 Sep 2010 12:34:05 GMT
Cache-Control: max-age=43200, public, must-revalidate
Pragma: cache
Last-Modified: Fri, 17 Sep 2010 06:50:06 GMT
Vary: Accept-Encoding,User-Agent,X-SSL
Content-Encoding: gzip
Content-Length: 5009
Content-Type: text/html; charset=utf-8
Age: 1
X-Cache: HIT from webcms-prod02.mysite.com
X-Cache-Lookup: HIT from webcms-prod02.mysite.com:3128
Via: 1.0 webcms-prod02.mysite.com:3128 (squid/2.6.STABLE21)
Connection: keep-alive

--
Regards,

Sean.




[squid-users] how to simulate big file workload test for squid

2010-09-20 Thread du du
Hi
I want to run a workload test for squid. I want to use the tool
web-polygraph, which has some standard workload models. But in these
workloads, the content size is too small for me.

Does anyone have a modified workload with big content sizes, or another
method to simulate big-file transactions?

My file size is 1-5MB

--
thanks,
Ergod


Re: [squid-users] Performance tips for squid 3 (config file included)?

2010-09-20 Thread Amos Jeffries
On Mon, 20 Sep 2010 14:31:45 -0700, Andrei 
wrote:
> Thank you so much! I'm not sure if I understood everything, but here
> is what I have so far.
> 
> 1) 1GB of RAM in this machine (P4, 40GB IDE, 1GB RAM).
> 2) Running Squid 3.1.3 now :-)
> 3) Not sure what you meant with AUFS.  Does this need to be changed?
> cache_dir ufs /var/spool/squid3 7000 16 256

Yes:
  cache_dir aufs /var/spool/squid3 7000 16 256

> 4) Random port for interception? Like this: http_port 3128 transparent

Never mind; irrelevant due to (5).

> 5) No NATing is done on this machine.

Ah, "transparent" flag does not means what you think then.

In Squid-3.2 and older it means "traffic arriving at this port has been
redirected here via NAT in the firewall".

What did you actually want?


> 6) Added  Safe_Ports and SSL_Ports
> 
> Here is a complete config file. Please let me know if missed anything
> and thank you again!
> 
> acl manager proto cache_object
> acl localhost src 127.0.0.1/32
> acl to_localhost dst 127.0.0.0/8
> acl localnet src 176.16.0.0/21 #176.16.0.0-176.16.3.254 range

/21 includes .255. If you want to exclude the final .255 you will need to
write these as:
  acl localnet src 176.16.0.0-176.16.3.254

> acl localnet2 src 192.168.11.0/24 #192.168.11.0-254 range
> acl localnet3 src 192.168.200.0/24 #192.168.200.0-254 range
> acl SSL_ports port 443
> acl Safe_ports port 80  # http
> acl Safe_ports port 21  # ftp
> acl Safe_ports port 443 # https
> acl Safe_ports port 70  # gopher
> acl Safe_ports port 210 # wais
> acl Safe_ports port 1025-65535  # unregistered ports
> acl Safe_ports port 280 # http-mgmt
> acl Safe_ports port 488 # gss-http
> acl Safe_ports port 591 # filemaker
> acl Safe_ports port 777 # multiling http
> acl CONNECT method CONNECT
> http_access allow manager localhost
> http_access deny manager
> http_access deny !Safe_ports
> http_access deny CONNECT !SSL_ports
> http_access allow localhost
> http_access allow localnet
> http_access allow localnet2
> http_access allow localnet3
> http_access allow all #not restricted because it's behind the firewall
> and serving local LAN only. I'm just trying to get this working for
> now...

Testing with the right config is always better than changing things during
the "make live" step.

You can collapse all the localnet ranges down to a single allow, and leave
the "deny all" blocking unknown things, such as multicast-sourced requests
from internal devices and/or people piggy-backing on your LAN without
your knowledge.
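
Something like this (a sketch covering the three ranges above):

  acl localnet src 176.16.0.0-176.16.3.254 192.168.11.0/24 192.168.200.0/24
  http_access allow localnet
  http_access deny all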


> icp_access allow all
> htcp_access allow all
> http_port 3128 transparent # ok, transparent proxy, no NATing. Not
> sure what WPAD/PAC is...

Enjoy:
 http://wiki.squid-cache.org/SquidFaq/ConfiguringBrowsers


> refresh_pattern ^gopher: 1440 0% 1440
> refresh_pattern (cgi-bin|\?) 0 0% 0

The -i and the slashes around /cgi-bin/ are important:
  refresh_pattern -i (/cgi-bin/|\?) 0 0% 0

> refresh_pattern . 0 40% 40320
> icp_port 3130

ICP is used between cache_peers. If you don't use that, set this to '0'
(icp_port 0).

Amos


RE: [squid-users] Interminted TCP_DENIED

2010-09-20 Thread David Parks
So I fired up 3.2.0.2 today.

I was not able to reproduce the intermittent 407 problem in this version as 
predicted by Amos.

However I did run into some other issues:
1) A bug with digest authentication -
   Open a browser and authenticate. Now restart squid (don't close the browser)
   Try browsing to another page. This crashes squid with the following error in 
squid.out:
   "FATAL: Received Segment Violation...dying."
   It probably doesn't like receiving auth headers without following the 
typical challenge/response process.

2) Question: Is url_rewrite_concurrency gone? I get a config file warning that 
it's not recognized.
   But it's in the squid.conf.documented docs as valid.

I tried testing in 3.2.0.2-20100920 but "make install" fails with:
   forward.cc: In member function 'void FwdState::doneWithRetries()':
   forward.cc:562: error: 'class BodyPipe' has no member named 
'expectNoConsumption'

Would you like me to post #1 in bugzilla?




Re: [squid-users] Re: Persistent Server connections, pipelining and matching responses

2010-09-20 Thread Henrik Nordström
On Mon 2010-09-20 at 14:12 -0700, cachenewbie wrote:
> Thanks for the response. I should have clarified further. See inline below.
> 
> > If we queue each request and send it after receiving the response for the
> > previous one, we should be okay.
> 
> [Henrik] How would that make you do okay?
> ---> I meant that there is no problem in matching requests to responses if
> they are "sequenced" to the server as one transaction after another. 

Squid always does that, but for other reasons.

There is no trouble matching responses to requests even if you send 1000
requests before receiving the first response.

> Isn't it impossible to match the response to the request

No. Responses arrive in the same order the requests were sent; the first
response is always finished before the next response is sent.

If the client wants to abort the response to the first request then it
has to drop the TCP connection, thereby signalling "no longer interested
in that".

An HTTP client cannot modify a request already sent.
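
You can see the ordering on the wire with a quick sketch (example.com is
just a placeholder host):

  # two pipelined requests on one connection; the replies come back
  # strictly in the order the requests were sent
  (printf 'GET /a HTTP/1.1\r\nHost: example.com\r\n\r\nGET /b HTTP/1.1\r\nHost: example.com\r\n\r\n'; sleep 2) | nc example.com 80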

Video streaming maps very poorly onto HTTP once you start doing seeks etc.


Regards
Henrik



Re: [squid-users] A question about always_direct

2010-09-20 Thread Amos Jeffries
On Tue, 21 Sep 2010 06:35:49 +0800, Gemmy  wrote:
> I have a cache server running squid 2.7.9. I wrote the following
> configuration:
> acl Safe_ports port 80
> acl Domain dstdomain .china.com
> acl Domain dstdomain .haiyang2012.com
> http_access allow Safe_ports Domain
> http_access deny all
> cache_peer 10.168.168.13 parent 80 0 no-query no-netdb-exchange
> originserver round-robin
> cache_peer 10.168.170.14 parent 80 0 no-query no-netdb-exchange
> originserver round-robin
> cache_peer_access 10.168.168.13 allow Domain
> cache_peer_access 10.168.170.14 allow Domain
> always_direct allow !Domain
> 
> When I request a url like "http://military.china.com/zh_cn/etc/endpage
> /showPic.html", I can see "HTTP/1.0 OK" and so on.
> But when I request a url like
> "http://military.china.com/zh_cn/etc/endpage/showPic.html?http://image.tuku.china.com/tuku.military.china.com/military//pic/2010-09-20/b12a1145-dd40-4fcb-8ce0-1372ac934f66.jpg"
> (this url just redirects the request to
> "http://image.tuku.china.com/tuku.military.china.com/military//pic/2010-09-20/b12a1145-dd40-4fcb-8ce0-1372ac934f66.jpg"),
> squid responds with a "504 timeout"!
> I straced the squid process and saw that when squid handles a request
> containing a "?", it does not go back to the IP defined in cache_peer
> but to the IP resolved by the DNS server, which is itself!
> I changed the conf to "never_direct allow all" and the problem was solved.
> But I still think that the conf using "always_direct" is right; why does
> it not take effect??

always_direct *prevents* peers being used. It does not force them.

" hierarchy_stoplist ? " is the directive preventing the peer being used.
http://www.squid-cache.org/Doc/config/hierarchy_stoplist/
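
So either delete the default "hierarchy_stoplist cgi-bin ?" line from
squid.conf, or force the accelerated domains through the peers. A sketch
using your existing ACL:

  never_direct allow Domain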

Amos


[squid-users] A question about always_direct

2010-09-20 Thread Gemmy
 I have a cache server running squid 2.7.9. I wrote the following
configuration:
acl Safe_ports port 80
acl Domain dstdomain .china.com
acl Domain dstdomain .haiyang2012.com
http_access allow Safe_ports Domain
http_access deny all
cache_peer 10.168.168.13 parent 80 0 no-query no-netdb-exchange
originserver round-robin
cache_peer 10.168.170.14 parent 80 0 no-query no-netdb-exchange
originserver round-robin
cache_peer_access 10.168.168.13 allow Domain
cache_peer_access 10.168.170.14 allow Domain
always_direct allow !Domain

When I request a url like "http://military.china.com/zh_cn/etc/endpage
/showPic.html", I can see "HTTP/1.0 OK" and so on.
But when I request a url like
"http://military.china.com/zh_cn/etc/endpage/showPic.html?http://image.tuku.china.com/tuku.military.china.com/military//pic/2010-09-20/b12a1145-dd40-4fcb-8ce0-1372ac934f66.jpg"
(this url just redirects the request to
"http://image.tuku.china.com/tuku.military.china.com/military//pic/2010-09-20/b12a1145-dd40-4fcb-8ce0-1372ac934f66.jpg"),
squid responds with a "504 timeout"!
I straced the squid process and saw that when squid handles a request
containing a "?", it does not go back to the IP defined in cache_peer but
to the IP resolved by the DNS server, which is itself!
I changed the conf to "never_direct allow all" and the problem was solved.
But I still think that the conf using "always_direct" is right; why does
it not take effect??




RE: [squid-users] Interminted TCP_DENIED

2010-09-20 Thread David Parks
Thanks Amos.
I ran the debug mode and took some output from cache.log. Can you take a peek 
at the end of this log file? I see a "Nonce count doesn't match" around the 
time it starts to fail authentication again. The logs below were generated by:
1) open browser, navigate to google.com
2) authenticate with user test/test (successful)
3) open latimes.com
4) get authentication challenge in browser unexpectedly after some resources 
load successfully
5) enter credentials once more
6) another authentication challenge comes up immediately - stop test.

Do you think this could be the problem you mentioned? Unless my system just 
tends to exacerbate this bug for some reason, I can't imagine this wouldn't 
have been identified before (totally unusable w/ digest authentication). I'm 
using Fedora 12, and compiled Squid myself on the system.

I'd be happy to try 3.2 and do some heavy testing, but when I looked at 3.0 and 
3.1 a while back a lot of features still weren't ported over, which is why I 
went with 2.7. The logdaemon is one obvious feature. I don't remember if there 
were others. Do you know if logdaemon will be in 3.2, or is it already there? I 
should review the list of unported features again.

2010/09/20 17:22:48| authenticateValidateUser: Auth_user_request was NULL!
2010/09/20 17:22:48| authenticateAuthenticate: broken auth or no proxy_auth 
header. Requesting auth header.
2010/09/20 17:22:48| authenticateDigestNonceNew: created nonce 0x856b588 at 
1285017768
2010/09/20 17:22:50| authenticateAuthenticate: no connection authentication type
2010/09/20 17:22:50| authenticateValidateUser: Validated Auth_user request 
'0x8558b50'.
2010/09/20 17:22:50| authenticateValidateUser: Validated Auth_user request 
'0x8558b50'.
2010/09/20 17:22:50| authenticateDigestAuthenticateuser: user 'test' validated 
OK
2010/09/20 17:22:50| authenticateValidateUser: Validated Auth_user request 
'0x8558b50'.
2010/09/20 17:22:50| authenticateValidateUser: Validated Auth_user request 
'0x8558b50'.
2010/09/20 17:22:50| authenticateValidateUser: Validated Auth_user request 
'0x8558b50'.
2010/09/20 17:22:50| authenticateAuthUserRequestFree: freeing request 0x8558b50
2010/09/20 17:22:50| authenticateAuthenticate: no connection authentication type
2010/09/20 17:22:50| authenticateValidateUser: Validated Auth_user request 
'0x8558b50'.
2010/09/20 17:22:50| authenticateValidateUser: Validated Auth_user request 
'0x8558b50'.

   <<< 90 lines of similar/repeated log statements removed >>>

2010/09/20 17:22:56| authenticateAuthenticate: no connection authentication type
2010/09/20 17:22:56| authenticateValidateUser: Validated Auth_user request 
'0x858b940'.
2010/09/20 17:22:56| authenticateValidateUser: Validated Auth_user request 
'0x858b940'.
2010/09/20 17:22:56| authDigestNonceIsValid: Nonce count doesn't match
2010/09/20 17:22:56| authenticateDigestAuthenticateuser: user 'test' validated 
OK but nonce stale
2010/09/20 17:22:56| authenticateValidateUser: Validated Auth_user request 
'0x858b940'.
2010/09/20 17:22:56| authenticateValidateUser: Validated Auth_user request 
'0x858b940'.
2010/09/20 17:22:56| authenticateDigestNonceNew: created nonce 0x83d23d8 at 
1285017776
2010/09/20 17:22:56| authenticateAuthUserRequestFree: freeing request 0x858b940
2010/09/20 17:22:56| authenticateAuthenticate: no connection authentication type
2010/09/20 17:22:56| authenticateValidateUser: Validated Auth_user request 
'0x858b940'.
2010/09/20 17:22:56| authenticateValidateUser: Validated Auth_user request 
'0x858b940'.
2010/09/20 17:22:56| authDigestNonceIsValid: Nonce already invalidated
2010/09/20 17:22:56| authenticateDigestAuthenticateuser: user 'test' validated 
OK but nonce stale
2010/09/20 17:22:56| authenticateValidateUser: Validated Auth_user request 
'0x858b940'.
2010/09/20 17:22:56| authenticateValidateUser: Validated Auth_user request 
'0x858b940'.
2010/09/20 17:22:56| authenticateDigestNonceNew: created nonce 0x856b9c0 at 
1285017776
2010/09/20 17:22:56| authenticateAuthUserRequestFree: freeing request 0x858b940
2010/09/20 17:22:56| authenticateAuthenticate: no connection authentication type
2010/09/20 17:22:56| authenticateValidateUser: Validated Auth_user request 
'0x858b940'.
2010/09/20 17:22:56| authenticateValidateUser: Validated Auth_user request 
'0x858b940'.
2010/09/20 17:22:56| authDigestNonceIsValid: Nonce already invalidated
2010/09/20 17:22:56| authenticateDigestAuthenticateuser: user 'test' validated 
OK but nonce stale
2010/09/20 17:22:56| authenticateValidateUser: Validated Auth_user request 
'0x858b940'.
2010/09/20 17:22:56| authenticateValidateUser: Validated Auth_user request 
'0x858b940'.
2010/09/20 17:22:56| authenticateDigestNonceNew: created nonce 0x856b6e0 at 
1285017776
2010/09/20 17:22:56| authenticateAuthUserRequestFree: freeing request 0x858b940
2010/09/20 17:22:56| authenticateAuthenticate: no connection authentication type
2010/09/20 17:22:56| authenticateValidateUser: Validated Auth_user request 
'0x858b940'.
2010/09/20 17:22:56| 

Re: [squid-users] Performance tips for squid 3 (config file included)?

2010-09-20 Thread Andrei
Thank you so much! I'm not sure if I understood everything, but here
is what I have so far.

1) 1GB of RAM in this machine (P4, 40GB IDE, 1GB RAM).
2) Running Squid 3.1.3 now :-)
3) Not sure what you meant with AUFS.  Does this need to be changed?
cache_dir ufs /var/spool/squid3 7000 16 256
4) Random port for interception? Like this: http_port 3128 transparent
5) No NATing is done on this machine.
6) Added  Safe_Ports and SSL_Ports

Here is a complete config file. Please let me know if missed anything
and thank you again!

acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl localnet src 176.16.0.0/21 #176.16.0.0-176.16.3.254 range
acl localnet2 src 192.168.11.0/24 #192.168.11.0-254 range
acl localnet3 src 192.168.200.0/24 #192.168.200.0-254 range
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow localnet
http_access allow localnet2
http_access allow localnet3
http_access allow all #not restricted because it's behind the firewall
and serving local LAN only. I'm just trying to get this working for
now...
icp_access allow all
htcp_access allow all
http_port 3128 transparent # ok, transparent proxy, no NATing. Not
sure what WPAD/PAC is...
hierarchy_stoplist cgi-bin ?
access_log /var/log/squid3/access.log squid
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern (cgi-bin|\?) 0 0% 0
refresh_pattern . 0 40% 40320
icp_port 3130
coredump_dir /var/spool/squid3
refresh_pattern -i \.index.(html|htm)$ 0 40% 10080
refresh_pattern -i \.(html|htm|css|js)$ 1440 40% 40320
cache_mgr h...@mydomain.org
cache_dir ufs /var/spool/squid3 7000 16 256
visible_hostname gw.mydomain.org


[squid-users] Re: Persistent Server connections, pipelining and matching responses

2010-09-20 Thread cachenewbie

Thanks for the response. I should have clarified further. See inline below.

> If we queue each request and send it after receiving the response for the
> previous one, we should be okay.

[Henrik] How would that make you do okay?
---> I meant that there is no problem in matching requests to responses if
they are "sequenced" to the server as one transaction after another. 

> How will this work for HTTP progressive
> download videos that support "seek streaming" (pseudo-streaming) - The
> first
> request will result in server sending the video file (.swf file) but the
> client can send a HTTP request with a different offset (either using range
> requests or by using URLs like YouTube). When Squid sends out the second
> request to the server, how does it match the incoming bytestream to the
> actual request ? How does this get addressed for caching and non-caching
> scenarios when squid gets deployed as a proxy ?

[Henrik]It matches the response to the request, just as done for any other
request. No difference.
-> Isn't it impossible to match the response to the request - First
request to the server is to start sending the video file - Let's say this
maps to TCP fragment 1 through 1000 (assuming that the video file is 536000
bytes) - Before the entire video is sent, client sends a request (HTTP range
request or separate URL with the offset embedded) to start sending from the
middle of the video (client does a "fast forward").  Squid has to send this
to the server before receiving the complete response to the original HTTP
request. If the server hasn't sent *all* of the original video, it'll
transmit starting from the location specified in the request. Squid could
now receive TCP fragments from the original stream followed by TCP fragments
from the "mid-video" section. There is no way to identify (and cache) the
"mid-video" section separately. This makes it hard to cache segments sent in
response to range requests. This is the issue I was discussing. 


Regards
Henrik



Re: [squid-users] Re: Persistent Server connections, pipelining and matching responses

2010-09-20 Thread Henrik Nordström
On Mon 2010-09-20 at 12:19 -0700, cachenewbie wrote:

> If we queue each request and send it after receiving the response for the
> previous one, we should be okay.

How would that make you do okay?

> How will this work for HTTP progressive
> download videos that support "seek streaming" (pseudo-streaming) - The first
> request will result in server sending the video file (.swf file) but the
> client can send a HTTP request with a different offset (either using range
> requests or by using URLs like YouTube). When Squid sends out the second
> request to the server, how does it match the incoming bytestream to the
> actual request ? How does this get addressed for caching and non-caching
> scenarios when squid gets deployed as a proxy ? 

It matches the response to the request, just as done for any other
request. No difference.

Regards
Henrik



[squid-users] Re: Persistent Server connections, pipelining and matching responses

2010-09-20 Thread cachenewbie


Thanks Chad, Henrik.

If we queue each request and send it after receiving the response for the
previous one, we should be okay. How will this work for HTTP progressive
download videos that support "seek streaming" (pseudo-streaming) - The first
request will result in server sending the video file (.swf file) but the
client can send a HTTP request with a different offset (either using range
requests or by using URLs like YouTube). When Squid sends out the second
request to the server, how does it match the incoming bytestream to the
actual request ? How does this get addressed for caching and non-caching
scenarios when squid gets deployed as a proxy ? 

Thanks.


Re: [squid-users] QUESTION ABOUT CHOICE BETWEEN SQUID 2.7 or 3.1.8

2010-09-20 Thread Henrik Nordström
On Mon 2010-09-20 at 19:28 +0200, patrick.la...@inserm.fr wrote:
> 
> Hello
> 
> First of all congratulations for your great work!
> 
> I have one question for you please.
> 
> I set up two squid proxies with WCCP (+squidclamav and squidguard) but I'm
> reinstalling everything under 2 ESXi VMware hosts (VMware isn't the problem
> here). Would it be better to install the latest version 3.1.8 or version
> 2.7? Is version 3.1.8 of 20100920 a stable version?

I would use 3.1.8. It allows you to replace squidclamav with c-icap +
clamav for a better virus-scanning experience.
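
Roughly like this in squid.conf (a sketch only; the service name "av_resp"
and the /avscan URL depend on how your c-icap is configured):

  icap_enable on
  icap_service av_resp respmod_precache bypass=0 icap://127.0.0.1:1344/avscan
  adaptation_access av_resp allow all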

Regards
Henrik



Re: [squid-users] SSL Reverse Proxy to Support Multiple Web Site WITHOUT wildcard crt

2010-09-20 Thread Henrik Nordström
On Mon 2010-09-20 at 13:02 +0100, Nikolaos Pavlidis wrote:

> Unfortunately that did not work! If I define an IP address on the port
> it just stops working for some reason! squid reloads with no errors but
> access to the host times out.

Odd. It works for me, and it is needed to be able to specify the right
certificate for each site.

Try again, and pay attention to error outputs on the console when
starting squid and in cache.log.

And check your config with "squid -k parse".

Regards
Henrik



[squid-users] QUESTION ABOUT CHOICE BETWEEN SQUID 2.7 or 3.1.8

2010-09-20 Thread patrick.la...@inserm.fr



Hello

First of all congratulations for your great work!

I have one question for you please.

I set up two squid proxies with WCCP (+squidclamav and squidguard) but I'm
reinstalling everything under 2 ESXi VMware hosts (VMware isn't the problem
here). Would it be better to install the latest version 3.1.8 or version 2.7?
Is version 3.1.8 of 20100920 a stable version?


Thank you in advance.

Sincerely,

PATRICK.


* PATRICK LANOT
Responsable Régional Informatique
* Délégation régionale Inserm Midi-Pyrénées / Limousin
BP 3048 - CHU Purpan - 31024 Toulouse cedex 3
Téléphone : 05 62 74 45 30
Téléphone : 06 88 07 37 66
Télécopie  : 05 61 31 97 52
patrick.la...@inserm.fr <mailto:patrick.la...@inserm.fr>




Re: [squid-users] Performance tips for squid 3 (config file included)?

2010-09-20 Thread Amos Jeffries

On 18/09/10 06:00, Andrei wrote:

> I'm a newbie. To get Squid started all I was able to do is create the
> config below. This works but it feels like it could be a little
> faster. I have about 300 users.
> Are there any other options that you would recommend adding to this
> config file? This is my config file for Squid 3.0 on Debian, P4, 40GB
> IDE disk.


RAM?

Tip #1:  Add the backports.org repo to your list and pull squid3 (3.1) 
from there.




> refresh_pattern -i \.index.(html|htm)$ 0 40% 10080


pattern:   \.index\.(html|htm)$


> refresh_pattern -i \.(html|htm|css|js)$ 1440 40% 40320


add here:  refresh_pattern -i (/cgi-bin/|\?) 0 0% 0


> refresh_pattern . 0 40% 40320
> cache_dir ufs /var/spool/squid3 7000 16 256


AUFS

+ more disk? (that will depend on your available RAM).



> visible_hostname proxy.ourdomain.com
> http_port 176.16.0.9:3128 transparent


Use a random port for NAT interception. It only needs to be accessible 
to your local machine firewall to send packets.


Regular proxy requests arriving at this port will be slowed by useless 
NAT searches.
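
Something like this split keeps the two apart (a sketch; the port numbers
are only examples):

  http_port 3128                 # browsers configured directly or via WPAD/PAC
  http_port 3129 transparent     # NAT-redirected traffic only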


Tip #2: avoid NAT.  Use WPAD/PAC to invisibly configure the networks 
browsers and pre-filter broken domains.




> acl localnet src 176.16.0.0/255.255.248.0


acl localnet src 176.16.0.0/21

Tip #3: retain the security Safe_Ports and SSL_Ports restrictions to 
prevent internal viral/spam spreading.



> http_access allow localnet
> debug_options ALL,1
> access_log /var/log/squid3/access.log squid


Check and be sure about your response times. They might surprise you one 
way or the other:

  squidclient mgr:info

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.8
  Beta testers wanted for 3.2.0.2


Re: [squid-users] Alerting when cache Peer is used.

2010-09-20 Thread Amos Jeffries

On 20/09/10 19:49, GIGO . wrote:


> 2010/09/20 12:40:56| WARNING: Forwarding loop detected for:
> Client: 10.25.88.175 http_port: 10.1.82.175:8080
>
> As far as alerts are concerned I got your point, thanks!
>
> I am getting these kinds of messages in my cache.log. Can I ignore these
> warnings given my requirements (internet backup path of each other), or do
> I need to make some configuration changes? Please guide.



A small worry.

You can get rid of them by adding an ACL to your cache_peer entries 
forbidding relaying to the peer if the request was received from it.


This will convert the forwarding loops and some types of "hung" request 
into clear failure pages indicating that all paths are working.
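
A sketch, assuming the peer is named B and sends from 10.1.82.175
(substitute your real peer names/addresses on each box):

  acl frompeer src 10.1.82.175
  cache_peer_access B deny frompeer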


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.8
  Beta testers wanted for 3.2.0.2


Re: [squid-users] SSL Reverse Proxy to Support Multiple Web Site WITHOUT wildcard crt

2010-09-20 Thread Amos Jeffries

On 21/09/10 00:02, Nikolaos Pavlidis wrote:

> Hello Amos, all,
>
> Many thanks for taking a look at my config!
>
> Comments inline (easier)

> On Fri, 2010-09-17 at 23:17 +1200, Amos Jeffries wrote:

>> On 17/09/10 19:32, Nikolaos Pavlidis wrote:

>>> Hello Amos, all,
>>>
>>> Thank you for your response. As far as understanding what you mean I do
>>> (that's something at least) but I fail to see how this will be syntaxed


>> Answers inline.



>>> My config is as follows, please advise (this is not working of course):
>>>
>>> # NETWORK OPTIONS
>>> # -
>>> http_port 80 accel defaultsite=www.domain.com vhost
>>> https_port 443 cert=/etc/squid/uob/sid_domain.crt
>>> key=/etc/squid/uob/sid_domain.key cafile=/etc/squid/uob/sid_domain.ca
>>> defaultsite=sid.domain.com vhost
>>>
>>> https_port 443 cert=/etc/squid/uob/helpdesk_domain.crt
>>> key=/etc/squid/uob/helpdesk_domain.key
>>> cafile=/etc/squid/uob/helpdesk_domain.ca defaultsite=helpdesk.domain.com
>>> vhost

>> The public-facing IP address is needed to open multiple same-numbered ports.
>>
>> (wrapped for easy reading)

>> https_port 10.0.0.1:443 accel vhost defaultsite=sid.domain.com
>>  cert=/etc/squid/uob/sid_domain.crt
>>  key=/etc/squid/uob/sid_domain.key
>>  cafile=/etc/squid/uob/sid_domain.ca
>>
>> https_port 10.0.0.2:443 accel vhost defaultsite=helpdesk.domain.com
>>  cert=/etc/squid/uob/helpdesk_domain.crt
>>  key=/etc/squid/uob/helpdesk_domain.key
>>  cafile=/etc/squid/uob/helpdesk_domain.ca



> Unfortunately that did not work! If I define an IP address on the port
> it just stops working for some reason! squid reloads with no errors but
> access to the host times out.



SSL is on the edge of my knowledge field. This is a bit of a black box 
to me now.


Hopefully someone else here knows more details of what to test.


To me it sounds a little like the SSL layer is failing to be set up or 
something. For example if the IP does not match the certificate info 
domain rDNS, or Host: domain matching the cert, etc.
debug_options 83,6 may have something relevant if it's something 
detected by Squid.
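
One thing worth checking from outside (substitute your real IP and
hostname; -servername selects the site via SNI):

  # confirm the port answers and see which certificate is actually served
  openssl s_client -connect 10.0.0.1:443 -servername sid.domain.com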






>>> # OPTIONS FOR TUNING THE CACHE
>>> # -
>>> refresh_pattern ^ftp: 1440 20% 10080
>>> refresh_pattern ^gopher: 1440 0% 1440
>>> refresh_pattern -i \.css 1440 50% 2880 override-expire
>>> refresh_pattern -i \.swf 1440 50% 2880 ignore-reload override-expire


>> Missing:
>> refresh_pattern -i (/cgi-bin/|\?) 0 0% 0


> That is actually not suggested for our CMS at the moment :/



Huh? It specifies that dynamic pages are not to be cached unless they
have Cache-Control/Expires. Not having this causes dynamic pages to be
stored for maybe long periods after they should have been updated.

If there are parts of the site that it matches and are supposed to be
cached for a while, add rules above it for those specific site parts.
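
For example (a sketch; the path is hypothetical, adjust to your CMS):

  # cacheable dynamic sections listed above the generic rule
  refresh_pattern -i /cms/news/ 1440 50% 2880
  refresh_pattern -i (/cgi-bin/|\?) 0 0% 0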


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.8
  Beta testers wanted for 3.2.0.2


Re: [squid-users] SSL Reverse Proxy to Support Multiple Web Site WITHOUT wildcard crt

2010-09-20 Thread Nikolaos Pavlidis
Hello Amos, all,

Many thanks for taking a look at my config!

Comments inline (easier)

On Fri, 2010-09-17 at 23:17 +1200, Amos Jeffries wrote:
> On 17/09/10 19:32, Nikolaos Pavlidis wrote:
> > Hello Amos, all,
> >
> > Thank you for your response. As far as understanding what you mean I do
> > (that's something at least) but I fail to see how this will be syntaxed
> 
> Answers inline.
> 
> >
> > My config is as follows, please advise (this is not working of course):
> >
> > # NETWORK OPTIONS
> > #
> > -
> > http_port 80 accel defaultsite=www.domain.com vhost
> > https_port 443 cert=/etc/squid/uob/sid_domain.crt
> > key=/etc/squid/uob/sid_domain.key cafile=/etc/squid/uob/sid_domain.ca
> > defaultsite=sid.domain.com vhost
>  >
>  > https_port 443 cert=/etc/squid/uob/helpdesk_domain.crt
>  > key=/etc/squid/uob/helpdesk_domain.key
>  > cafile=/etc/squid/uob/helpdesk_domain.ca defaultsite=helpdesk.domain.com
>  > vhost
> 
> The public-facing IP address is needed to open multiple same-numbered ports.
> 
> (wrapped for easy reading)
> 
> https_port 10.0.0.1:443 accel vhost defaultsite=sid.domain.com
> cert=/etc/squid/uob/sid_domain.crt
> key=/etc/squid/uob/sid_domain.key
> cafile=/etc/squid/uob/sid_domain.ca
> 
> https_port 10.0.0.2:443 accel vhost defaultsite=helpdesk.domain.com
> cert=/etc/squid/uob/helpdesk_domain.crt
> key=/etc/squid/uob/helpdesk_domain.key
> cafile=/etc/squid/uob/helpdesk_domain.ca
> 
> 
Unfortunately that did not work! If I define an IP address on the port
it just stops working for some reason! squid reloads with no errors but
access to the host times out.

> > visible_hostname www.domain.com
> > unique_hostname cache1.domain.com
> > offline_mode off
> > icp_port 3130
> > request_body_max_size 32 MB
> >
> > # OPTIONS WHICH AFFECT THE CACHE SIZE
> > #
> > -
> > cache_mem 4096 MB
> > maximum_object_size 8 MB
> > maximum_object_size_in_memory 256 KB
> >
> > # LOGFILE PATHNAMES AND CACHE DIRECTORIES
> > #
> > -
> > cache_dir aufs /var/cache/squid 61440 16 256
> > emulate_httpd_log on
> > logfile_rotate 100
> > logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
> > access_log /var/log/squid/access.log combined
> 
> Just for my interest how does forcing apache "common" format with 
> emulate_httpd_log mix with explicitly forcing a locally defined 
> "combined" format?
>   Which one do you expect to be used in the log?
> 
Good spot! DOH! :)

> > cache_log /var/log/squid/cache.log
> > cache_store_log /var/log/squid/store.log
> 
> Only if you need it. Otherwise:
>   cache_store_log none
> 
> > debug_options ALL,1,33,3,20,3
> 
> (space needed between each section,level option pair.)
> debug_options ALL,1 33,3 20,3
> 
Another good one!

> >
> > # OPTIONS FOR EXTERNAL SUPPORT PROGRAMS
> > #
> > -
> > auth_param basic children 10
> > auth_param basic realm Squid proxy-caching web server
> > auth_param basic credentialsttl 2 hours
> > auth_param basic casesensitive off
> >
> > # OPTIONS FOR TUNING THE CACHE
> > #
> > -
> > refresh_pattern ^ftp: 1440 20% 10080
> > refresh_pattern ^gopher: 1440 0% 1440
> > refresh_pattern -i \.css 1440 50% 2880 override-expire
> > refresh_pattern -i \.swf 1440 50% 2880 ignore-reload override-expire
> 
> Missing:
> refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
> 
That is actually not suggested for our CMS at the moment :/


The rest were spot on as usual and I applied all of them in the running
configuration.

Any suggestions on how to proceed with the SSL?
Many thanks in advance.

Kind regards,

Nik

-- 
Nikolaos Pavlidis BSc (Hons) MBCS NCLP CEH CHFI
Systems Administrator
University Of Bedfordshire
Park Square LU1 3JU
Luton, Beds, UK
Tel: +441582489277 (Ext 2277)



RE: [squid-users] Alerting when cache Peer is used.

2010-09-20 Thread GIGO .

2010/09/20 12:40:56| WARNING: Forwarding loop detected for:
Client: 10.25.88.175 http_port: 10.1.82.175:8080

As far as alerts are concerned I got your point, thanks!
 
I am getting these kinds of messages in my cache.log. Can I ignore these 
warnings given my requirements (internet backup path of each other), or do I 
need to make some configuration changes? Please guide.
 
thanking you &
 
regards,
 
Bilal Aslam


> Date: Fri, 17 Sep 2010 23:31:55 +1200
> From: squ...@treenet.co.nz
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] Alerting when cache Peer is used.
>
> On 17/09/10 23:14, GIGO . wrote:
>>
>> I have configured my proxy servers in two regions for backup internet path 
>> of each other by declaring the following directives.
>>
>> Directives on Proxy A:
>>
>> cache_peer A parent 8080 0 proxy-only
>> prefer_direct on
>> nonhierarchical_direct off
>> cache_peer_access A allow all
>>
>>
>> Directives on Proxy B:
>>
>> cache_peer B parent 8080 0 proxy-only
>> prefer_direct on
>> nonhierarchical_direct off
>> cache_peer_access B allow all
>>
>>
>> Is there a way that whenever a peer cache is used an email alert is 
>> generated to the admins.
>>
>
> Not from Squid. That is a job for network availability software.
>
> You could hack up a script to scan squid access.log for the peer
> hierarchy codes (DIRECT or FIRST_UP_PARENT etc) being used.
>
>
> Note that the setting is only "prefer" _direct. It can go to the peer
> with perfectly working network access if the origin web server simply
> takes too long to reply to a connect attempt.
>
> Amos
> --
> Please be using
> Current Stable Squid 2.7.STABLE9 or 3.1.8
> Beta testers wanted for 3.2.0.2 

RE: [squid-users] Strange performance effects on squid during off peak hours

2010-09-20 Thread Martin Sperl
Hi!

I have run a test with ab running 10000 hits with a single thread loading an 
image with a spacing of 5 minutes of inactivity, and there is still a 
"variation" even measurable with apache bench:
Date       time     mean RT [ms] measured by AB
2010-09-16 02:55:12 37.033
2010-09-16 03:01:22 37.633
2010-09-16 03:07:38 37.245
2010-09-16 03:13:51 38.867
2010-09-16 03:20:20 41.326
2010-09-16 03:29:09 38.427
2010-09-16 03:40:33 40.313
2010-09-16 03:52:16 41.049
2010-09-16 04:02:12 42.100
2010-09-16 04:19:13 40.650
2010-09-16 04:36:00 37.490
2010-09-16 04:52:15 36.126
2010-09-16 05:08:16 37.390
2010-09-16 05:24:30 34.031
2010-09-16 05:40:10 30.392
2010-09-16 05:55:14 26.779
2010-09-16 06:09:42 24.118
2010-09-16 06:23:43 24.283
2010-09-16 06:37:46 24.423
2010-09-16 06:51:50 23.334
2010-09-16 07:05:44 23.633
2010-09-16 07:19:40 22.333
2010-09-16 07:33:24 21.460
2010-09-16 07:46:58 20.632
2010-09-16 08:00:25 21.047
2010-09-16 08:13:55 20.049
2010-09-16 08:27:16 18.903
2010-09-16 08:40:25 19.244
2010-09-16 08:53:37 21.181
2010-09-16 09:07:09 21.196
2010-09-16 09:20:41 19.102
2010-09-16 09:33:52 19.755
2010-09-16 09:47:10 18.674
2010-09-16 10:00:16 18.832
2010-09-16 10:13:25 17.063
2010-09-16 10:26:16 18.207
2010-09-16 10:39:18 18.328
2010-09-16 10:52:21 17.980
2010-09-16 11:05:23 17.868
2010-09-16 11:18:21 17.417
2010-09-16 11:31:16 16.421
2010-09-16 11:44:00 17.059
2010-09-16 11:56:51 17.350
2010-09-16 12:09:44 16.641
2010-09-16 12:22:31 18.211
2010-09-16 12:35:33 16.686
2010-09-16 12:48:20 17.278
2010-09-16 13:01:13 17.172
2010-09-16 13:14:05 16.528
2010-09-16 13:26:50 16.124
2010-09-16 13:39:31 16.353
2010-09-16 13:52:15 18.287
2010-09-16 14:05:18 16.728
2010-09-16 14:18:05 17.055
2010-09-16 14:30:56 17.452
2010-09-16 14:43:50 16.491
2010-09-16 14:56:35 15.851
2010-09-16 15:09:14 16.407
2010-09-16 15:21:58 15.822
2010-09-16 15:34:36 17.049
2010-09-16 15:47:27 16.052
2010-09-16 16:00:07 16.307
2010-09-16 16:12:50 16.408
2010-09-16 16:25:34 17.201
2010-09-16 16:38:26 16.686
2010-09-16 16:51:13 16.076
2010-09-16 17:03:54 17.277
2010-09-16 17:16:47 16.468
2010-09-16 17:29:32 14.842
2010-09-16 17:42:00 15.721
2010-09-16 17:54:37 15.734
2010-09-16 18:07:15 16.160
2010-09-16 18:19:56 16.131
2010-09-16 18:32:38 15.951
2010-09-16 18:45:17 14.994
2010-09-16 18:57:47 15.365
2010-09-16 19:10:21 16.774
2010-09-16 19:23:09 17.303
2010-09-16 19:36:02 16.790
2010-09-16 19:48:50 16.421
2010-09-16 20:01:34 16.380
2010-09-16 20:14:18 15.523
2010-09-16 20:26:53 16.499
2010-09-16 20:39:38 16.596
2010-09-16 20:52:24 16.116
2010-09-16 21:05:05 16.445
2010-09-16 21:17:50 15.919
2010-09-16 21:30:29 16.928
2010-09-16 21:43:18 15.841
2010-09-16 21:55:57 16.378
2010-09-16 22:08:41 17.232
2010-09-16 22:21:33 15.755
2010-09-16 22:34:11 17.264
2010-09-16 22:47:03 18.250
2010-09-16 23:00:06 18.700
2010-09-16 23:13:13 19.165
2010-09-16 23:26:25 23.088
2010-09-16 23:40:15 23.505
2010-09-16 23:54:11 22.105
2010-09-17 00:07:52 23.635
2010-09-17 00:21:48 29.841
2010-09-17 00:36:46 29.847
2010-09-17 00:51:45 31.886
2010-09-17 01:07:04 31.010
2010-09-17 01:22:14 33.142
2010-09-17 01:37:46 35.977
2010-09-17 01:53:45 38.067
2010-09-17 02:10:06 38.245
2010-09-17 02:26:29 39.521
2010-09-17 02:43:04 39.803
2010-09-17 02:59:42 34.372
2010-09-17 03:15:26 33.135

The test was done on a server sitting next to the one tested, in the same 
network segment.
Command executed: ab -n 10000 -c 1 -X :3128 http:///

As you can see there is still a variation even though ab is producing lots of 
hits/s: something like 30 hits/s.

This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking  [through :3128] (be patient)
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
Completed 10000 requests
Finished 10000 requests


Server Software:Apache-Coyote/1.1
Server Hostname:
Server Port:80

Document Path:  /
Document Length:927 bytes

Concurrency Level:  1
Time taken for tests:   331.351 seconds
Complete requests:  10000
Failed requests:0
Write errors:   0
Total transferred:  13320000 bytes
HTML transferred:   9270000 bytes
Requests per second:30.18 [#/sec] (mean)
Time per request:   33.135 [ms] (mean)
Time per request:   33.135 [ms] (mean, across all concurrent requests)
Transfer rate:  39.26 [Kbytes/sec] received

Connection Times (ms)
  min  mean[+/-sd] median   max
Connect:        0    0   1.8      0     130
Processing: 0   33  17.1 31 422
Waiting:0   33  17.1 31 422
Total:  0   33  17.3 32 423

Percentage of the requests served within a certain time (ms)
  50% 32
  66% 40
  75% 46
  80% 50
  90% 58