Re: [squid-users] squid redirecting attempted downloads

2011-08-22 Thread Helmut Hullen
Hello, Dave,

You wrote on 22.08.11:

 We are having an issue where users try to download a file (an email
 attachment, setup file, etc.) and are redirected to a page on our
 intranet that says something about file downloads not being allowed.
 The person I took over from here says that it may be something
 configured in the squid.conf file.

 I found the file but have no idea how to disable or modify this
 setting.

What's the name of the file?
Can you find this name somewhere in the squid.conf?

Or does squid invoke some other program like squidGuard?

Best regards!
Helmut


[squid-users] index key generation mechanism?

2011-08-22 Thread Raymond Wang
hi all:



For the file somejs.js, there are two URLs referring to it, for
example url1 is http://www.a.com/somejs.js and url2 is
http://www.a.com/somejs.js.

By default, squid uses the above URLs to generate keys that are used
as index keys to write/read the somejs.js file to/from memory.
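By way of illustration, the index key is (roughly) a hash over the request
method and the full public URL, so any textual difference between two URLs
yields two different keys. A simplified sketch, not Squid's actual code; the
query strings below are hypothetical:

```python
import hashlib

def store_key(method, url):
    """Simplified sketch: Squid's store index key is an MD5 hash
    computed over the request method and the full public URL."""
    return hashlib.md5((method + " " + url).encode()).hexdigest()

# Two URLs that differ in any way produce two different keys, so the
# same somejs.js content ends up stored as two separate objects.
k1 = store_key("GET", "http://www.a.com/somejs.js?v=1")  # hypothetical URLs
k2 = store_key("GET", "http://www.a.com/somejs.js?v=2")
assert k1 != k2
```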

My question is: could I affect the index key generation, so that
squid could save the somejs.js file in memory as only one object? For
example, we could trim url1 http://www.a.com/somejs.js to somejs.js
and url2 http://www.a.com/somejs.js to somejs.js; the key would then
be somejs.js, i.e. use the file name (or some variant based on it) as
the index key. This way we can save the two different URLs (the
referenced files have the same content) as only one object in Squid.

is it possible?


thanks in advance!


-- 

Best Regards
rmn190


Re: [squid-users] squid redirecting attempted downloads

2011-08-22 Thread Helmut Hullen
Hello, Dave,

You wrote on 22.08.11:

 We are having an issue where users try to download a file (an email
 attachment, setup file, etc.) and are redirected to a page on our
 intranet that says something about file downloads not being
 allowed. The person I took over from here says that it may be
 something configured in the squid.conf file.

 I meant I found the squid.conf file.

Then please show the squid.conf.

Best regards!
Helmut


[squid-users] ICAP Bypassing Causing Performance Issues

2011-08-22 Thread Justin Lawler
Hi,

We have had to put in a number of URLs to the squid bypass

icap_service service_1 reqmod_precache 0 icap://127.0.0.1:1344/reqmod
icap_class class_1 service_1

acl bypassIcapRequestURLregex urlpath_regex 
./squid-3/etc/byPass_ICAP_request_URLregex.properties
icap_access class_1 deny bypassIcapRequestURLregex


When we added 4 regular expressions to this file, we started to see the CPU 
usage going up quite a bit, and we started to see the number of established 
connections from squid to ICAP server double or triple.

Is this a known issue? Is there a better/more efficient way to bypass ICAP than 
above? 

Regular expressions were very simple, just matching end of URLs.

We're running squid 3.0.15 on Solaris 10.

Thanks and regards,
Justin
This message and the information contained herein is proprietary and 
confidential and subject to the Amdocs policy statement,
you may review at http://www.amdocs.com/email_disclaimer.asp



Re: [squid-users] squid using windows seven and

2011-08-22 Thread Amos Jeffries

On 22/08/11 17:57, Xavier Magnaudeix wrote:

Hi list,

When surfing to http://privilege.ft.com/signin/news using
squid-3.0.STABLE7-4 with Windows 7 (IE8), I can never see the page when I
click on register.
Using Windows XP + IE8 and the same proxy, it works.
Using Firefox on Windows 7, it works too.

Can someone give it a try with Seven + IE8 and see if you get the same
issue? And maybe see what is going on?

Here come the logs:

1313683343.451  5 10.34.36.57 TCP_MISS/200 1695 GET
http://privilege.ft.com/sites/all/themes/ftprivilege/img/header-signin-middl
e.png csov FIRST_UP_PARENT/127.0.0.1 image/png
1313683343.453  5 10.34.36.57 TCP_MISS/200 1591 GET
http://privilege.ft.com/sites/all/themes/ftprivilege/img/mask-bg.png csov
FIRST_UP_PARENT/127.0.0.1 image/png
1313683343.490 15 10.34.36.57 TCP_MISS/200 475 GET
http://www.google-analytics.com/__utm.gif? - DIRECT/209.85.146.139 image/gif
1313683343.494  1 10.34.36.57 TCP_MISS/200 5391 GET
http://privilege.ft.com/sites/all/themes/ftprivilege/img/footer-bg.png csov
FIRST_UP_PARENT/127.0.0.1 image/png
1313683343.499  0 10.34.36.57 TCP_DENIED/407 5011 GET
http://privilege.ft.com/sites/all/themes/ftprivilege/img/deal-submit-button.
png - NONE/- text/html
1313683343.554  4 10.34.36.57 TCP_MISS/200 5364 GET
http://privilege.ft.com/sites/all/themes/ftprivilege/img/deal-submit-button.
png csov FIRST_UP_PARENT/127.0.0.1 image/png
1313683367.376 28 10.34.36.57 TCP_DENIED/407 3886 CONNECT
registration.ft.com:443 - NONE/- text/html
1313683367.428  0 10.34.36.57 TCP_DENIED/407 4236 CONNECT
registration.ft.com:443 - NONE/- text/html
1313683368.013    214 10.34.36.57 TCP_DENIED/407 4134 GET
http://ocsp.verisign.com/MFEwTzBNMEswSTAJBgUrDgMCGgUABBRsK8Var42Wv2Ct%2BB0CP
yO0igBZwgQUpe8LEc7AQQOjSmWQSLIc4FctfUcCEFRg9%2Fk0x%2FWn%2Fudlr2CmcNo%3D -
NONE/- text/html
1313683368.067  0 10.34.36.57 TCP_DENIED/407 4484 GET
http://ocsp.verisign.com/MFEwTzBNMEswSTAJBgUrDgMCGgUABBRsK8Var42Wv2Ct%2BB0CP
yO0igBZwgQUpe8LEc7AQQOjSmWQSLIc4FctfUcCEFRg9%2Fk0x%2FWn%2Fudlr2CmcNo%3D -
NONE/- text/html
1313683368.140  0 10.34.36.57 TCP_DENIED/407 4404 GET
http://ocsp.verisign.com/MFEwTzBNMEswSTAJBgUrDgMCGgUABBRsK8Var42Wv2Ct%2BB0CP
yO0igBZwgQUpe8LEc7AQQOjSmWQSLIc4FctfUcCEFRg9%2Fk0x%2FWn%2Fudlr2CmcNo%3D -
NONE/- text/html
1313683368.408    174 10.34.36.57 TCP_DENIED/407 3744 GET
http://svrsecure-g2-crl.verisign.com/SVRSecureG2.crl - NONE/- text/html
1313683368.461  0 10.34.36.57 TCP_DENIED/407 4094 GET
http://svrsecure-g2-crl.verisign.com/SVRSecureG2.crl - NONE/- text/html
1313683368.533  0 10.34.36.57 TCP_DENIED/407 4014 GET
http://svrsecure-g2-crl.verisign.com/SVRSecureG2.crl - NONE/- text/html
1313683368.600   1126 10.34.36.57 TCP_MISS/200 3214 CONNECT
registration.ft.com:443 csov DIRECT/62.25.103.202 -
1313683368.629  0 10.34.36.57 TCP_DENIED/407 3886 CONNECT
registration.ft.com:443 - NONE/- text/html
1313683368.684  0 10.34.36.57 TCP_DENIED/407 4236 CONNECT
registration.ft.com:443 - NONE/- text/html
1313683368.811 81 10.34.36.57 TCP_MISS/200 74 CONNECT
registration.ft.com:443 csov DIRECT/62.25.103.202 -

If I uncheck the proxy settings on Seven/IE8 it goes through, even when
I check it again afterwards. It seems that if it has worked once without
the proxy, it'll work forever with the proxy…



I spy a bunch of 407 + CONNECT events. Would that happen to be NTLM auth 
taking place? There is a keep-alive bug (#3213) in Squid-3 CONNECT 
handling that breaks the NTLM handshake.


NP: this is where I'd normally say upgrade. But I'm still working on 
making Windows simply build again :(


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.10


Re: [squid-users] index key generation mechanism?

2011-08-22 Thread Amos Jeffries

On 22/08/11 18:28, Raymond Wang wrote:

hi all:



For the file somejs.js, there are two URLs referring to it, for
example url1 is http://www.a.com/somejs.js and url2 is
http://www.a.com/somejs.js.

By default, squid uses the above URLs to generate keys that are used
as index keys to write/read the somejs.js file to/from memory.

My question is: could I affect the index key generation, so that
squid could save the somejs.js file in memory as only one object? For
example, we could trim url1 http://www.a.com/somejs.js to somejs.js
and url2 http://www.a.com/somejs.js to somejs.js; the key would then
be somejs.js, i.e. use the file name (or some variant based on it) as
the index key. This way we can save the two different URLs (the
referenced files have the same content) as only one object in Squid.

is it possible?



Possible? Yes. Easy? No.

The key Squid uses is the public URL which the client is asking for.

YouTube is a well-known website which behaves like you describe. It is a 
serious nightmare for a great many network admins.

http://wiki.squid-cache.org/ConfigExamples/DynamicContent/YouTube

The experimental storeurl_rewrite feature in squid-2.7 does exactly 
what you describe.
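For example, a storeurl helper for squid-2.7 might look like the sketch
below (assumptions: the helper protocol matches the url_rewrite one, i.e.
one request per line on stdin with the URL as the first token, and the
rewritten store-URL or a blank line on stdout; the domain and the
.squidinternal canonical form are illustrative only). It would be wired up
with something like "storeurl_rewrite_program /usr/local/bin/storeurl.py"
in squid.conf:

```python
#!/usr/bin/env python
# Sketch of a squid-2.7 storeurl helper: collapse every variant URL of
# somejs.js on www.a.com down to one canonical store URL, so Squid
# caches a single object for all of them.
import re
import sys

PATTERN = re.compile(r'^http://www\.a\.com/somejs\.js(\?.*)?$')
CANONICAL = 'http://www.a.com.squidinternal/somejs.js'

def store_url(url):
    """Return the canonical store URL for matching request URLs,
    or None to store the URL under its own key."""
    return CANONICAL if PATTERN.match(url) else None

def main():
    for line in sys.stdin:
        url = line.split(None, 1)[0]       # first token is the URL
        sys.stdout.write((store_url(url) or '') + '\n')
        sys.stdout.flush()                 # helpers must not buffer

if __name__ == '__main__' and not sys.stdin.isatty():
    main()
```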



Consider that 1) you must already have a list of patterns for 
matching, and 2) you thus know the location of at least one instance.


The safer, friendlier, and HTTP-compliant method is simply to set up a 
url_rewrite_program helper which tests URLs against your patterns and 
emits 303:$new_url when it finds a match on GET requests.


 ** by safer and friendlier, I mean that instead of potentially 
poisoning caches all over the world with broken or corrupt data (see 
the recent T-Mobile problems) all you do is break any load balancing on 
the websites in question.
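A minimal sketch of such a url_rewrite_program helper (assuming the
standard helper protocol: one request per line on stdin with the URL as the
first field and the method as the fourth, and either a blank line or a
"303:new-url" answer on stdout; the pattern and canonical URL here are
hypothetical):

```python
#!/usr/bin/env python
# Sketch of a url_rewrite helper: answer "303:<canonical URL>" for GET
# requests on known duplicate URLs, and a blank line (no change) for
# everything else.
import re
import sys

# Hypothetical pattern -> canonical URL list; replace with your own.
PATTERNS = [
    (re.compile(r'^http://www\.a\.com/somejs\.js\?'),
     'http://www.a.com/somejs.js'),
]

def rewrite(url, method):
    """Return '303:<canonical>' for matching GET requests, else None."""
    if method != 'GET':
        return None
    for pattern, canonical in PATTERNS:
        if pattern.match(url):
            return '303:' + canonical
    return None

def main():
    for line in sys.stdin:
        fields = line.split()
        url = fields[0]
        method = fields[3] if len(fields) > 3 else 'GET'
        sys.stdout.write((rewrite(url, method) or '') + '\n')
        sys.stdout.flush()

if __name__ == '__main__' and not sys.stdin.isatty():
    main()
```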


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.10


Re: [squid-users] ICAP Bypassing Causing Performance Issues

2011-08-22 Thread Amos Jeffries

On 23/08/11 00:03, Justin Lawler wrote:

Hi,

We have had to put in a number of URLs to the squid bypass

icap_service service_1 reqmod_precache 0 icap://127.0.0.1:1344/reqmod
icap_class class_1 service_1

acl bypassIcapRequestURLregex urlpath_regex 
./squid-3/etc/byPass_ICAP_request_URLregex.properties
icap_access class_1 deny bypassIcapRequestURLregex


When we added 4 regular expressions to this file, we started to see the CPU 
usage going up quite a bit, and we started to see the number of established 
connections from squid to ICAP server double or triple.

Is this a known issue? Is there a better/more efficient way to bypass ICAP than 
above?


Other than using other ACL types, no.



Regular expressions were very simple, just matching end of URLs.


a) regex is a bit slow. Did you remember to anchor the ends, manually 
aggregate the patterns, and avoid extended-regex pattern tricks?


b) URLs can be many KB in length. That can make URL regex very CPU 
intensive.


c) routing selection ACLs are run multiple times per request.

You can turn on access control debugging (level 28,3) to see how many 
times those are run and how long each test takes.
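As a sketch of points a) and b): four separate unanchored patterns each
force a scan of the whole (possibly multi-KB) URL, while one hand-aggregated,
end-anchored alternation does a single pass. The file names here are
hypothetical:

```python
import re

# Four separate, unanchored patterns: every URL is scanned up to
# four times per test.
slow = [re.compile(p) for p in
        (r'.*page_a\.html', r'.*page_b\.html',
         r'.*page_c\.html', r'.*page_d\.html')]

# One manually aggregated, end-anchored alternation: a single pass.
fast = re.compile(r'/page_(a|b|c|d)\.html$')

url = 'http://example.com/some/deep/path/page_b.html'
assert any(p.search(url) for p in slow)
assert fast.search(url) is not None
```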




We're running squid 3.0.15 on Solaris 10.




Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.10


[squid-users] Accelerating proxy not matching cgi files

2011-08-22 Thread Mateusz Buc
Hello,

at the beginning I would like to mention that I've already searched for
the answer to my question and found similar topics, but none of them
helped me completely solve my problem.

The thing is, I have a monitoring server with a CGI-scripted site on it.
The site fetches various data and generates charts on the fly. Right now
it is only available via HTTPS with htaccess-type authorization.

The bad thing is that it is browsed quite often, and every time it gets
HTTP requests it has to generate all of the charts (quite a lot of
them) on the fly, which not only makes loading the page slow but also
affects the server's performance.

These are the 4 most important things about the site:
* index.cgi - checks current timestamps and generates the proper GET
requests to generate images via gen.cgi
* gen.cgi - receives parameters via GET from index.cgi and draws charts
* images ARE NOT files placed on the server, but gen.cgi links
(e.g. gen.cgi?icon,moni_sys_procs,1314022200,1,161.6,166.4,FF...)
* image generation links contain the most up-to-date timestamp for
every image

What I want to do is to set up another server in the middle, which would
run squid and act as a transparent, accelerating proxy. My main
problem is that squid doesn't want to cache anything at all. My goal
is to:

* cache index.cgi for at most 1 minute - since it provides the data
needed to generate the charts
* somehow cache the images generated on the fly for as long as there
aren't new ones in index.cgi (only possible if the timestamp has changed)

To make it simpler to develop, I've temporarily disabled authorization,
so my config looks like:
#
http_port 5080 accel defaultsite=.pl ignore-cc

# HTTP peer
cache_peer 11.11.11.11 parent 5080 0 no-query originserver name=.pl

hierarchy_stoplist cgi-bin cgi ?

refresh_pattern (\.cgi|\?)  0   0%  0
refresh_pattern .   0   20% 4320

acl our_sites dstdomain .pl
http_access allow our_sites
cache_peer_access .pl allow our_sites
cache_peer_access .pl deny all
##

Unfortunately, access.log looks like this:

1314022248.996 66 127.0.0.1 TCP_MISS/200 432 GET
http://.pl/gen.cgi? - FIRST_UP_PARENT/.pl image/png
1314022249.041 65 127.0.0.1 TCP_MISS/200 491 GET
http://.pl/gen.cgi? - FIRST_UP_PARENT/.pl image/png
1314022249.057 65 127.0.0.1 TCP_MISS/200 406 GET
http://.pl/gen.cgi? - FIRST_UP_PARENT/.pl image/png
1314022249.058 62 127.0.0.1 TCP_MISS/200 438 GET
http://.pl/gen.cgi? - FIRST_UP_PARENT/.pl image/png
1314022249.062 68 127.0.0.1 TCP_MISS/200 458 GET
http://.pl/gen.cgi? - FIRST_UP_PARENT/.pl image/png
1314022249.107 64 127.0.0.1 TCP_MISS/200 482 GET
http://.pl/gen.cgi? - FIRST_UP_PARENT/.pl image/png
1314022249.126 66 127.0.0.1 TCP_MISS/200 460 GET
http://.pl/gen.cgi? - FIRST_UP_PARENT/.pl image/png
1314022249.126 67 127.0.0.1 TCP_MISS/200 478 GET
http://.pl/gen.cgi? - FIRST_UP_PARENT/.pl image/png
1314022249.127 63 127.0.0.1 TCP_MISS/200 467 GET
http://.pl/gen.cgi? - FIRST_UP_PARENT/.pl image/png
1314022249.169 61 127.0.0.1 TCP_MISS/200 420 GET
http://.pl/gen.cgi? - FIRST_UP_PARENT/.pl image/png
1314022249.191 63 127.0.0.1 TCP_MISS/200 524 GET
http://.pl/gen.cgi? - FIRST_UP_PARENT/.pl image/png
1314022249.193 66 127.0.0.1 TCP_MISS/200 421 GET
http://.pl/gen.cgi? - FIRST_UP_PARENT/.pl image/png
1314022249.197 69 127.0.0.1 TCP_MISS/200 530 GET
http://.pl/gen.cgi? - FIRST_UP_PARENT/.pl image/png


Could someone tell me how to configure squid to meet my expectations?
I would be very grateful for any help.
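(For what it's worth, refresh_pattern overrides along these lines might
express those two goals. This is a sketch only: it assumes the origin sends
no Cache-Control/Expires headers, that minute-stale charts are acceptable,
and that the responses carry cacheable status codes and validators.)

```
# Cache index.cgi for at most 1 minute (min and max are in minutes):
refresh_pattern -i /index\.cgi  1       0%      1

# gen.cgi URLs embed a timestamp, so a new chart is a new URL; old
# cached copies simply stop being requested. Cache them for up to a day:
refresh_pattern -i /gen\.cgi    1440    100%    1440

refresh_pattern .               0       20%     4320
```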

Best regards,
Mateusz

-- 
[ Mateusz 'Blaster' Buc :: blas...@grex.org :: http://blast3r.info ]
[ There's no place like 127.0.0.1. :: +48 724676983 :: GG: 2937287 ]


[squid-users] Build issues with squid 3.1 and 3.2

2011-08-22 Thread gewehre
On Mac OS X 10.4.11, squid-3.1.14 from July 4 doesn't have this problem, but 
the latest nightly release (squid-3.1.14-20110804) complains about "Bungled 
Default Configuration line 8: miss_access allow all" with the exact same 
config. Line 8 in my squid.conf is "acl to_localhost dst 127.0.0.0/8 
0.0.0.0/32 ::1".

A second issue is that --enable-ssl always has to be left out of my 
./configure in order for any squid-3.1.14 to compile. I had no such issue 
with much older 3.1.x releases, as far as I can recall. I don't use squid 
as a reverse proxy, so it's no big loss.


With squid 3.2, I got the following a few months ago:

cc1plus: warnings being treated as errors
Address.cc: In member function 'bool Ip::Address::IsSlaac() const':
Address.cc:274: warning: comparison is always false due to limited range of 
data type
Address.cc:275: warning: comparison is always false due to limited range of 
data type
make[3]: *** [Address.lo] Error 1
make[2]: *** [all-recursive] Error 1
make[1]: *** [all] Error 2
make: *** [all-recursive] Error 1


With recent 3.2 nightlies, I encounter another issue that I can get around 
by explicitly specifying --disable-auth-negotiate:
...
cc1plus: warnings being treated as errors
negotiate_wrapper.cc: In function 'int main(int, char* const*)':
negotiate_wrapper.cc:113: warning: 'length' may be used uninitialized in this 
function
make[3]: *** [negotiate_wrapper.o] Error 1
make[2]: *** [all-recursive] Error 1
make[1]: *** [all-recursive] Error 1
make: *** [all-recursive] Error 1


I'm then confronted with the old nemesis:
...
cc1plus: warnings being treated as errors
Address.cc: In member function 'bool Ip::Address::IsSlaac() const':
Address.cc:279: warning: comparison is always false due to limited range of 
data type
Address.cc:280: warning: comparison is always false due to limited range of 
data type
 
Some googling suggests this is rather common with certain UNIX source 
code and OS X (x86 little-endian hardware).
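If it helps: squid's configure script has (in the 3.x series, if I recall
correctly — check ./configure --help on your tree) a switch to stop
promoting warnings to errors, which should get past both warning-as-error
failures above:

```
# Sketch: build without -Werror so 'comparison is always false'
# warnings do not abort the build.
./configure --disable-strict-error-checking --disable-auth-negotiate
make
```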


RE: [squid-users] ICAP Bypassing Causing Performance Issues

2011-08-22 Thread Justin Lawler
Thanks Amos - the regex pattern we're using is:

.*some_url_end.html$

We also have many individual domains which we're bypassing:

acl bypassIcapRequest dstdomain 
/apps/cwapps/squid-3/etc/byPass_ICAP_request.properties
icap_access class_1 deny bypassIcapRequest

As time has gone on we've been adding more URLs to this list as well 
(currently up to 39 URLs). This won't be doing regular-expression matching, 
but as time goes on we've seen more and more established connections on the 
ICAP server port, CPU usage going up, and more 'essential ICAP service is 
down' errors in the logs.

Traffic has not changed significantly - in fact has maybe gone down. The only 
change we can really identify is the extra bypassed domains.

Does squid parse the properties file for every hit?

Also, we've only been reconfiguring squid when we update this file. Is this 
enough, or do we need a restart?

Will look into extra debugging now.

Thanks and regards,
Justin


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Monday, August 22, 2011 10:29 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] ICAP Bypassing Causing Performance Issues

On 23/08/11 00:03, Justin Lawler wrote:
 Hi,

 We have had to put in a number of URLs to the squid bypass

 icap_service service_1 reqmod_precache 0 icap://127.0.0.1:1344/reqmod
 icap_class class_1 service_1

 acl bypassIcapRequestURLregex urlpath_regex 
 ./squid-3/etc/byPass_ICAP_request_URLregex.properties
 icap_access class_1 deny bypassIcapRequestURLregex


 When we added 4 regular expressions to this file, we started to see the CPU 
 usage going up quite a bit, and we started to see the number of established 
 connections from squid to ICAP server double or triple.

 Is this a known issue? Is there a better/more efficient way to bypass ICAP 
 than above?

Other than using other ACL types, no.


 Regular expressions were very simple, just matching end of URLs.

a) regex is a bit slow. Did you remember to anchor the ends, manually 
aggregate the patterns, and avoid extended-regex pattern tricks?

b) URLs can be many KB in length. That can make URL regex very CPU 
intensive.

c) routing selection ACLs are run multiple times per request.

You can turn on access control debugging (level 28,3) to see how many 
times those are run and how long each test takes.


 We're running squid 3.0.15 on Solaris 10.



Amos
-- 
Please be using
   Current Stable Squid 2.7.STABLE9 or 3.1.14
   Beta testers wanted for 3.2.0.10