Re: [squid-users] squid-3.1 client POST buffering

2010-11-30 Thread Amos Jeffries

On 30/11/10 04:04, Oguz Yilmaz wrote:

Graham,

This is the best explanation I have seen about ongoing upload problem
in proxy chains where squid is one part of the chain.

On our systems, we use Squid 3.0.STABLE25. In front of Squid, a
DansGuardian (DG) proxy does the filtering. Results of my tests:

1-
DG+Squid 2.6.STABLE12: No problem of uploading
DG+Squid 3.0.STABLE25: Problematic
DG+Squid 3.1.8: Problematic
DG+Squid 3.2.0.2: Problematic

2- We mostly have problems with sites that have web-based upload status
viewers, like Rapidshare, YouTube, etc.

3- If Squid is the only proxy, no problem of uploading.

4- read_ahead_gap 16 KB does not resolve the problem


Dear Developers,

Can you propose some other workarounds for us to test? The problem is
encountered with most active sites of the net, unfortunately.


This sounds like the same problem as 
http://bugs.squid-cache.org/show_bug.cgi?id=3017


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


[squid-users] weighted-round-robin, squid 3.1

2010-11-30 Thread Michael Portz
Hi!

Does the 'cache_peer' option 'weighted-round-robin' only work with ICP enabled, or
does it also work with HTTP replies alone?

Regards
Michael

---
Dr. Michael Portz
IT Service; IT Development

  
NetAachen GmbH
Grüner Weg 100 | 52070 Aachen
Tel: +49 241 91852 28 | Fax: +49 241 91852 99
www.netaachen.de

Managing Director: Dipl.-Ing. Andreas Schneider
Commercial register: Amtsgericht Aachen, HRB 15383

This message (including all attachments) is confidential. It is intended
solely for the addressee named in the address field. If you are not the
intended recipient, please notify us with a short message. Any unauthorized
forwarding or copying is not permitted. Since we cannot guarantee the
authenticity or completeness of the information contained in this message,
we exclude any legal liability for the statements and remarks above.
 





Re: [squid-users] Beta testers wanted for 3.2.0.1 - Changing 'workers' (from 1 to 2) is not supported and ignored

2010-11-30 Thread Amos Jeffries

On 30/11/10 04:33, Ming Fu wrote:

The cache_dir setting in the if..else..endif does not seem to take effect.
squid -z does create the cache subdirectories without issue, but Squid seems
to use the default cache directory as if it didn't see the if statement.

= squid.conf
workers 2
if ${process_number} = 1
cache_dir aufs /usr/local/squid/var/a 500 16 256
else
cache_dir aufs /usr/local/squid/var/b 500 16 256
endif
==

=logs===
2010/11/29 15:23:56 kid1| Starting Squid Cache version 3.2.0.3 for 
amd64-unknown-freebsd8.1...
2010/11/29 15:23:56 kid1| Set Current Directory to /usr/local/squid/var/cache
2010/11/29 15:23:58 kid1| basic/basicScheme.cc(64) done: Basic authentication 
Schema Detached.
2010/11/29 15:23:58 kid3| basic/basicScheme.cc(64) done: Basic authentication 
Schema Detached.


Hmm, "Schema Detached" is not good. It means any change to basic auth will 
need a full restart instead of a reconfigure. That limitation is no longer 
normal in squid-3.2+.



2010/11/29 15:27:04 kid3| Starting Squid Cache version 3.2.0.3 for 
amd64-unknown-freebsd8.1...
2010/11/29 15:27:04 kid2| Starting Squid Cache version 3.2.0.3 for 
amd64-unknown-freebsd8.1...
2010/11/29 15:27:04 kid1| Starting Squid Cache version 3.2.0.3 for 
amd64-unknown-freebsd8.1...
2010/11/29 15:27:04 kid3| Set Current Directory to /usr/local/squid/var/cache
2010/11/29 15:27:04 kid1| Set Current Directory to /usr/local/squid/var/cache
2010/11/29 15:27:04 kid2| Set Current Directory to /usr/local/squid/var/cache


Note how .../var/cache is not in your config at all. It is the default 
home location for core dumps etc.
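
For reference, a minimal squid.conf sketch of pinning that location explicitly, assuming the stock coredump_dir directive is what controls this default (the path shown is only illustrative):

# Working directory used as the default home for core dumps etc.
coredump_dir /usr/local/squid/var/cache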



FATAL: kid2 registration timed out


... something else is causing the worker process not to make contact 
with the coordinator process.
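
If more detail is needed, one possible next step is to raise the debug level for the inter-process code while reproducing the failure. This is only a sketch of the standard debug_options mechanism; the section number 54 (IPC) is an assumption worth checking against the debug-sections list for your build:

# General logging at level 1, extra detail for inter-process communication
debug_options ALL,1 54,5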



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


Re: [squid-users] squid-3.1 client POST buffering

2010-11-30 Thread Oguz Yilmaz
On Tue, Nov 30, 2010 at 10:05 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 30/11/10 04:04, Oguz Yilmaz wrote:

 Graham,

 This is the best explanation I have seen about ongoing upload problem
 in proxy chains where squid is one part of the chain.

 On our systems, we use Squid 3.0.STABLE25. In front of Squid, a
 DansGuardian (DG) proxy does the filtering. Results of my tests:

 1-
 DG+Squid 2.6.STABLE12: No problem of uploading
 DG+Squid 3.0.STABLE25: Problematic
 DG+Squid 3.1.8: Problematic
 DG+Squid 3.2.0.2: Problematic

 2- We mostly have problems with sites that have web-based upload status
 viewers, like Rapidshare, YouTube, etc.

 3- If Squid is the only proxy, no problem of uploading.

 4- read_ahead_gap 16 KB does not resolve the problem


 Dear Developers,

 Can you propose some other workarounds for us to test? The problem is
 encountered with most active sites of the net, unfortunately.

 This sounds like the same problem as
 http://bugs.squid-cache.org/show_bug.cgi?id=3017


In my tests, no NTLM auth was used.
The browser has a proxy configuration targeting DG, and DG uses Squid as
its upstream proxy. If you think it will work, I can try the patch
attached to the bug report.
Upload will stop at about 1 MB, so is it about SQUID_TCP_SO_RCVBUF?



 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3



Re: [squid-users] Squid 2.7stable7 and ESPN3

2010-11-30 Thread Amos Jeffries

On 30/11/10 20:33, Eric Vance wrote:

I have also had this issue.  I was able to get the headers both going
through squid and not.  I noticed a few key differences (but skip to
the end because I found the offending difference).

Request Header without Squid:

**
GET http://broadband.espn.go.com/espn3/auth/userData?format=json&page=index
HTTP/1.1
Host: broadband.espn.go.com
Connection: keep-alive
Referer: http://espn.go.com/espn3/index
Accept: */*
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US)
AppleWebKit/534.7 (KHTML, like Gecko) Chrome/7.0.517.44 Safari/534.7
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
Cookie: SWID=C2085447-B5B5-4B68-9A02-97B9BEB8AC0C; userAB=C;
ESPN360beta=betaSet;
DE2=KioqOyoqKjtyZXNlcnZlZDticm9hZGJhbmQ7NTs0OzQ7MDswMDAuMDAwOzAwMDAuMDAwOzk5OTs1MzgzOzM0MDM7MDsqKjs=;
CRBLM=CBLM-001:; DS=PzswOz87; CRBLM_LAST_UPDATE=1291054796;
s_vi=[CS]v1|2679F7630516263D-6198C0083F11[CE];
espnAffiliate=invalid;

s_pers=%20s_c24%3D1291061231070%7C1385669231070%3B%20s_c24_s%3DLess%2520than%25201%2520day%7C1291063031070%3B%20s_gpv_pn%3Despn3%253Ainvalid%253Aindex%7C1291063031109%3B
***

Request header after Squid:

***
GET /espn3/auth/userData?format=json&page=index
HTTP/1.0
Host: broadband.espn.go.com
Referer: http://espn.go.com/espn3/index
Accept: */*
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US)
AppleWebKit/534.7 (KHTML, like Gecko) Chrome/7.0.5
   17.44 Safari/534.7
Accept-Encoding: identity
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
Cookie: SWID=C2085447-B5B5-4B68-9A02-97B9BEB8AC0C; userAB=C;
ESPN360beta=betaSet;
DE2=KioqOyoqKjtyZXNlcnZlZDticm9hZGJhbmQ7NTs0OzQ7MDswMDAuMDAwOzAwMDAuMDAwOzk5OTs1MzgzOzM0MDM7MDsqKjs=;
CRBLM=CBLM-001:; DS=PzswOz87; CRBLM_LAST_UPDATE=1291054796;
s_vi=[CS]v1|2679F7630516263D-6198C0083F11[CE];
espnAffiliate=invalid;
broadbandAccess=espn3-false%2Cnetworks-false;
s_pers=%20s_c24%3D1291092114183%7C1385700114183%3B%20s_c24_s%3DLess%2520than%25201%2520day%7C1291093914183%3B%20s_gpv_pn%3Despn3%253Ainvalid%253Aindex%7C1291093914212%3B;
lang=en; 
s_sess=%20s_cc%3Dtrue%3B%20s_omni_lid%3D%3B%20s_sq%3D%3B%20s_ppv%3D16%3B;
PREF=f2=800;
Via: 1.0 ph:3128 (squid/2.7.STABLE9)
X-Forwarded-For: 127.0.0.1
Cache-Control: max-age=259200
Connection: keep-alive
***

I manually issued this request changing one thing at a time until I
found the breaking item.  When I removed this line from the Squid
version the response came back without the redirect (and I assume
would then work correctly):

X-Forwarded-For: 127.0.0.1



D**m, suspected as much when that IP came back in your broken reply 
javascript.




So, I guess the questions are:
1.  Is this line necessary?


Yes and no.
Yes, ... because XFF is important for tracking network bugs down and 
informing the origin client IP. As you noticed this is one site which 
uses it to produce per-user content display.


No, because 127.0.0.1 is a useless thing to be sending in there as the 
first entry. It is an artifact of the way your particular requests went 
to Squid.



2.  Can it safely be removed?


Yes. If you are willing as the squid admin to shoulder all the blame for 
any attacks made through your proxy.



3.  How can it be removed?


In 2.7 configure: forwarded_for off.

There is something else you can do now that you know what and where the 
problem is. You can pass this same report on to the webmaster of that 
site. They are trusting the XFF trail too much.
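
For reference, the change being described is a single squid.conf directive (squid-2.7 syntax, as discussed above):

# Stop Squid appending the client IP to the X-Forwarded-For header
forwarded_for off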


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


Re: [squid-users] weighted-round-robin, squid 3.1

2010-11-30 Thread Amos Jeffries

On 30/11/10 21:17, Michael Portz wrote:

Hi!

Does the 'cache_peer' option 'weighted-round-robin' only work with ICP enabled, or
does it also work with HTTP replies alone?


All contacts with the peer have their RTT measured and added to the weighting. 
It just works less well with HTTP-only than with ICP/HTCP. 
Adding ICMP measurements with the pinger on top of either is even better.
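
For reference, a sketch of the kind of cache_peer lines this applies to; the hostnames, ports and the ICP-vs-no-query choice are purely illustrative:

# RTT-weighted round-robin between two parents; the first uses ICP (port 3130)
cache_peer parent1.example.com parent 3128 3130 weighted-round-robin
# HTTP-only variant: no ICP queries, so only HTTP reply timings feed the weighting
cache_peer parent2.example.com parent 3128 0 no-query weighted-round-robin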


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


Re: [squid-users] squid-3.1 client POST buffering

2010-11-30 Thread Amos Jeffries

On 30/11/10 21:23, Oguz Yilmaz wrote:

On Tue, Nov 30, 2010 at 10:05 AM, Amos Jeffries squ...@treenet.co.nz wrote:

On 30/11/10 04:04, Oguz Yilmaz wrote:


Graham,

This is the best explanation I have seen about ongoing upload problem
in proxy chains where squid is one part of the chain.

On our systems, we use Squid 3.0.STABLE25. In front of Squid, a
DansGuardian (DG) proxy does the filtering. Results of my tests:

1-
DG+Squid 2.6.STABLE12: No problem of uploading
DG+Squid 3.0.STABLE25: Problematic
DG+Squid 3.1.8: Problematic
DG+Squid 3.2.0.2: Problematic

2- We mostly have problems with sites that have web-based upload status
viewers, like Rapidshare, YouTube, etc.

3- If Squid is the only proxy, no problem of uploading.

4- read_ahead_gap 16 KB does not resolve the problem


Dear Developers,

Can you propose some other workarounds for us to test? The problem is
encountered with most active sites of the net, unfortunately.


This sounds like the same problem as
http://bugs.squid-cache.org/show_bug.cgi?id=3017




Sorry, crossing bug reports in my head.

This one is closer to the suck-everything behaviour you have seen:
http://bugs.squid-cache.org/show_bug.cgi?id=2910

both have an outside chance of working.



In my tests, no NTLM auth was used.
The browser has a proxy configuration targeting DG, and DG uses Squid as
its upstream proxy. If you think it will work, I can try the patch
attached to the bug report.
Upload will stop at about 1 MB, so is it about SQUID_TCP_SO_RCVBUF?


AIUI, Squid is supposed to read SQUID_TCP_SO_RCVBUF + read_ahead_gap and 
wait for some of that to pass on to the server before grabbing some more.
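
For reference, the tunable half of that pairing is a single squid.conf directive; SQUID_TCP_SO_RCVBUF itself is a compile-time constant, not a directive, and the value below is just the one already tried in this thread:

# Cap how far ahead of the server Squid will buffer body data
read_ahead_gap 16 KB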


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


[squid-users] Plz help me ............

2010-11-30 Thread Ajith P.T
Sir,
  I have some requirements for the Squid configuration:
1. Can I give a time quota (not a time range) to each user per day? (user1
can use the internet for 30 minutes a day, and can spend those 30 minutes
at any time of the day.)
2. Can we give a download quota to each user per day? (user1 can download
20 MB per day.)

Please help me.

-- 
With Best Regards,



Ajith P.T
Project Manager
ES Consultants L.L.C,
P.O.Box 46548, Code 640016,Fahaheel, Kuwait.
email- aj...@ensconsultants
Phone +965 9921,99094633
www.ensconsultants.com || www.enaskw.com
ENAS General Trading & Contracting Co.


Re: [squid-users] Plz help me ............

2010-11-30 Thread Amos Jeffries

On 30/11/10 22:14, Ajith P.T wrote:

Sir,
   I have some requirements for the Squid configuration:
1. Can I give a time quota (not a time range) to each user per day? (user1
can use the internet for 30 minutes a day, and can spend those 30 minutes
at any time of the day.)
2. Can we give a download quota to each user per day? (user1 can download
20 MB per day.)

Please help me.



You asked the same things yesterday, requesting this for 3.0 on Windows. 
Not getting replies around here means nobody has a good answer.


Because Squid does not do quotas that way. HTTP is stateless and there 
are few ways for Squid to identify two requests as belonging to the same 
user, and none of them is completely reliable.
 What Squid offers instead is delay pools and/or QoS packet marking, 
which can set a per-second speed limit on the clients.
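
For reference, a minimal delay-pools sketch of that per-second limiting, using a class 2 pool; the byte rates are purely illustrative:

# One class-2 pool: no aggregate cap, roughly 16 KB/s per client IP after a 32 KB burst
delay_pools 1
delay_class 1 2
delay_parameters 1 -1/-1 16000/32000
delay_access 1 allow all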



To get anything close to absolute traffic limits (quotas) you will have 
to find or write your own log daemon helper or log processor to calculate 
the traffic usage, then plug that into a custom external ACL helper to 
deny requests once the limit is passed.

 NP: there is no way to stop existing transactions once they have begun.
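
For reference, a sketch of how such a helper could be wired in; the helper path and name are hypothetical, and the helper itself (which would answer OK for users it considers over quota, based on usage tallied from the access log) is left to the reader:

external_acl_type quota_check ttl=60 %LOGIN /usr/local/bin/quota_check
acl over_quota external quota_check
http_access deny over_quota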

There is likely third-party code floating around to do the byte quota.

The time quota is unusual: you will *only* be able to record the 
download times of individual objects. For most objects these are 
measured in milliseconds. I predict that guessing whether the client 
was connected between two requests will get you into arguments with 
some of your users.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


Re: [squid-users] Plz help me ............

2010-11-30 Thread Luis Daniel Lucio Quiroz
On Tuesday, 30 November 2010 at 03:14:54, Ajith P.T wrote:
 Sir,
   I have some requirements for the Squid configuration:
 1. Can I give a time quota (not a time range) to each user per day? (user1
 can use the internet for 30 minutes a day, and can spend those 30 minutes
 at any time of the day.)
This is more a RADIUS task than a Squid one.

 2. Can we give a download quota to each user per day? (user1 can download
 20 MB per day.)
Again, RADIUS.

 
 Please help me.


Re: [squid-users] Squid 2.7stable7 and ESPN3

2010-11-30 Thread Eric Vance
Thanks Amos!

I confirmed that adding the config option forwarded_for off does fix espn3.

Can you please give me a little more detail of the risk posed by turning it off?
If it was just espn3 I would try to get them to fix it but I wonder
how many other sites have this same issue.

Thanks!

Eric

On Tue, Nov 30, 2010 at 1:33 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 30/11/10 20:33, Eric Vance wrote:

 I have also had this issue.  I was able to get the headers both going
 through squid and not.  I noticed a few key differences (but skip to
 the end because I found the offending difference).

 Request Header without Squid:


 **
 GET
 http://broadband.espn.go.com/espn3/auth/userData?format=json&page=index
 HTTP/1.1
 Host: broadband.espn.go.com
 Connection: keep-alive
 Referer: http://espn.go.com/espn3/index
 Accept: */*
 User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US)
 AppleWebKit/534.7 (KHTML, like Gecko) Chrome/7.0.517.44 Safari/534.7
 Accept-Encoding: gzip,deflate,sdch
 Accept-Language: en-US,en;q=0.8
 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
 Cookie: SWID=C2085447-B5B5-4B68-9A02-97B9BEB8AC0C; userAB=C;
 ESPN360beta=betaSet;

 DE2=KioqOyoqKjtyZXNlcnZlZDticm9hZGJhbmQ7NTs0OzQ7MDswMDAuMDAwOzAwMDAuMDAwOzk5OTs1MzgzOzM0MDM7MDsqKjs=;
 CRBLM=CBLM-001:; DS=PzswOz87; CRBLM_LAST_UPDATE=1291054796;
 s_vi=[CS]v1|2679F7630516263D-6198C0083F11[CE];
 espnAffiliate=invalid;


 s_pers=%20s_c24%3D1291061231070%7C1385669231070%3B%20s_c24_s%3DLess%2520than%25201%2520day%7C1291063031070%3B%20s_gpv_pn%3Despn3%253Ainvalid%253Aindex%7C1291063031109%3B

 ***

 Request header after Squid:


 ***
 GET /espn3/auth/userData?format=json&page=index
 HTTP/1.0
 Host: broadband.espn.go.com
 Referer: http://espn.go.com/espn3/index
 Accept: */*
 User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US)
 AppleWebKit/534.7 (KHTML, like Gecko) Chrome/7.0.5
   17.44 Safari/534.7
 Accept-Encoding: identity
 Accept-Language: en-US,en;q=0.8
 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
 Cookie: SWID=C2085447-B5B5-4B68-9A02-97B9BEB8AC0C; userAB=C;
 ESPN360beta=betaSet;

 DE2=KioqOyoqKjtyZXNlcnZlZDticm9hZGJhbmQ7NTs0OzQ7MDswMDAuMDAwOzAwMDAuMDAwOzk5OTs1MzgzOzM0MDM7MDsqKjs=;
 CRBLM=CBLM-001:; DS=PzswOz87; CRBLM_LAST_UPDATE=1291054796;
 s_vi=[CS]v1|2679F7630516263D-6198C0083F11[CE];
 espnAffiliate=invalid;
 broadbandAccess=espn3-false%2Cnetworks-false;

 s_pers=%20s_c24%3D1291092114183%7C1385700114183%3B%20s_c24_s%3DLess%2520than%25201%2520day%7C1291093914183%3B%20s_gpv_pn%3Despn3%253Ainvalid%253Aindex%7C1291093914212%3B;
 lang=en;
 s_sess=%20s_cc%3Dtrue%3B%20s_omni_lid%3D%3B%20s_sq%3D%3B%20s_ppv%3D16%3B;
 PREF=f2=800;
 Via: 1.0 ph:3128 (squid/2.7.STABLE9)
 X-Forwarded-For: 127.0.0.1
 Cache-Control: max-age=259200
 Connection: keep-alive

 ***

 I manually issued this request changing one thing at a time until I
 found the breaking item.  When I removed this line from the Squid
 version the response came back without the redirect (and I assume
 would then work correctly):

 X-Forwarded-For: 127.0.0.1


 D**m, suspected as much when that IP came back in your broken reply
 javascript.


 So, I guess the questions are:
 1.  Is this line necessary?

 Yes and no.
 Yes, ... because XFF is important for tracking network bugs down and
 informing the origin client IP. As you noticed this is one site which uses
 it to produce per-user content display.

 No, because 127.0.0.1 is a useless thing to be sending in there as the first
 entry. It is an artifact of the way your particular requests went to Squid.

 2.  Can it safely be removed?

 Yes. If you are willing as the squid admin to shoulder all the blame for any
 attacks made through your proxy.

 3.  How can it be removed?

 In 2.7 configure: forwarded_for off.

 There is something else you can do now that you know what and where the
 problem is. You can pass this same report on to the webmaster of that site.
 They are trusting the XFF trail too much.

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3



Re: [squid-users] squid-3.1 client POST buffering

2010-11-30 Thread Oguz Yilmaz
--
Oguz YILMAZ



On Tue, Nov 30, 2010 at 10:46 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 30/11/10 21:23, Oguz Yilmaz wrote:

 On Tue, Nov 30, 2010 at 10:05 AM, Amos Jeffries squ...@treenet.co.nz
  wrote:

 On 30/11/10 04:04, Oguz Yilmaz wrote:

 Graham,

 This is the best explanation I have seen about ongoing upload problem
 in proxy chains where squid is one part of the chain.

 On our systems, we use Squid 3.0.STABLE25. In front of Squid, a
 DansGuardian (DG) proxy does the filtering. Results of my tests:

 1-
 DG+Squid 2.6.STABLE12: No problem of uploading
 DG+Squid 3.0.STABLE25: Problematic
 DG+Squid 3.1.8: Problematic
 DG+Squid 3.2.0.2: Problematic

 2- We mostly have problems with sites that have web-based upload status
 viewers, like Rapidshare, YouTube, etc.

 3- If Squid is the only proxy, no problem of uploading.

 4- read_ahead_gap 16 KB does not resolve the problem


 Dear Developers,

 Can you propose some other workarounds for us to test? The problem is
 encountered with most active sites of the net, unfortunately.

 This sounds like the same problem as
 http://bugs.squid-cache.org/show_bug.cgi?id=3017


 Sorry, crossing bug reports in my head.

 This one is closer to the suck-everything behaviour you have seen:
 http://bugs.squid-cache.org/show_bug.cgi?id=2910

 both have an outside chance of working.


I have tried the proposed patch (BodyPipe.h). However, it does not work.
Note: My system is Linux-based.


 In my tests, no NTLM auth was used.
 The browser has a proxy configuration targeting DG, and DG uses Squid as
 its upstream proxy. If you think it will work, I can try the patch
 attached to the bug report.
 Upload will stop at about 1 MB, so is it about SQUID_TCP_SO_RCVBUF?

 AIUI, Squid is supposed to read SQUID_TCP_SO_RCVBUF + read_ahead_gap and
 wait for some of that to pass on to the server before grabbing some more.

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3



Re: [squid-users] squid-3.1 client POST buffering

2010-11-30 Thread Graham Keeling
On Tue, Nov 30, 2010 at 09:46:47PM +1300, Amos Jeffries wrote:
 On 30/11/10 21:23, Oguz Yilmaz wrote:
 On Tue, Nov 30, 2010 at 10:05 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 30/11/10 04:04, Oguz Yilmaz wrote:

 Graham,

 This is the best explanation I have seen about ongoing upload problem
 in proxy chains where squid is one part of the chain.

 On our systems, we use Squid 3.0.STABLE25. In front of Squid, a
 DansGuardian (DG) proxy does the filtering. Results of my tests:

 1-
 DG+Squid 2.6.STABLE12: No problem of uploading
 DG+Squid 3.0.STABLE25: Problematic
 DG+Squid 3.1.8: Problematic
 DG+Squid 3.2.0.2: Problematic

 2- We mostly have problems with sites that have web-based upload status
 viewers, like Rapidshare, YouTube, etc.

 3- If Squid is the only proxy, no problem of uploading.

 4- read_ahead_gap 16 KB does not resolve the problem


 Dear Developers,

 Can you propose some other workarounds for us to test? The problem is
 encountered with most active sites of the net, unfortunately.

 This sounds like the same problem as
 http://bugs.squid-cache.org/show_bug.cgi?id=3017


 Sorry, crossing bug reports in my head.

 This one is closer to the suck-everything behaviour you have seen:
 http://bugs.squid-cache.org/show_bug.cgi?id=2910

 both have an outside chance of working.

I have tried both suggestions, and neither of them makes a difference
(changes to BodyPipe.h and client_side_request.cc).

I am keen to try any further suggestions, or provide you with debug output,
or whatever you like. 

This problem is extremely easy for me to reproduce.
It happens without any authentication, and with squid as the only proxy between
my browser and the website.

Shall I enter a proper bug report?



Re: [squid-users] squid-3.1 client POST buffering

2010-11-30 Thread Graham Keeling
On Tue, Nov 30, 2010 at 11:31:45AM +, Graham Keeling wrote:
 On Tue, Nov 30, 2010 at 09:46:47PM +1300, Amos Jeffries wrote:
  On 30/11/10 21:23, Oguz Yilmaz wrote:
  On Tue, Nov 30, 2010 at 10:05 AM, Amos Jeffries squ...@treenet.co.nz
  wrote:
  On 30/11/10 04:04, Oguz Yilmaz wrote:
 
  Graham,
 
  This is the best explanation I have seen about ongoing upload problem
  in proxy chains where squid is one part of the chain.
 
  On our systems, we use Squid 3.0.STABLE25. In front of Squid, a
  DansGuardian (DG) proxy does the filtering. Results of my tests:
 
  1-
  DG+Squid 2.6.STABLE12: No problem of uploading
  DG+Squid 3.0.STABLE25: Problematic
  DG+Squid 3.1.8: Problematic
  DG+Squid 3.2.0.2: Problematic
 
  2- We mostly have problems with sites that have web-based upload status
  viewers, like Rapidshare, YouTube, etc.
 
  3- If Squid is the only proxy, no problem of uploading.
 
  4- read_ahead_gap 16 KB does not resolve the problem
 
 
  Dear Developers,
 
  Can you propose some other workarounds for us to test? The problem is
  encountered with most active sites of the net, unfortunately.
 
  This sounds like the same problem as
  http://bugs.squid-cache.org/show_bug.cgi?id=3017
 
 
  Sorry, crossing bug reports in my head.
 
  This one is closer to the suck-everything behaviour you have seen:
  http://bugs.squid-cache.org/show_bug.cgi?id=2910
 
  both have an outside chance of working.
 
 I have tried both suggestions, and neither of them makes a difference
 (changes to BodyPipe.h and client_side_request.cc).
 
 I am keen to try any further suggestions, or provide you with debug output,
 or whatever you like. 
 
 This problem is extremely easy for me to reproduce.
 It happens without any authentication, and with squid as the only proxy 
 between
 my browser and the website.
 
 Shall I enter a proper bug report?

To demonstrate the problem happening, I turned on 'debug_options 33,2' and
re-ran my test. This shows that ConnStateData::makeSpaceAvailable() in
client_side.cc will eat memory forever.
I can turn on more debug if needed, but others should be able to reproduce
this easily.

2010/11/30 11:57:17.482| growing request buffer: notYetUsed=4095 size=8192
2010/11/30 11:57:17.483| growing request buffer: notYetUsed=8191 size=16384
2010/11/30 11:57:17.483| growing request buffer: notYetUsed=16383 size=32768
2010/11/30 11:57:17.484| growing request buffer: notYetUsed=32767 size=65536
2010/11/30 11:57:17.486| growing request buffer: notYetUsed=65535 size=131072
2010/11/30 11:57:17.488| growing request buffer: notYetUsed=131071 size=262144
2010/11/30 11:57:17.506| growing request buffer: notYetUsed=262143 size=524288
2010/11/30 11:57:17.533| growing request buffer: notYetUsed=524287 size=1048576
2010/11/30 11:57:17.586| growing request buffer: notYetUsed=1048575 size=2097152
2010/11/30 11:57:17.692| growing request buffer: notYetUsed=2097151 size=4194304
2010/11/30 11:57:17.884| growing request buffer: notYetUsed=4194303 size=8388608
2010/11/30 11:57:18.308| growing request buffer: notYetUsed=8388607 size=16777216
2010/11/30 11:57:19.136| growing request buffer: notYetUsed=16777215 size=33554432
2010/11/30 11:57:20.792| growing request buffer: notYetUsed=33554431 size=67108864
2010/11/30 11:57:23.957| growing request buffer: notYetUsed=67108863 size=134217728
2010/11/30 11:57:31.176| growing request buffer: notYetUsed=134217727 size=268435456
2010/11/30 11:57:58.433| growing request buffer: notYetUsed=268435455 size=536870912
...



Re: [squid-users] Plz help me ............

2010-11-30 Thread Nick Cairncross
On 30/11/2010 10:28, Luis Daniel Lucio Quiroz
luis.daniel.lu...@gmail.com wrote:


On Tuesday, 30 November 2010 at 03:14:54, Ajith P.T wrote:
 Sir,
   I have some requirements for the Squid configuration:
 1. Can I give a time quota (not a time range) to each user per day? (user1
 can use the internet for 30 minutes a day, and can spend those 30 minutes
 at any time of the day.)
This is more a RADIUS task than a Squid one.

 2. Can we give a download quota to each user per day? (user1 can download
 20 MB per day.)
Again, RADIUS.

 
 Please help me.

Another suggestion: utilise a provider further up the chain that allows
for ICAP modified headers (that include user/group membership) and apply
quotas at that level.


The information contained in this e-mail is of a confidential nature and is 
intended only for the addressee.  If you are not the intended addressee, any 
disclosure, copying or distribution by you is prohibited and may be unlawful.  
Disclosure to any party other than the addressee, whether inadvertent or 
otherwise, is not intended to waive privilege or confidentiality.  Internet 
communications are not secure and therefore Conde Nast does not accept legal 
responsibility for the contents of this message.  Any views or opinions 
expressed are those of the author.

The Conde Nast Publications Ltd (No. 226900), Vogue House, Hanover Square, 
London W1S 1JU


RE: [squid-users] Beta testers wanted for 3.2.0.1 - Changing 'workers' (from 1 to 2) is not supported and ignored

2010-11-30 Thread Ming Fu

-Original Message-

 2010/11/29 15:27:04 kid3| Set Current Directory to /usr/local/squid/var/cache
 2010/11/29 15:27:04 kid1| Set Current Directory to /usr/local/squid/var/cache
 2010/11/29 15:27:04 kid2| Set Current Directory to /usr/local/squid/var/cache

Note how .../var/cache is not in your config at all. It is the default 
home location for core dumps etc.

 FATAL: kid2 registration timed out

... something else is causing the worker process not to make contact 
with the coordinator process.

Any hint on how I can find out the source of the problem?


Amos
-- 
Please be using
   Current Stable Squid 2.7.STABLE9 or 3.1.9
   Beta testers wanted for 3.2.0.3


[squid-users] squid_ldap_group syntax

2010-11-30 Thread Marcio Garcia
Hello,

I am having some problems building my own syntax with
squid_ldap_group against AD, because I have users in different OUs,
like below:

dc=example,dc=com
|
ou=department1,dc=example,dc=com
|
dn: cn=user 1,ou=department1,dc=example,dc=com
  objectClass=person
  samAccountName=user1
  memberOf=cn=facebook,ou=groups,dc=example,dc=com
  memberOf=cn=youtube,ou=groups,dc=example,dc=com
  
|
ou=department2,dc=example,dc=com
|
dn: cn=user 2,ou=department2,dc=example,dc=com
  objectClass=person
  samAccountName=user2
  memberOf=cn=facebook,ou=groups,dc=example,dc=com
  memberOf=cn=youtube,ou=groups,dc=example,dc=com
  memberOf=cn=linkedin,ou=groups,dc=example,dc=com
  
  |
ou=department3,dc=example,dc=com
|
dn: cn=user 3,ou=department3,dc=example,dc=com
  objectClass=person
  samAccountName=user3
  memberOf=cn=allowed,ou=groups,dc=example,dc=com
  memberOf=cn=denied,ou=groups,dc=example,dc=com
  

This is my squid_ldap_group syntax:

squid_ldap_group -b dc=example,dc=com -D
cn=proxy,cn=adminusers,dc=example,dc=com -w 'test' -f
(&(objectClass=person)(sAMAccountName=%u)(memberOf=cn=%g,ou=groups,dc=example,dc=com))
-h 192.168.4.3 -K

And the tests:

user1 facebook
ERR

user2 linkedin
ERR

user3 allowed
ERR

PS: I am using Kerberos authentication and it works fine, and I
don't know why I am getting the errors above.


Thanks,

Marcio Garcia


[squid-users] refresh_pattern cache dynamic extensions

2010-11-30 Thread Ghassan Gharabli
Hello,

I have several questions to ask about refresh_pattern

Sometimes I see configurations such as:

refresh_pattern -i *.ico$
refresh_pattern -i .(css|js|xml)   #multiple extensions
refresh_pattern \.(css|js|xml)
refresh_pattern \.(css|js|xml)$
refresh_pattern -i .(css|js|xml)$
refresh_pattern .(\?.*)?$

Please can anyone explain the difference between each example? I also
have another question: how do I cache multiple extensions with the same
rule, whether the URL is dynamic or static?

Example:
# I know this rule catches dynamic websites or files, but I don't know how
# to deal with multiple extensions like gif, jpeg, png
refresh_pattern .(\?.*)?$

Why do we put $, ?, or \?.* ?


Thank you


Re: [squid-users] Squid 2.7stable7 and ESPN3

2010-11-30 Thread Jason Howlett
Thanks guys. That fixed the problem. I have submitted a bug report at 
the ESPN site. We'll see if it does any good...


On 11/30/2010 3:31 AM, Eric Vance wrote:

Thanks Amos!

I confirmed that adding the config option forwarded_for off does fix espn3.

Can you please give me a little more detail of the risk posed by turning it off?
If it was just espn3 I would try to get them to fix it but I wonder
how many other sites have this same issue.

Thanks!

Eric

On Tue, Nov 30, 2010 at 1:33 AM, Amos Jeffries squ...@treenet.co.nz wrote:

On 30/11/10 20:33, Eric Vance wrote:

I have also had this issue.  I was able to get the headers both going
through squid and not.  I noticed a few key differences (but skip to
the end because I found the offending difference).

Request Header without Squid:


**
GET
http://broadband.espn.go.com/espn3/auth/userData?format=json&page=index
HTTP/1.1
Host: broadband.espn.go.com
Connection: keep-alive
Referer: http://espn.go.com/espn3/index
Accept: */*
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US)
AppleWebKit/534.7 (KHTML, like Gecko) Chrome/7.0.517.44 Safari/534.7
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
Cookie: SWID=C2085447-B5B5-4B68-9A02-97B9BEB8AC0C; userAB=C;
ESPN360beta=betaSet;

DE2=KioqOyoqKjtyZXNlcnZlZDticm9hZGJhbmQ7NTs0OzQ7MDswMDAuMDAwOzAwMDAuMDAwOzk5OTs1MzgzOzM0MDM7MDsqKjs=;
CRBLM=CBLM-001:; DS=PzswOz87; CRBLM_LAST_UPDATE=1291054796;
s_vi=[CS]v1|2679F7630516263D-6198C0083F11[CE];
espnAffiliate=invalid;


s_pers=%20s_c24%3D1291061231070%7C1385669231070%3B%20s_c24_s%3DLess%2520than%25201%2520day%7C1291063031070%3B%20s_gpv_pn%3Despn3%253Ainvalid%253Aindex%7C1291063031109%3B

***

Request header after Squid:


***
GET /espn3/auth/userData?format=json&page=index
HTTP/1.0
Host: broadband.espn.go.com
Referer: http://espn.go.com/espn3/index
Accept: */*
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US)
AppleWebKit/534.7 (KHTML, like Gecko) Chrome/7.0.5
   17.44 Safari/534.7
Accept-Encoding: identity
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
Cookie: SWID=C2085447-B5B5-4B68-9A02-97B9BEB8AC0C; userAB=C;
ESPN360beta=betaSet;

DE2=KioqOyoqKjtyZXNlcnZlZDticm9hZGJhbmQ7NTs0OzQ7MDswMDAuMDAwOzAwMDAuMDAwOzk5OTs1MzgzOzM0MDM7MDsqKjs=;
CRBLM=CBLM-001:; DS=PzswOz87; CRBLM_LAST_UPDATE=1291054796;
s_vi=[CS]v1|2679F7630516263D-6198C0083F11[CE];
espnAffiliate=invalid;
broadbandAccess=espn3-false%2Cnetworks-false;

s_pers=%20s_c24%3D1291092114183%7C1385700114183%3B%20s_c24_s%3DLess%2520than%25201%2520day%7C1291093914183%3B%20s_gpv_pn%3Despn3%253Ainvalid%253Aindex%7C1291093914212%3B;
lang=en;
s_sess=%20s_cc%3Dtrue%3B%20s_omni_lid%3D%3B%20s_sq%3D%3B%20s_ppv%3D16%3B;
PREF=f2=800;
Via: 1.0 ph:3128 (squid/2.7.STABLE9)
X-Forwarded-For: 127.0.0.1
Cache-Control: max-age=259200
Connection: keep-alive

***

I manually issued this request changing one thing at a time until I
found the breaking item.  When I removed this line from the Squid
version the response came back without the redirect (and I assume
would then work correctly):

X-Forwarded-For: 127.0.0.1


D**m, suspected as much when that IP came back in your broken reply
javascript.


So, I guess the questions are:
1.  Is this line necessary?

Yes and no.
Yes, ... because XFF is important for tracking network bugs down and
informing the origin client IP. As you noticed this is one site which uses
it to produce per-user content display.

No, because 127.0.0.1 is a useless thing to be sending in there as the first
entry. It is an artifact of the way your particular requests went to Squid.


2.  Can it safely be removed?

Yes. If you are willing as the squid admin to shoulder all the blame for any
attacks made through your proxy.


3.  How can it be removed?

In 2.7 configure: forwarded_for off.

There is something else you can do now that you know what and where the
problem is. You can pass this same report on to the webmaster of that site.
They are trusting the XFF trail too much.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3