Re: [squid-users] reverse proxy problem

2012-04-29 Thread Amos Jeffries

On 28/04/2012 9:38 a.m., Bruce Lysik wrote:

Hi guys,

Running the latest 3.1 in reverse-proxy mode.  3 beefy servers with 96GB of RAM.  
Seeing an odd problem:

Origin ->  customer, equals fast speeds.  (Tested by curling from a desktop to 
origin.)
Origin ->  squid, equals fast speeds.  (Tested by running curl on the squid 
server to the origin.)
Squid cache hit ->  customer, equals fast speed.  (Seen in browser.)
Squid cache miss ->  customer, insanely slow.  36kB/sec, when origin to 
customer direct is like 50MB/sec.

Any ideas on what to look at here?  It's so broken it feels like a 
misconfiguration somewhere.

These are on RHEL6u2, 96GB RAM, a 1.69TB RAID5 ext4 partition for disk cache, and 
4Gb of bonded network interfaces.  Machines are behind a load balancer operating 
in DSR mode.



The usual stuff is:

* disk I/O loading. Squid still cycles most objects through the disks 
when caching, and RAID does horrible things to the write cycle speed.


* forwarding loops. If the traffic is looping in and out and back again, 
the impact on Squid can be huge.


* delay pools not being bypassed for the reverse-proxy traffic.

* QoS on the underlying system slowing things down.

* ECN or PMTU brokenness preventing the Squid box from making fast 
jumbo-packet connections.
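For the delay-pools item in the list above, a hedged squid.conf sketch of the usual fix (the ACL name is hypothetical, and pool 1 stands in for whatever pools are actually configured):

```
# Exempt reverse-proxy (accelerator) traffic from delay pool 1.
# "accel_traffic" is a made-up ACL name; port 80 is assumed to be
# the accel http_port in this setup.
acl accel_traffic myport 80
delay_access 1 deny accel_traffic
```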




Amos


Re: [squid-users] X-Forwarded-For Header

2012-04-29 Thread Fran Márquez

On 29/04/2012 3:23,  wrote:
> Sorry for the top post.
> 
> Firstly that website is broken. Xff is a list header and always has
> been.
> 
> Secondly 3.0 is an extremely old Squid version which only supports 
> on/off for the forwarded_for directive. You need to upgrade.
> 
> Amos

Thank you very much, Amos,

I will update my Squid installation as soon as I fix a problem with my
test machine (RHEL + squid + kerberos + msktutil). Meanwhile, I need to
fix this problem on my current proxy server.

I bypassed the website restriction using this:

-
request_header_access X-Forwarded-For deny all
#forwarded_for off
-

With this config, Squid doesn't include the XFF header and the site allows
full access.
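For reference, once upgraded past 3.0, the forwarded_for directive gains finer-grained modes than on/off; a hedged sketch of the 3.1-era options (verify against your release's documentation before relying on them):

```
# forwarded_for on          - append the real client IP (default)
# forwarded_for off         - append "unknown" in place of the client IP
# forwarded_for transparent - pass the header through unmodified
# forwarded_for delete      - strip the X-Forwarded-For header entirely
# forwarded_for truncate    - drop received entries, send only our own
forwarded_for delete
```

With one of these, the request_header_access workaround becomes unnecessary on a current Squid.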

Regards and thank you very much

Fran M.


Re: [squid-users] http to squid to https

2012-04-29 Thread Amos Jeffries

On 28/04/2012 10:37 a.m., Squid Tiz wrote:

I am kinda new to squid.  Been looking over the documentation and I just wanted 
a sanity check on what I am trying to do.

I have a web client that hits my squid server.  The squid connects to an apache 
server via ssl.

Here are the lines of interest from my squid.conf for version 3.1.8

http_port 80 accel defaultsite=123.123.123.123
cache_peer 123.123.123.123 parent 443 0 no-query originserver ssl 
sslflags=DONT_VERIFY_PEER name=apache1

The good news is that this works just as I hoped.  I get a connection.

But I am questioning the DONT_VERIFY_PEER. Don't I want to verify the peer?


Ideally yes, it is better security, but it is up to you whether you need it 
or not.
It means making the CA certificate which signed the peer's certificate 
available to OpenSSL on the Squid box (possibly via squid.conf settings), 
so that verification will not fail.





I simply hacked up a self signed cert on the apache server.  Installed mod_ssl 
and restarted apache and everything started to work on 443.

On the command line for the squid server I can curl the apache box with:

curl --cacert  _the_signed_cert_from_the_apache_node_ https://apache.server

Is there a way with sslcert and sslkey to set up a keypair that will verify?


They are for configuring the *client* certificate and key sent by Squid 
to Apache, for when Apache is doing the verification of its clients.


Squid has an sslcacert= option which does the same as curl's --cacert 
option: validating the Apache certificate(s).
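Putting that together, a hedged sketch of the verifying configuration (the certificate path is an assumption; export the self-signed cert, or its signing CA, from the Apache box):

```
http_port 80 accel defaultsite=123.123.123.123
cache_peer 123.123.123.123 parent 443 0 no-query originserver ssl \
    sslcacert=/etc/squid/apache-ca.pem name=apache1
```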



   Do I need a signed cert?


Yes, TLS requires signing. Your self-signing CA will do however, so long 
as both ends of the connection are in agreement on the CA trust.




I tried to add the cert and key to the cache_peer line in the config.  Squid 
did restart, but no connection.  Why would curl work but not squid?


See above.

Amos


Re: [squid-users] Bridge or transparent ?

2012-04-29 Thread Amos Jeffries

On 30/04/2012 4:13 p.m., Ibrahim Lubis wrote:

What are the pros and cons of using Squid as a bridge vs. transparent?


Squid does not bridge. Squid is a proxy. It only proxies traffic. The 
box underneath Squid can be bridging non-HTTP traffic, but that is not 
Squid.


Also, the HTTP specifications mandate certain alterations to the traffic 
when it is proxied. There is no "transparent"; there is interception in 
various forms, though.
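For reference, interception is configured per listening port; a hedged sketch (the option keyword varies by version, so check your release notes):

```
# Squid 3.1: NAT interception port; requires a matching iptables
# REDIRECT/DNAT rule on the same box. Squid 2.x spelled this mode
# "transparent" instead of "intercept".
http_port 3129 intercept
```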


There is one obvious "Pro", and many hidden "Cons" ...
http://wiki.squid-cache.org/SquidFaq/InterceptionProxy#Concepts_of_Interception_Caching


Amos



[squid-users] Bridge or transparent ?

2012-04-29 Thread Ibrahim Lubis
What are the pros and cons of using Squid as a bridge vs. transparent?

Re: [squid-users] Prevent client spamming

2012-04-29 Thread Amos Jeffries

On 30.04.2012 02:25, Jose-Marcio Martins da Cruz wrote:

squid squid wrote:


Hi,

I have a server running Squid 2.7 stable 15 and facing client 
spamming. The problem happens when a client presses and holds the F5 
button on the PC, and this generates a few hundred requests to 
my Squid proxy.


Please advise how I can prevent or drop the client traffic when the 
above happens.


F5 = refresh ?

Maybe you should begin by understanding what's happening and the kind of
requests being made.



What actual Squid version are you using? The Squid 2.7 series only goes up 
to point-release "2.7.STABLE9". There is not, nor is there ever likely to 
be, a "2.7.STABLE15".



This sounds more like a DoS attack or an infinite retry loop than a 
client sending spam emails through your proxy, or even using a browser 
refresh button.


 * Have you tried it yourself? Which browser permits a DoS attack to be 
performed by one user with a simple button press?


 * Why was the user needing refresh in the first place? (are you 
violating HTTP by force-caching things that should not be cached?)


 * What is your log actually displaying?

 * Are you certain it's one user and not many? What is your knowledge 
based on? (The same TCP connection is not sufficient.)


The answers to all these questions are needed before we can give real help.

Amos



Re: [squid-users] anyone knows some info about youtube "range" parameter?

2012-04-29 Thread Eliezer Croitoru

On 30/04/2012 02:18, Ghassan Gharabli wrote:

Hello Eliezer,

Are you trying to save all the video chunks into the same parts, or to
capture / download the whole video object through curl or whatever? I
don't think that will work, since it will cause an error with the new
YouTube player.

What I have achieved lately is saving the same YouTube video chunks for
360p (itag=34) and 480p (itag=35) without saving the itag, since I want
to save more bandwidth (that's why I only wrote the scripts to you as an
example). This means that if someone wants to watch 480p, he gets the
cached 360p content; that's why I didn't add the itag. But if he chooses
to watch 720p and above, another script catches it, matching itags 37,
22, ... I know that is not the best solution, but at least it works
pretty well with no errors at all, as long as the client can always
fast-forward.

I'm using Squid 2.7.STABLE9 compiled on Windows 64-bit with Perl x64.

Regarding the 302 redirection: I have made sure to update the source
file client_side.c to fix the 302 redirection loop, but really I don't
have to worry about anything now. So what is your target regarding
YouTube's "range" argument, and what is the problem so far?

I have a RAID with 5 HDDs and an average of 2732.6 HTTP requests per
minute, and because I want to save more bandwidth I try to analyze the
HTTP requests so I can keep updating my Perl script to match the most
wanted websites, targeting videos, MP3s, etc.

For a second I thought that maybe someone could help build an
intelligent external helper script that would capture the whole byte
range, though I know that is really hard to do since we are dealing
with byte ranges.

I have only one question that keeps teasing me: how do Squid and Blue
Coat compare? Is it hardware performance, or does Blue Coat just have
more tricks to cache everything and reach a maximum hit ratio?



Ghassan
I was messing with store_url_rewrite and url_rewrite for quite some time, 
just for knowledge.


I have been researching every concept that exists in Squid up to now.
A while back (a year or more) I wrote a store_url_rewrite helper in Java 
and posted the code somewhere.
The reason I used Java was that it's the fastest and simplest of all 
the other languages I know (Ruby, Perl, Python).

I was saving bandwidth using nginx because it was simple to set up.
I don't really like the idea of deceiving my users about the quality, and 
it can also create a very problematic state in which the user gets partial 
HQ content, which will stop him from watching videos.


I don't really have any problems with the ranges.
I just noticed that there are providers in my country that are not using 
the "range" parameter but the "begin" parameter...


I will be very happy to get the client_side.c patch that fixes the 302 loop.

The problem with an external helper is that it's not really needed.
If you need something like that you can use ICAP, but there is still a 
lot of work to be done, so for now it seems that store_url_rewrite is the 
best option.
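A minimal sketch of what such a store_url_rewrite helper could look like, here in Python rather than the thread's Perl (Squid 2.7 helpers can be in any language). The "videoplayback" path and the "id"/"range" parameter names are assumptions taken from this thread; verify them against your own access.log before using anything like this:

```python
# Hypothetical sketch of a Squid 2.7 storeurl_rewrite_program helper.
# It maps videoplayback URLs that differ only by serving host onto one
# canonical internal store key, so the same chunk caches as one object.
# The "range" parameter is kept so distinct chunks stay distinct.
import sys
from urllib.parse import urlsplit, parse_qs

def canonical_store_url(url):
    """Return a canonical store URL for videoplayback requests,
    or the URL unchanged for everything else."""
    parts = urlsplit(url)
    if "videoplayback" not in parts.path:
        return url
    qs = parse_qs(parts.query)
    vid = qs.get("id", [""])[0]
    rng = qs.get("range", [""])[0]
    if not vid:
        return url
    # "videos.youtube.squid.internal" is a made-up internal key namespace.
    return "http://videos.youtube.squid.internal/id=%s/range=%s" % (vid, rng)

def main():
    # Squid writes one request per line: the URL followed by other fields.
    for line in sys.stdin:
        fields = line.split()
        if not fields:
            continue
        sys.stdout.write(canonical_store_url(fields[0]) + "\n")
        sys.stdout.flush()  # helpers must answer unbuffered

if __name__ == "__main__":
    main()
```

squid.conf would point at it with something like `storeurl_rewrite_program /usr/local/bin/store_rewrite.py` (a hypothetical path), ideally restricted with a `storeurl_access` ACL so only videoplayback URLs pass through the helper.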


Blue Coat has an option to relate objects using ETag and other object 
parameters as well, which makes it a very robust caching system.


nginx does the job just fine for YouTube videos, but it seems like some 
headers are missing and should be added.


By the way,
what video/MP3 sites are you caching with your scripts?


Eliezer





On Mon, Apr 30, 2012 at 1:29 AM, Eliezer Croitoru  wrote:

On 24/04/2012 21:02, Eliezer Croitoru wrote:


As some people have been asking me recently about YouTube caching, I have
checked again and found that YouTube changed their video URIs and added an
argument called "range" that is managed by the YouTube player.
The original URL/URI doesn't include range, but the YouTube player uses
this argument to save bandwidth.

I can implement the caching with ranges on nginx, but I don't yet know
how range works.
It could be based on user bandwidth or on a "fixed" chunk size.

If someone is up for the mission of analyzing it a bit more, so that the
"range" cache can be implemented, I will be happy to get some help with it.

Thanks,
Eliezer



As for now, "minimum_object_size 512 bytes" won't do the trick for the 302
redirection on Squid 2.7, because the 302 response is 963 bytes in size.
So I have used:
minimum_object_size 1024 bytes
just to make sure it will work. Also, this is a dedicated YouTube video
server, so it's fine with this limit.

Regards,

Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il



--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] anyone knows some info about youtube "range" parameter?

2012-04-29 Thread Ghassan Gharabli
Hello Eliezer,

Are you trying to save all the video chunks into the same parts, or to
capture / download the whole video object through curl or whatever? I
don't think that will work, since it will cause an error with the new
YouTube player.

What I have achieved lately is saving the same YouTube video chunks for
360p (itag=34) and 480p (itag=35) without saving the itag, since I want
to save more bandwidth (that's why I only wrote the scripts to you as an
example). This means that if someone wants to watch 480p, he gets the
cached 360p content; that's why I didn't add the itag. But if he chooses
to watch 720p and above, another script catches it, matching itags 37,
22, ... I know that is not the best solution, but at least it works
pretty well with no errors at all, as long as the client can always
fast-forward.

I'm using Squid 2.7.STABLE9 compiled on Windows 64-bit with Perl x64.

Regarding the 302 redirection: I have made sure to update the source
file client_side.c to fix the 302 redirection loop, but really I don't
have to worry about anything now. So what is your target regarding
YouTube's "range" argument, and what is the problem so far?

I have a RAID with 5 HDDs and an average of 2732.6 HTTP requests per
minute, and because I want to save more bandwidth I try to analyze the
HTTP requests so I can keep updating my Perl script to match the most
wanted websites, targeting videos, MP3s, etc.

For a second I thought that maybe someone could help build an
intelligent external helper script that would capture the whole byte
range, though I know that is really hard to do since we are dealing
with byte ranges.

I have only one question that keeps teasing me: how do Squid and Blue
Coat compare? Is it hardware performance, or does Blue Coat just have
more tricks to cache everything and reach a maximum hit ratio?



Ghassan



On Mon, Apr 30, 2012 at 1:29 AM, Eliezer Croitoru  wrote:
> On 24/04/2012 21:02, Eliezer Croitoru wrote:
>>
>> As some people have been asking me recently about YouTube caching, I have
>> checked again and found that YouTube changed their video URIs and added an
>> argument called "range" that is managed by the YouTube player.
>> The original URL/URI doesn't include range, but the YouTube player uses
>> this argument to save bandwidth.
>>
>> I can implement the caching with ranges on nginx, but I don't yet know
>> how range works.
>> It could be based on user bandwidth or on a "fixed" chunk size.
>>
>> If someone is up for the mission of analyzing it a bit more, so that the
>> "range" cache can be implemented, I will be happy to get some help with it.
>>
>> Thanks,
>> Eliezer
>>
>>
> As for now, "minimum_object_size 512 bytes" won't do the trick for the 302
> redirection on Squid 2.7, because the 302 response is 963 bytes in size.
> So I have used:
> minimum_object_size 1024 bytes
> just to make sure it will work. Also, this is a dedicated YouTube video
> server, so it's fine with this limit.
>
> Regards,
>
> Eliezer
>
> --
> Eliezer Croitoru
> https://www1.ngtech.co.il
> IT consulting for Nonprofit organizations
> eliezer  ngtech.co.il


Re: [squid-users] anyone knows some info about youtube "range" parameter?

2012-04-29 Thread Eliezer Croitoru

On 24/04/2012 21:02, Eliezer Croitoru wrote:

As some people have been asking me recently about YouTube caching, I have
checked again and found that YouTube changed their video URIs and added an
argument called "range" that is managed by the YouTube player.
The original URL/URI doesn't include range, but the YouTube player uses
this argument to save bandwidth.

I can implement the caching with ranges on nginx, but I don't yet know
how range works.
It could be based on user bandwidth or on a "fixed" chunk size.

If someone is up for the mission of analyzing it a bit more, so that the
"range" cache can be implemented, I will be happy to get some help with it.

Thanks,
Eliezer


As for now, "minimum_object_size 512 bytes" won't do the trick for the 302 
redirection on Squid 2.7, because the 302 response is 963 bytes in size.

So I have used:
minimum_object_size 1024 bytes
just to make sure it will work. Also, this is a dedicated YouTube video 
server, so it's fine with this limit.


Regards,
Eliezer

--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] slow internet browsing.

2012-04-29 Thread Eliezer Croitoru

On 29/04/2012 08:49, Muhammad Yousuf Khan wrote:

It seems that things are going well with our huge domain list, so now
my next goal is SquidGuard.

The problem with SquidGuard was that I tried configuring it and looked
at many online manuals, but it never activated, so I just started using
a domain list. However, if things don't work, I'll update the status.

Thank you all for your kind help.

Thanks

On Fri, Apr 27, 2012 at 1:09 PM, Muhammad Yousuf Khan  wrote:


I have used squidGuard built from source and it seems to work very well.
It took me a while to understand and configure, but it works perfectly.
Have a look at:
http://www.visolve.com/squid/whitepapers/redirector.php#Configuring_Squid_for_squidGuard
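The hookup on the Squid side is a one-liner; a hedged sketch (both paths are assumptions that depend on where your build installed squidGuard):

```
url_rewrite_program /usr/local/bin/squidGuard -c /usr/local/squidGuard/squidGuard.conf
url_rewrite_children 5
```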



--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer  ngtech.co.il


Re: [squid-users] Prevent client spamming

2012-04-29 Thread Jose-Marcio Martins da Cruz
squid squid wrote:
> 
> Hi,
> 
> I have a server running Squid 2.7 stable 15 and facing client spamming. The 
> problem happens when a client presses and holds the F5 button on the PC, and 
> this generates a few hundred requests to my Squid proxy.
> 
> Please advise how I can prevent or drop the client traffic when the above 
> happens.

F5 = refresh ?

Maybe you should begin by understanding what's happening and the kind of 
requests being made.




[squid-users] Re: Prevent client spamming

2012-04-29 Thread babajaga
I suggest you use some iptables rules to limit the number of connections
from the same IP, for example.
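A hedged sketch of such a rule using the connlimit match (the proxy port 3128 and the threshold of 20 are assumptions to tune for your network):

```
# Refuse new TCP connections to the proxy port when a single source IP
# already holds more than 20 concurrent connections.
iptables -A INPUT -p tcp --dport 3128 --syn \
    -m connlimit --connlimit-above 20 -j REJECT --reject-with tcp-reset
```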


--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Prevent-client-spamming-tp4596348p4596377.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Prevent client spamming

2012-04-29 Thread squid squid

Hi,

I have a server running Squid 2.7 stable 15 and facing client spamming. The 
problem happens when a client presses and holds the F5 button on the PC, and 
this generates a few hundred requests to my Squid proxy.

Please advise how I can prevent or drop the client traffic when the above 
happens.

Thank you.