Re: [squid-users] Fwd: access.log destination server ip

2014-08-29 Thread Manuel Ramírez
Thanks again Antony

2014-08-29 14:26 GMT+02:00 Antony Stone :
> On Friday 29 August 2014 at 14:15:32 (EU time), Manuel Ramírez wrote:
>
>> Ok thank you so much.
>>
>> Will I see this field empty only when the object is in the local cache,
>> or can it happen in other situations, for example if the destination IP
>> is in the IP address cache?
>
> There is no IP address cache.
>
> Basically, you will see a destination server IP address if a destination
> server was contacted.
>
> You will not see an IP address if a destination server was not contacted.
>
> Reasons for not contacting a destination server might include the object being
> in the local cache (as in this example) or the request being passed to another
> proxy (which then decides whether to contact the destination server itself,
> but this proxy definitely doesn't).
>
>> 2014-08-29 13:53 GMT+02:00 Antony Stone :
>> > On Friday 29 August 2014 at 13:43:27 (EU time), Manuel Ramirez Montero
> wrote:
>> >> Hi,
>> >>
>> >> 10.23.11.243 - user[29/Aug/2014:13:23:33 +0200] - "GET
>> >> http://www.cassa.cat/taps/templates/business_pro/images/s5_scroll_arrow.
>> >> png
>> >>
>> >>  HTTP/1.1" 304 324  TCP_IMS_HIT:NONE
>> >>
>> >> 10.23.11.243 - user [29/Aug/2014:13:23:33 +0200] 37.152.88.16 "GET
>> >> http://www.cassa.cat/taps/templates/business_pro/js/s5_columns_equalizer
>> >> .js
>> >>
>> >>  HTTP/1.1" 304 261  TCP_MISS:DIRECT
>> >>
>> >> What is the reason in the first line the destination ip is not
>> >> displayed and in the next line yes?
>> >
>> > The first line shows "TCP_IMS_HIT:NONE" meaning that the required object
>> > was found in the local cache and returned to the client from there, and
>> > no connection was made to the remote server, therefore there is no IP
>> > address to show having been contacted.
>> >
>> > The second line shows "TCP_MISS:DIRECT" meaning that there was no
>> > matching object found in the local cache, and the content was requested
>> > directly from the remote server, therefore the IP address of the server
>> > which was contacted is shown.
>
> --
> What is this talk of "software release"?
> Our software evolves and matures until it is capable of escape, leaving a
> bloody trail of designers and quality assurance people in its wake.
>
>Please reply to the list;
>  please *don't* CC me.


Re: [squid-users] Fwd: access.log destination server ip

2014-08-29 Thread Manuel Ramírez
Ok thank you so much.

Will I see this field empty only when the object is in the local cache,
or can it happen in other situations, for example if the destination IP
is in the IP address cache?

thanks

2014-08-29 13:53 GMT+02:00 Antony Stone :
> On Friday 29 August 2014 at 13:43:27 (EU time), Manuel Ramirez Montero wrote:
>
>> Hi,
>>
>> 10.23.11.243 - user[29/Aug/2014:13:23:33 +0200] - "GET
>> http://www.cassa.cat/taps/templates/business_pro/images/s5_scroll_arrow.png
>>  HTTP/1.1" 304 324  TCP_IMS_HIT:NONE
>>
>> 10.23.11.243 - user [29/Aug/2014:13:23:33 +0200] 37.152.88.16 "GET
>> http://www.cassa.cat/taps/templates/business_pro/js/s5_columns_equalizer.js
>>  HTTP/1.1" 304 261  TCP_MISS:DIRECT
>>
>> What is the reason in the first line the destination ip is not
>> displayed and in the next line yes?
>
> The first line shows "TCP_IMS_HIT:NONE" meaning that the required object was
> found in the local cache and returned to the client from there, and no
> connection was made to the remote server, therefore there is no IP address to
> show having been contacted.
>
> The second line shows "TCP_MISS:DIRECT" meaning that there was no matching
> object found in the local cache, and the content was requested directly from
> the remote server, therefore the IP address of the server which was contacted
> is shown.
>
>
> Regards,
>
>
> Antony.
>
> --
> Ramdisk is not an installation procedure.
>
>Please reply to the list;
>  please *don't* CC me.


[squid-users] Fwd: access.log destination server ip

2014-08-29 Thread Manuel Ramirez Montero
-- Forwarded message --
From: Manuel Ramirez Montero 
Date: 2014-08-29 13:40 GMT+02:00
Subject: access.log destination server ip
To: squid-users@squid-cache.org


Hi,

first of all I would like to apologise for my limited English. I'm a newbie
with Squid and I need to see the destination server IP in the access.log.
My Squid version is 2.7.
I have this logformat directive:

logformat ipdestino %>a %ui %un [%tl] %<A ...

and this is the output in access.log:

10.23.11.243 - user [29/Aug/2014:13:23:33 +0200] - "GET
http://www.cassa.cat/taps/templates/business_pro/images/s5_scroll_arrow.png
 HTTP/1.1" 304 324  TCP_IMS_HIT:NONE

10.23.11.243 - user [29/Aug/2014:13:23:33 +0200] 37.152.88.16 "GET
http://www.cassa.cat/taps/templates/business_pro/js/s5_columns_equalizer.js
 HTTP/1.1" 304 261  TCP_MISS:DIRECT


What is the reason that in the first line the destination IP is not
displayed, while in the second line it is?

Thanks in advance

Regards


[squid-users] Re: Why are we getting bad percentage of hits in Squid3.3 compared with Squid2.6 ?

2013-10-27 Thread Manuel
Hi Eliezer, thank you for your answer

The origin servers are the same in 2.6 and in 3.3 (in both cases Squid
connects to the same remote origin servers) and the squid.conf is exactly
the same except for the very first lines (since acl manager proto
cache_object, etc. are obsolete).

The vast majority of the misses are TCP_MISS/200. I checked several times
the last 200 requests to the homepage of our site (the min/max age is 1
minute, but I also tried with a few more minutes) in the access.log file,
and these were the results:

Squid 2.6:
1st check: 5 misses of 200 requests
2nd check: 0 misses of 200 requests
3rd check: 2 misses of 200 requests

Squid 3.3:
1st check: 59 misses of 200 requests
2nd check: 32 misses of 200 requests
3rd check: 108 misses of 200 requests

*Nothing was touched between each check, just a pause of a few seconds or
minutes.

I was thinking that maybe I should use --enable-http-violations in Squid 3.3
to make use of override-expire and ignore-reload, but I think it is already
enabled by default, since negative_ttl (which also requires
--enable-http-violations) is working properly. Indeed, I reduced some misses
by using negative_ttl in squid.conf, because Squid 3.3 was producing misses
on 404 requests while Squid 2.6 was producing hits without needing to set
that directive.
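(For context, a hedged sketch of the kind of directives being discussed
here; the pattern and timings below are illustrative assumptions, not the
poster's actual config:)

refresh_pattern -i \.(css|js|png|gif|jpg)$ 1 20% 60 override-expire ignore-reload
negative_ttl 1 minutes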



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Why-are-we-getting-bad-percentage-of-hits-in-Squid3-3-compared-with-Squid2-6-tp4662949p4662956.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Why are we getting bad percentage of hits in Squid3.3 compared with Squid2.6 ?

2013-10-27 Thread Manuel
Hi,

We are moving to new more powerful servers with CentOS 6 64bit instead of
CentOS 5, with SSD instead of HDD and with Squid 3.3.9 instead of Squid
2.6.STABLE21. We are running Squid as a reverse proxy.

The performance of Squid 3.3.9 seems to be excellent, tested with around
125087 "clients accessing cache" and 2 "file descriptors in use" on a
single server, but there is one big problem: we are getting a much worse
hit percentage in Squid 3.3.9 than in Squid 2.6 with the same config. With
the same config we are getting around 99% hits in Squid 2.6 and just 55-60%
hits on Squid 3.3.9.

We are using the refresh_pattern options override-expire ignore-reload (as
mentioned before, it is the same squid.conf config on every server, old and
new ones).

Any idea on what might be the problem or any suggestion on how to find it?

Thank you in advance

PS: Since we have lots of concurrent connections requesting the same
addresses, we expect to add the Cache-Control stale-while-revalidate=60
header to the main webpages, which I guess will increase the number of hits
on the servers running Squid 3.3.9, but for the moment the goal is first to
reach a hit percentage closer to Squid 2.6 in the same situation, since
stale-while-revalidate would be ignored by Squid 2.6.
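(For reference, such a response header would presumably look like the
following; the max-age value is an assumption:)

Cache-Control: max-age=60, stale-while-revalidate=60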



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Why-are-we-getting-bad-percentage-of-hits-in-Squid3-3-compared-with-Squid2-6-tp4662949.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Do not refresh the cache if cache_peer is unavailable

2013-09-25 Thread Manuel
Is it possible not to refresh the cache when the newest request to the
cache_peer fails?

What I mean is: suppose that in a reverse proxy I want to refresh the cache
of the base URL (the homepage of the website) around every 60 seconds, but
if the cache peers are temporarily unavailable I do not want to refresh the
cache, because otherwise no content will be shown to the clients.
Is there a method to achieve this?
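(A hedged sketch of one possible direction, assuming a Squid 3.x version
that supports the stale-serving controls; this is an illustration, not a
confirmed answer from the thread:)

# serve a cached object for up to an hour past its expiry if revalidation fails
max_stale 1 hour
refresh_pattern . 0 20% 60 max-stale=3600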

Thank you in advance



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Do-not-refresh-the-cache-if-cache-peer-is-unavailable-tp4662306.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Create acl based on Accept-Language?

2013-08-26 Thread Manuel
OK, so we can only have one cached copy per URL, is that correct? If so, the
only solution I can think of is to at least use that copy for the main
content, and once a user requests the Italian version of the content, always
provide that user with non-cached content and not cache it, correct?

To do so, all we have to do is add:
cache deny itlanguage

right?

What surprises me in any case is that, in the example I gave you, requests
matched by the acl specialuser always fetch the content from the cache_peer
named specialserver, despite there almost always being a cached copy of the
exact URL they request (which is the base URL http://www.xx.com/
).

With regard to the HTTP responses, the reverse proxy is ignoring them
(override-expire ignore-reload ignore-no-cache) and Squid decides what to
cache, what not to cache, etc. through rules based on acl,
refresh_pattern ...



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Create-acl-based-on-Accept-Language-tp4661764p4661773.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Create acl based on Accept-Language?

2013-08-25 Thread Manuel
I just found out that Squid caches the content properly from the right
cache_peer, but once another user requests the same URL the content is
always fetched from the first cached copy, which is always based on the
first request. This problem does not seem to happen with another config that
I have where the acl is based on an IP (not on Accept-Language). In that
other case, even when the user requests a URL that has already been cached,
because the user matches a specific IP the content is fetched from the
right cache_peer.

Example 1 (not working properly):

acl itlanguage req_header Accept-Language ^it

cache_peer 127.0.0.1 parent 81 0 no-query no-digest originserver
name=maincontent
cache_peer_access maincontent allow !itlanguage
cache_peer_access maincontent deny all

cache_peer 127.0.0.1 parent 81 0 no-query no-digest originserver
name=itcontent
cache_peer_access itcontent allow itlanguage
cache_peer_access itcontent deny all

Example 2 (working properly):

cache_peer 127.0.0.1 parent 81 0 no-query no-digest originserver
name=webserver weight=1000 connect-timeout=2
cache_peer 127.0.0.2 parent 81 0 no-query no-digest originserver
name=webserver2 weight=5 connect-timeout=2
acl maindomain dstdomain www.mydomain.com
acl specialuser src 80.80.80.80/32
cache_peer_access webserver allow maindomain !specialuser
cache_peer_access webserver deny all
cache_peer_access webserver2 allow maindomain !specialuser
cache_peer_access webserver2 deny all

cache_peer 127.0.0.3 parent 81 0 no-query no-digest originserver
name=specialserver
cache_peer_access specialserver allow specialuser maindomain
cache_peer_access specialserver deny all


The only differences I can see are:
- The ACL is based on an IP rather than on req_header Accept-Language
- In the second case the cache_peers not only have different names but also
different IPs (127.0.0.1, 127.0.0.2...), while in the first case both
cache_peers have the same address but different names (because the
webserver delivers different content based on the Accept-Language).

Any idea?

Thank you in advance




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Create-acl-based-on-Accept-Language-tp4661764p4661767.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Create acl based on Accept-Language?

2013-08-25 Thread Manuel
Is it possible to create an acl like the following in order to use a
cache_peer per language:

acl itlanguage req_header Accept-Language ^it
cache_peer 127.0.0.1 parent 80 0 no-query no-digest originserver
name=itcontent
cache_peer_access itcontent allow itlanguage 
cache_peer_access itcontent deny all

My test unfortunately does not seem to be working




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Create-acl-based-on-Accept-Language-tp4661764.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Create acl based on Accept-Language?

2013-08-25 Thread Manuel
Is it possible to create an acl like the following in order to use a
cache_peer per language:

acl itlanguage req_header Accept-Language ^it
cache_peer 127.0.0.1 parent 80 0 no-query no-digest originserver
name=itcontent
cache_peer_access servidorredireccionesit allow itlanguage 
cache_peer_access servidorredireccionesit deny all

My test unfortunately does not seem to be working




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Create-acl-based-on-Accept-Language-tp4661763.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: About bottlenecks (Max number of connections, etc.)

2013-02-26 Thread Manuel
After rebuilding the rpm without the with-maxfd=16384 option and installing
it on two very different servers, there are "32768 file descriptors
available" for Squid on each server.

No idea why there are not more file descriptors available. The OS config
seems to be correct with regard to the file descriptors available on both
servers. Here is an example from one of them:

[root@anything ]# cat /proc/sys/fs/file-max
100451
[root@anything ]# ulimit -Hn
65535
[root@anything ]# ulimit -Sn
65535
[root@anything ]# cat /proc/sys/fs/file-max
100451
[root@anything ]# sysctl fs.file-max
fs.file-max = 100451
[root@anything ]# service squid stop
Stopping squid: .  [  OK  ]
[root@anything ]# su - squid
This account is currently not available.
[root@anything ]# service squid start
Starting squid: .  [  OK  ]
[root@anything ]# sysctl fs.file-nr
fs.file-nr = 1152   0   100451

Also, max_filedesc 98304 is set at the end of squid.conf, which clearly
works because when it is removed there are only 1026 file descriptors or so.
ulimit -HSn 98304 is also added to the beginning of /etc/init.d/squid.
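(In other words, roughly the following two pieces, as described above:)

# at the end of squid.conf
max_filedesc 98304

# added near the top of /etc/init.d/squid
ulimit -HSn 98304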

As you can see now there is no with-maxfd=16384 when squid -v is used:
Squid Cache: Version 2.6.STABLE21
configure options:  '--host=x86_64-redhat-linux-gnu'
'--build=x86_64-redhat-linux-gnu' '--target=x86_64-redhat-linux'
'--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin'
'--sbindir=/usr/sbin' '--sysconfdir=/etc' '--includedir=/usr/include'
'--libdir=/usr/lib64' '--libexecdir=/usr/libexec'
'--sharedstatedir=/usr/com' '--mandir=/usr/share/man'
'--infodir=/usr/share/info' '--exec_prefix=/usr' '--bindir=/usr/sbin'
'--libexecdir=/usr/lib64/squid' '--localstatedir=/var'
'--datadir=/usr/share' '--sysconfdir=/etc/squid' '--enable-arp-acl'
'--enable-epoll' '--enable-snmp' '--enable-removal-policies=heap,lru'
'--enable-storeio=aufs,coss,diskd,null,ufs' '--enable-ssl'
'--with-openssl=/usr/kerberos' '--enable-delay-pools'
'--enable-linux-netfilter' '--with-pthreads'
'--enable-ntlm-auth-helpers=SMB,fakeauth'
'--enable-external-acl-helpers=ip_user,ldap_group,unix_group,wbinfo_group'
'--enable-auth=basic,digest,ntlm,negotiate'
'--enable-negotiate-auth-helpers=squid_kerb_auth'
'--enable-digest-auth-helpers=password' '--with-winbind-auth-challenge'
'--enable-useragent-log' '--enable-referer-log'
'--disable-dependency-tracking' '--enable-cachemgr-hostname=localhost'
'--enable-underscores'
'--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SMB,YP,getpwnam,multi-domain-NTLM,SASL'
'--enable-cache-digests' '--enable-ident-lookups'
'--enable-follow-x-forwarded-for' '--enable-wccpv2' '--enable-fd-config'
'build_alias=x86_64-redhat-linux-gnu' 'host_alias=x86_64-redhat-linux-gnu'
'target_alias=x86_64-redhat-linux' 'CFLAGS=-D_FORTIFY_SOURCE=2 -fPIE -Os -g
-pipe -fsigned-char' 'LDFLAGS=-pie'


The only thing I can think of trying is to rebuild the rpm again, but with
with-maxfd=98304 (instead of simply removing with-maxfd=16384). Also, I will
probably soon try a more recent version of Squid (because of the better
performance, and in order to see whether that limit of 32768 disappears or
not).

Any ideas?

Thank you in advance



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/About-bottlenecks-Max-number-of-connections-etc-tp4658650p4658732.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: About bottlenecks (Max number of connections, etc.)

2013-02-24 Thread Manuel
*"You say that in your slow server you are able to achieve twice req/sec than
in your fastest one" I obviously meant the opposite



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/About-bottlenecks-Max-number-of-connections-etc-tp4658650p4658689.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: About bottlenecks (Max number of connections, etc.)

2013-02-24 Thread Manuel
I have noticed that it always starts to fail when exactly 3276 file
descriptors are available and 13108 file descriptors are in use. That is
almost exactly 20% free file descriptors. It still looks to me like there is
a problem of not enough file descriptors (simply because of the
with-maxfd=16384 setting in the Squid installation), but I wonder whether it
is normal that it always sticks at that number (and not at something much
closer to 0 available file descriptors and 16384 in use). If file
descriptors are the problem, I also wonder why I am not getting any error in
the logs, while in the past I did get the "Your cache is running out of
filedescriptors" error. Any ideas?

This was the activity at two different moments and even on different servers
(if I am not mistaken); as you can see, it is stuck at the same number:
Server 1:
Maximum number of file descriptors:   16384
Largest file desc currently in use:   13125
Number of file desc currently in use: 13108
Files queued for open:   0
Available number of file descriptors: 3276
Reserved number of file descriptors:   100
Store Disk files open:  73
IO loop method: epoll

Server 2:
Maximum number of file descriptors:   16384
Largest file desc currently in use:   13238
Number of file desc currently in use: 13108
Files queued for open:   0
Available number of file descriptors: 3276
Reserved number of file descriptors:   100
Store Disk files open: 275
IO loop method: epoll

You say that on your slow server you are able to achieve twice the req/sec
of your fastest one, but in both cases active connections stay at a maximum
of around 20k, is that right? How many file descriptors do you reach at that
point? 2? Are those machines also different in RAM? How important is the
RAM difference for the performance of Squid? Given the bottlenecks you
mentioned, I wonder whether from 2 GB onwards the rest of the RAM is useless
for Squid or not.

Thank you Amos



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/About-bottlenecks-Max-number-of-connections-etc-tp4658650p4658688.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: About bottlenecks (Max number of connections, etc.)

2013-02-23 Thread Manuel
I have found that Squid only has 16384 file descriptors available (even
though ulimit, etc. seem to be configured properly). Squid was installed via
yum (squid-2.6.STABLE21-6.el5) and it seems it was built with
--with-maxfd=16384, so the max_filedesc setting is probably useless. I will
rebuild squid-2.6.STABLE21-6.el5.src.rpm without the --with-maxfd setting
and see if it works.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/About-bottlenecks-Max-number-of-connections-etc-tp4658650p4658666.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] About bottlenecks (Max number of connections, etc.)

2013-02-22 Thread Manuel
Hi,

We are having problems with our Squid servers during traffic peaks. We had
problems in the past and got different errors such as "Your cache is
running out of filedescriptors", syncookies errors, etc., but nowadays we
have optimized that and are not getting those errors anymore. The problem
is that the servers, many of which differ in resources and sit in two
different datacenters (all running Squid as a reverse proxy caching content
from several webservers in other datacenters), all fail to deliver content
during big traffic peaks (HTML pages, JS and CSS files, gzipped and
non-gzipped, as well as images), and we do not see any error at all. The
more connections/requests, the higher the percentage of clients that fail
to get the content. So we are trying to find out where the bottleneck is.
Is Squid unable to deal with more than X connections per second, or is
there some other bottleneck? I think things start to fail when there are
around 20,000 connections to each server.

Thank you in advance



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/About-bottlenecks-Max-number-of-connections-etc-tp4658650.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Reverse proxy; finding out what robots (IPs and user-agents) are sending you most of the requests

2012-10-11 Thread Manuel
I have finally found the solution: the "logformat combined" configuration
directive in Squid.
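(For reference, the stock "combined" format from squid.conf looks roughly
like this, which is what exposes the Referer and User-Agent of each
request:)

logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
access_log /var/log/squid/access.log combined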



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Reverse-proxy-finding-out-what-robots-IPs-and-user-agents-are-sending-you-most-of-the-requests-tp4656972p4656973.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Clarification on delay_pool bandwidth restricting with external acls

2012-06-07 Thread Carlos Manuel Trepeu Pupo
I have similar configuration, the download of the user its not more
than 128 bytes, but the squid consume all the bandwidth.

squid 3.0 STABLE1

On Thu, Jun 7, 2012 at 6:43 AM, Amos Jeffries  wrote:
> On 1/06/2012 2:20 p.m., Cameron Charles wrote:
>>
>> Hi all, I'm just after some clarification on using delay pools with
>> external acls as triggers. I have a good understanding of the
>> components of delay pools and how they operate, but most documentation
>> only mentions users (i.e. IP addresses) as the method of triggering the
>> restriction. I would like to use an external acl to decide whether a
>> request should be limited or not, regardless of any other factors, so
>> that any and all traffic coming through Squid, if it matches this acl, is
>> restricted to say 128 bps. Is this possible, and is the following the
>> correct way to achieve this?
>>
>> acl bandwidth_UNIQUENAME_acl external bandwidth_check_ext_acl_type
>> UNIQUENAME
>> http_reply_access allow bandwidth_UNIQUENAME_acl !all
>> delay_class 1 1
>> delay_parameters 1 128.0/128.0
>> delay_access 1 allow bandwidth_128.0_acl
>> delay_initial_bucket_level 1 100
>>
>> Additionally, I'm inexperienced when it comes to actually testing
>> bandwidth limits. Is it possible to simply download a file that is
>> known to match the ext acl and observe that it doesn't
>> download at over the bandwidth restriction, or is testing this more
>> complicated?
>
>
> Yes that is pretty much it when testing. Max download should be no more than
> 128 bytes per second according to that config.
>
> If that shows a problem the other thing is to set debug_options ALL,5  (or
> the specific delay pools, comm, external ACL and access control levels
> specifically) and watch for external ACL results and delay pool operations
> to see if the issue shows up.
>
> Amos
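(Putting the quoted pieces together, a complete class-1 pool keyed on an
external acl would presumably look like the sketch below; the helper path is
illustrative, and the acl names are taken from the quoted config:)

external_acl_type bandwidth_check_ext_acl_type %URI /usr/local/bin/bandwidth_check
acl bandwidth_UNIQUENAME_acl external bandwidth_check_ext_acl_type UNIQUENAME
delay_pools 1
delay_class 1 1
delay_parameters 1 128/128
delay_access 1 allow bandwidth_UNIQUENAME_acl
delay_access 1 deny all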


[squid-users] delay_pools fail

2012-06-01 Thread Carlos Manuel Trepeu Pupo
I'm using squid 3.0 STABLE1 on ubuntu 8.04, I have this conf:


delay_pools 1

delay_class 1 1
delay_parameters 1 15000/15000
delay_access 1 allow all

This is to limit all traffic to 15 KB/s, but the traffic reaches 45, 60, 25 ...
Why is this happening? And how can I limit it?


Re: [squid-users] limiting connections

2012-05-29 Thread Carlos Manuel Trepeu Pupo
I am bringing this post back to life because I made a few changes. Here it
is, if anyone needs it:

#!/bin/bash
while read line; do

    shortLine=`echo $line | awk -F "/" '{print $NF}'`
    #echo $shortLine >> /home/carlos/guarda &   -> This is for debugging
    result=`squidclient -h 127.0.0.1 mgr:active_requests | grep -c "$shortLine"`

    if [ $result == 1 ]
    then
        echo 'OK'
        #echo 'OK' >> /home/carlos/guarda &   -> This is for debugging
    else
        echo 'ERR'
        #echo 'ERR' >> /home/carlos/guarda &   -> This is for debugging
    fi
done


The main change is to compare the file being downloaded and not the full
URL, to prevent mirrors from being used to increase the number of
simultaneous connections.


On Tue, May 29, 2012 at 9:46 AM, Carlos Manuel Trepeu Pupo
 wrote:
> Here I make this post alive because a make a few changes. Here you
> have, if anyone need it:
>
> #!/bin/bash
> while read line; do
>
>        shortLine=`echo $line | awk -F "/" '{print $NF}'`
>        #echo $shortLine >> /home/carlos/guarda &  -> This is for debugging
>        result=`squidclient -h 127.0.0.1 mgr:active_requests | grep
> -c "$shortLine"`
>
>  if [ $result == 1 ]
>        then
>        echo 'OK'
>        #echo 'OK'>>/home/carlos/guarda &  -> This is for debugging
>  else
>        echo 'ERR'
>        #echo 'ERR'>>/home/carlos/guarda &  -> This is for debugging
>  fi
> done
>
>
> The main change is to compare the file to download and not the URL, to
> avoid the use of mirrors to increase the simultaneous connections.
>
>
> On Thu, Apr 5, 2012 at 12:52 PM, H  wrote:
>> Carlos Manuel Trepeu Pupo wrote:
>>> On Thu, Apr 5, 2012 at 10:32 AM, H  wrote:
>>>> Carlos Manuel Trepeu Pupo wrote:
>>>>>>> what is your purpose? solve bandwidth problems? Connection rate?
>>>>>>> Congestion? I believe that limiting to *one* download is not your real
>>>>>>> intention, because the browser could still open hundreds of regular
>>>>>>> pages and your download limit is nuked and was for nothing ...
>>>>>>>
>>>>>>> what is your operating system?
>>>>>>>
>>>>> I pretend solve bandwidth problems. For the persons who uses download
>>>>> manager or accelerators, just limit them to 1 connection. Otherwise I
>>>>> tried to solve with delay_pool, the packet that I delivery to the
>>>>> client was just like I configured, but with accelerators the upload
>>>>> saturate the channel.
>>>>>
>>>>
>>>>
>>>> since you did not say what OS you're running I can give you only some
>>>> direction, any or most Unix firewall can solve this easy, if you use
>>>> Linux you may like pf with FBSD you should go with ipfw, the latter
>>>> probably is easier to understand but for both you will find zillions of
>>>> examples on the net, look for short setups
>>>
>>> Sorry, I forgot !! Squid is in Debian 6.0 32 bits. My firewall is
>>> Kerio but in Windows, and i'm not so glad to use it !!!
>>>
>>>>
>>>> first you "divide" your bandwidth between your users
>>>
>>> First I search about the dynamic bandwidth with Squid, but squid do
>>> not do this, and them after many search I just find ISA Server with a
>>> third-party plugin, but I prefer linux.
>>>
>>>>
>>>> if you use TPROXy you can devide/limit the bandwidth on the outside
>>>> interface in order to limit only access to the link but if squid has the
>>>> object in cache it might go out as fast as it can
>>>>
>>>> you still can manage the bandwidth pool with delay parameters if you wish
>>>
>>> I tried with delay_pool, but the delay_pool just manage the download
>>> average, and not the upload, I need the both. The last time I tried
>>> with delay_pool the "download accelerator" download at the speed that
>>> I specify, but the proxy consume all channel with the download,
>>> something that I never understand.
>>>
>>>>
>>>>
>>>> I guess you meant downlaod accelerator, not manager, you can then limit
>>>> the connection rate within the bandwidth for each user and each
>>>> protocol, for DL-accelerator you should pay attention to udp packages as
>>>> well, you did not say

Re: [squid-users] Program for realtime monitoring

2012-04-17 Thread Carlos Manuel Trepeu Pupo
On Tue, Apr 17, 2012 at 4:24 AM, Maqsood Ahmad  wrote:
>
> Well, the idea behind this requirement was initially a precaution in case
> something goes wrong with the squid.
> But I have tested it and it works great: a separate web server machine
> running sqstat, pointed at the squid server.
>

You mean on a separate server; I had understood many servers pointing to squid, sorry.

>
>
>
>> Date: Fri, 13 Apr 2012 08:17:37 -0400
>> From: charlie@gmail.com
>> To: squid-users@squid-cache.org
>> Subject: Re: [squid-users] Program for realtime monitoring
>>
>> I don´t understand !!! Why can you need to configure in different machines ??
>>
>> On Fri, Apr 13, 2012 at 8:11 AM, Carlos Manuel Trepeu Pupo
>>  wrote:
>> > I don´t understand !!! Why can you need to configure in different machines 
>> > ??
>> >
>> > On Fri, Apr 13, 2012 at 12:24 AM, Maqsood Ahmad  
>> > wrote:
>> >>
>> >> Hi,
>> >>
>> >>
>> >> Is it possible that we can configure sqstat on separate machine
>> >>
>> >>
>> >>
>> >> Maqsood Ahmad
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>> Date: Thu, 12 Apr 2012 09:49:43 -0400
>> >>> From: charlie@gmail.com
>> >>> To: squid-users@squid-cache.org
>> >>> Subject: Re: [squid-users] Program for realtime monitoring
>> >>>
>> >>> Give 777 permissions  to the folder sqstat !!!
>> >>>
>> >>> 2012/4/12 CyberSoul :
>> >>> >>The first time I make the test, I just install apache without any
>> >>> >>particular configuration.
>> >>> >
>> >>> >>I just make a folder "sqstat" inside WWW, and copy all the content
>> >>> >>there. To call the script you need to type
>> >>> >>http://ip-address-server/sqstat/sqstat.php
>> >>> >
>> >>> >>For the Squid you just need to permit the access from IP where sqstat
>> >>> >>is, to the cache. For sqstat you need to make this config:
>> >>> >>$squidhost[0]="ip_proxy";
>> >>> >>$squidhost[1]="ip_proxy1";
>> >>> >> as many proxies server you have
>> >>> >>$squidport[0]=3128;
>> >>> >>$squidport[1]=3128;
>> >>> >>. one for each proxy server
>> >>> >>$cachemgr_passwd[0]="";
>> >>> >>$cachemgr_passwd[1]="";
>> >>> >>. one for each proxy server and between "" puts the password if
>> >>> >>you have one.
>> >>> >>resolveip[0]=false;
>> >>> >>resolveip[1]=false;
>> >>> >>. one for each proxy server
>> >>> >>$group_by[0]="username";
>> >>> >>$group_by[1]="host";
>> >>> >>... depend if you want to group by ip or username.
>> >>> >
>> >>> >>That's all you need, anything else you need don't be afraid to ask
>> >>> >
>> >>> > Well, try to call script type by
>> >>> > http://ip-address-server/sqstat/sqstat.php
>> >>> > and in browser I see just text of sqstat.php file or again
>> >>> > SqStat Error
>> >>> > Error (13) Permission Denied
>> >>> >
>> >>> > I think trouble in httpd.conf, can you send me your httpd.conf file or 
>> >>> > httpd.conf.default file?
>> >>> >
>> >>> >
>> >>> >
>> >>
>


Re: [squid-users] Program for realtime monitoring

2012-04-13 Thread Carlos Manuel Trepeu Pupo
I don't understand! Why would you need to configure it on different machines?

On Fri, Apr 13, 2012 at 8:11 AM, Carlos Manuel Trepeu Pupo
 wrote:
> I don´t understand !!! Why can you need to configure in different machines ??
>
> On Fri, Apr 13, 2012 at 12:24 AM, Maqsood Ahmad  
> wrote:
>>
>> Hi,
>>
>>
>> Is it possible that we can configure sqstat on separate machine
>>
>>
>>
>> Maqsood Ahmad
>>
>>
>>
>>
>>
>>
>>> Date: Thu, 12 Apr 2012 09:49:43 -0400
>>> From: charlie@gmail.com
>>> To: squid-users@squid-cache.org
>>> Subject: Re: [squid-users] Program for realtime monitoring
>>>
>>> Give 777 permissions  to the folder sqstat !!!
>>>
>>> 2012/4/12 CyberSoul :
>>> >>The first time I make the test, I just install apache without any
>>> >>particular configuration.
>>> >
>>> >>I just make a folder "sqstat" inside WWW, and copy all the content
>>> >>there. To call the script you need to type
>>> >>http://ip-address-server/sqstat/sqstat.php
>>> >
>>> >>For the Squid you just need to permit the access from IP where sqstat
>>> >>is, to the cache. For sqstat you need to make this config:
>>> >>$squidhost[0]="ip_proxy";
>>> >>$squidhost[1]="ip_proxy1";
>>> >> as many proxies server you have
>>> >>$squidport[0]=3128;
>>> >>$squidport[1]=3128;
>>> >>. one for each proxy server
>>> >>$cachemgr_passwd[0]="";
>>> >>$cachemgr_passwd[1]="";
>>> >>. one for each proxy server and between "" puts the password if
>>> >>you have one.
>>> >>resolveip[0]=false;
>>> >>resolveip[1]=false;
>>> >>. one for each proxy server
>>> >>$group_by[0]="username";
>>> >>$group_by[1]="host";
>>> >>... depend if you want to group by ip or username.
>>> >
>>> >>That's all you need, anything else you need don't be afraid to ask
>>> >
>>> > Well, try to call script type by
>>> > http://ip-address-server/sqstat/sqstat.php
>>> > and in browser I see just text of sqstat.php file or again
>>> > SqStat Error
>>> > Error (13) Permission Denied
>>> >
>>> > I think trouble in httpd.conf, can you send me your httpd.conf file or 
>>> > httpd.conf.default file?
>>> >
>>> >
>>> >
>>


Re: [squid-users] Program for realtime monitoring

2012-04-12 Thread Carlos Manuel Trepeu Pupo
Give 777 permissions to the sqstat folder!

2012/4/12 CyberSoul :
>>The first time I make the test, I just install apache without any
>>particular configuration.
>
>>I just make a folder "sqstat" inside WWW, and copy all the content
>>there. To call the script you need to type
>>http://ip-address-server/sqstat/sqstat.php
>
>>For the Squid you just need to permit the access from IP where sqstat
>>is, to the cache. For sqstat you need to make this config:
>>$squidhost[0]="ip_proxy";
>>$squidhost[1]="ip_proxy1";
>> as many proxies server you have
>>$squidport[0]=3128;
>>$squidport[1]=3128;
>>. one for each proxy server
>>$cachemgr_passwd[0]="";
>>$cachemgr_passwd[1]="";
>>. one for each proxy server and between "" puts the password if
>>you have one.
>>resolveip[0]=false;
>>resolveip[1]=false;
>>. one for each proxy server
>>$group_by[0]="username";
>>$group_by[1]="host";
>>... depend if you want to group by ip or username.
>
>>That's all you need, anything else you need don't be afraid to ask
>
> Well, I tried to call the script by typing
> http://ip-address-server/sqstat/sqstat.php
> and in the browser I see just the text of the sqstat.php file, or again
> SqStat Error
> Error (13) Permission Denied
>
> I think the trouble is in httpd.conf; can you send me your httpd.conf file
> or httpd.conf.default file?
>
>
>


Re: [squid-users] Program for realtime monitoring

2012-04-11 Thread Carlos Manuel Trepeu Pupo
I'm using SqStat 1.20 and it works great for me!

2012/4/11 Alex Crow :
> On 11/04/12 06:08, CyberSoul wrote:
>>
>> Hi all, could anyone give me a suggestion for a utility or script for
>> realtime monitoring of Squid, which can meet the following requirements:
>>
>> 1) work through a web interface
>> 2) show current connection speed and username (or IP)
>> 3) show the full path of the file that is currently being downloaded or browsed
>>
>> For example, I open browser and go to the address
>> http://ip-address-squid/program-for-realtime-monitoring
>>
>> and I can see about the following columns (for example)
>>
>> No.Username (or IP)   Current connection speedURL
>>
>> 1user1 (192.168.1.17)  145 KB/s
>>  
>> http://uk.download.nvidia.com/XFree86/Linux-x86/295.33/NVIDIA-Linux-x86-295.33.run
>> 2user2 (192.168.1.53)  89 KB/s
>> http://www.centos.org/modules/tinycontent/index.php?id=2
>>
>> Any ideas?
>>
>>
> Ntop (http://www.ntop.org/products/ntop/) is pretty nice but as Amos said to
> get things like "what file are they downloading" you'll probably have to
> write something to parse the cachemgr data from squid.
>
> Alex
>
>


Re: [squid-users] limiting connections

2012-04-05 Thread Carlos Manuel Trepeu Pupo
On Thu, Apr 5, 2012 at 10:32 AM, H  wrote:
> Carlos Manuel Trepeu Pupo wrote:
>>> > what is your purpose? solve bandwidth problems? Connection rate?
>>> > Congestion? I believe that limiting to *one* download is not your real
>>> > intention, because the browser could still open hundreds of regular
>>> > pages and your download limit is nuked and was for nothing ...
>>> >
>>> > what is your operating system?
>>> >
>> I pretend solve bandwidth problems. For the persons who uses download
>> manager or accelerators, just limit them to 1 connection. Otherwise I
>> tried to solve with delay_pool, the packet that I delivery to the
>> client was just like I configured, but with accelerators the upload
>> saturate the channel.
>>
>
>
> since you did not say what OS you're running I can give you only some
> direction, any or most Unix firewall can solve this easy, if you use
> Linux you may like pf with FBSD you should go with ipfw, the latter
> probably is easier to understand but for both you will find zillions of
> examples on the net, look for short setups

Sorry, I forgot! Squid is on Debian 6.0 32-bit. My firewall is
Kerio, but on Windows, and I'm not so glad to use it!

>
> first you "divide" your bandwidth between your users

First I searched for dynamic bandwidth management with Squid, but Squid does
not do this, and then after much searching I only found ISA Server with a
third-party plugin, but I prefer Linux.

>
> if you use TPROXy you can devide/limit the bandwidth on the outside
> interface in order to limit only access to the link but if squid has the
> object in cache it might go out as fast as it can
>
> you still can manage the bandwidth pool with delay parameters if you wish

I tried delay_pools, but delay_pools only manage the download
rate, not the upload, and I need both. The last time I tried
delay_pools, the "download accelerator" downloaded at the speed that
I specified, but the proxy consumed the whole channel with its own download,
something that I never understood.

>
>
> I guess you meant download accelerator, not manager. You can then limit
> the connection rate within the bandwidth for each user and each
> protocol; for a DL-accelerator you should pay attention to UDP packets as
> well. You did not say how many users and how much bandwidth you have, but
> limit the TCP connections to 25 and UDP to 40 to begin with, then test it
> until you come to something that suits your wish.

I have 128 kbps, and I have no idea about the UDP packets! That's
new to me! Is there any documentation that I can read?

>
> you still could check which DLaccel your people are using and then limit
> or block only this P2P ports which used to be very effective

Even if I do not permit "CONNECT", can the users still use P2P ports?

Thanks for this, it clears up many questions that I had about squid!

>
>
>
>
> --
> H
> +55 11 4249.
>


Re: [squid-users] limiting connections

2012-04-05 Thread Carlos Manuel Trepeu Pupo
On Thu, Apr 5, 2012 at 7:01 AM, H  wrote:
> Carlos Manuel Trepeu Pupo wrote:
>> On Tue, Apr 3, 2012 at 6:35 PM, H  wrote:
>>> Eliezer Croitoru wrote:
>>>> On 03/04/2012 18:30, Carlos Manuel Trepeu Pupo wrote:
>>>>> On Mon, Apr 2, 2012 at 6:43 PM, Amos Jeffries
>>>>> wrote:
>>>>>> On 03.04.2012 02:21, Carlos Manuel Trepeu Pupo wrote:
>>>>>>>
>>>>>>> Thanks a looot !! That's what I'm missing, everything work
>>>>>>> fine now. So this script can use it cause it's already works.
>>>>>>>
>>>>>>> Now, I need to know if there is any way to consult the active request
>>>>>>> in squid that work faster that squidclient 
>>>>>>>
>>>>>>
>>>>>> ACL types are pretty easy to add to the Squid code. I'm happy to
>>>>>> throw an
>>>>>> ACL patch your way for a few $$.
>>>>>>
>>>>>> Which comes back to me earlier still unanswered question about why
>>>>>> you want
>>>>>> to do this very, very strange thing?
>>>>>>
>>>>>> Amos
>>>>>>
>>>>>
>>>>>
>>>>> OK !! Here the complicate and strange explanation:
>>>>>
>>>>> Where I work we have 128 Kbps for the use of almost 80 PCs, a few of
>>>>> them use download accelerators and saturate the channel. I began to
>>>>> use the ACL maxconn but I have still a few problems. 60 of the clients
>>>>> are under an ISA server that I don't administrate, so I can't limit
>>>>> the maxconn to them like the others. Now with this ACL, everyone can
>>>>> download but with only one connection. that's the strange main idea.
>>>> what do you mean by only one connection?
>>>> if it's under one isa server then all of them share the same external IP.
>>>>
>>>
>>> Hi
>>>
>>> I am following this thread with mixed feelings of weirdness and
>>> admiration ...
>>>
>>> there are always two ways to reach a far point, it's left around or
>>> right around the world, depending on your position one of the ways is
>>> always the longer one. I can understand that some without hurry and
>>> money issues chose the longer one, perhaps also because of more chance
>>> for adventurous happenings, unknown and the unexpected
>>>
>>> so know I explained in a similar long way what I do not understand, why
>>> would you make such a complicated out of scope code, slow, certainly
>>> dangerous ... if at least it would be perl, but bash calling external
>>> prog and grepping, whow ... when you can solve it with a line of code ?
>>>
>>> this task would fit pf or ipfw much better, would be more elegant and
>>> zillions times faster and secure, not speaking about time investment,
>>> how much time you need to write 5/6 keywords of code?
>>>
>>> or is it for demonstration purpose, showing it as an alternative
>>> possibility?
>>>
>>
>> It's great read this. I just know BASH SHELL, but if you tell me that
>> I can make this safer and faster... Previously post I talk about
>> this!! That someone tell me if there is a better way of do that, I'm
>> newer !! Please, if you can guide me
>>
>
>
> who knows ...
>
> what is your purpose? solve bandwidth problems? Connection rate?
> Congestion? I believe that limiting to *one* download is not your real
> intention, because the browser could still open hundreds of regular
> pages and your download limit is nuked and was for nothing ...
>
> what is your operating system?
>

I intend to solve bandwidth problems. For the people who use download
managers or accelerators, just limit them to 1 connection. Otherwise, I
tried to solve it with delay_pools; the traffic that I deliver to the
client was just as I configured, but with accelerators the upload
saturates the channel.

>
>
> --
> H
> +55 11 4249.
>


Re: [squid-users] limiting connections

2012-04-04 Thread Carlos Manuel Trepeu Pupo
On Tue, Apr 3, 2012 at 6:35 PM, H  wrote:
> Eliezer Croitoru wrote:
>> On 03/04/2012 18:30, Carlos Manuel Trepeu Pupo wrote:
>>> On Mon, Apr 2, 2012 at 6:43 PM, Amos Jeffries
>>> wrote:
>>>> On 03.04.2012 02:21, Carlos Manuel Trepeu Pupo wrote:
>>>>>
>>>>> Thanks a looot !! That's what I'm missing, everything work
>>>>> fine now. So this script can use it cause it's already works.
>>>>>
>>>>> Now, I need to know if there is any way to consult the active request
>>>>> in squid that work faster that squidclient 
>>>>>
>>>>
>>>> ACL types are pretty easy to add to the Squid code. I'm happy to
>>>> throw an
>>>> ACL patch your way for a few $$.
>>>>
>>>> Which comes back to me earlier still unanswered question about why
>>>> you want
>>>> to do this very, very strange thing?
>>>>
>>>> Amos
>>>>
>>>
>>>
>>> OK !! Here the complicate and strange explanation:
>>>
>>> Where I work we have 128 Kbps for the use of almost 80 PCs, a few of
>>> them use download accelerators and saturate the channel. I began to
>>> use the ACL maxconn but I have still a few problems. 60 of the clients
>>> are under an ISA server that I don't administrate, so I can't limit
>>> the maxconn to them like the others. Now with this ACL, everyone can
>>> download but with only one connection. that's the strange main idea.
>> what do you mean by only one connection?
>> if it's under one isa server then all of them share the same external IP.
>>
>
> Hi
>
> I am following this thread with mixed feelings of weirdness and
> admiration ...
>
> there are always two ways to reach a far point, it's left around or
> right around the world, depending on your position one of the ways is
> always the longer one. I can understand that some without hurry and
> money issues chose the longer one, perhaps also because of more chance
> for adventurous happenings, unknown and the unexpected
>
> so know I explained in a similar long way what I do not understand, why
> would you make such a complicated out of scope code, slow, certainly
> dangerous ... if at least it would be perl, but bash calling external
> prog and grepping, whow ... when you can solve it with a line of code ?
>
> this task would fit pf or ipfw much better, would be more elegant and
> zillions times faster and secure, not speaking about time investment,
> how much time you need to write 5/6 keywords of code?
>
> or is it for demonstration purpose, showing it as an alternative
> possibility?
>

It's great to read this. I only know bash shell, but if you tell me that
I can make this safer and faster... In a previous post I talked about
this! I asked whether someone could tell me if there is a better way to do
it; I'm new at this! Please guide me if you can.

>
> --
> H
> +55 11 4249.
>


Re: [squid-users] limiting connections

2012-04-03 Thread Carlos Manuel Trepeu Pupo
On Tue, Apr 3, 2012 at 4:36 PM, Eliezer Croitoru  wrote:
> On 03/04/2012 18:30, Carlos Manuel Trepeu Pupo wrote:
>>
>> On Mon, Apr 2, 2012 at 6:43 PM, Amos Jeffries
>>  wrote:
>>>
>>> On 03.04.2012 02:21, Carlos Manuel Trepeu Pupo wrote:
>>>>
>>>>
>>>> Thanks a looot !! That's what I'm missing, everything work
>>>> fine now. So this script can use it cause it's already works.
>>>>
>>>> Now, I need to know if there is any way to consult the active request
>>>> in squid that work faster that squidclient 
>>>>
>>>
>>> ACL types are pretty easy to add to the Squid code. I'm happy to throw an
>>> ACL patch your way for a few $$.
>>>
>>> Which comes back to me earlier still unanswered question about why you
>>> want
>>> to do this very, very strange thing?
>>>
>>> Amos
>>>
>>
>>
>> OK !! Here the complicate and strange explanation:
>>
>> Where I work we have 128 Kbps for the use of almost 80 PCs, a few of
>> them use download accelerators and saturate the channel. I began to
>> use the ACL maxconn but I have still a few problems. 60 of the clients
>> are under an ISA server that I don't administrate, so I can't limit
>> the maxconn to them like the others. Now with this ACL, everyone can
>> download but with only one connection. that's the strange main idea.
>
> what do you mean by only one connection?
> if it's under one isa server then all of them share the same external IP.
>

Yes, all the users behind the ISA server can only download the same file
with one connection, no more, because, as you say, they share the same IP.

>
> --
> Eliezer Croitoru
> https://www1.ngtech.co.il
> IT consulting for Nonprofit organizations
> eliezer  ngtech.co.il


Re: [squid-users] limiting connections

2012-04-03 Thread Carlos Manuel Trepeu Pupo
On Mon, Apr 2, 2012 at 6:43 PM, Amos Jeffries  wrote:
> On 03.04.2012 02:21, Carlos Manuel Trepeu Pupo wrote:
>>
>> Thanks a looot !! That's what I'm missing, everything work
>> fine now. So this script can use it cause it's already works.
>>
>> Now, I need to know if there is any way to consult the active request
>> in squid that work faster that squidclient 
>>
>
> ACL types are pretty easy to add to the Squid code. I'm happy to throw an
> ACL patch your way for a few $$.
>
> Which comes back to me earlier still unanswered question about why you want
> to do this very, very strange thing?
>
> Amos
>


OK! Here is the complicated and strange explanation:

Where I work we have 128 Kbps for the use of almost 80 PCs, and a few of
them use download accelerators and saturate the channel. I began to use the
maxconn ACL but I still have a few problems. 60 of the clients are behind an
ISA server that I don't administer, so I can't limit maxconn for them like
for the others. Now, with this ACL, everyone can download, but with only one
connection. That's the strange main idea.
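(For reference, the maxconn ACL mentioned above is normally used roughly
like this; the threshold is illustrative:)

acl manyconn maxconn 2
http_access deny manyconn

This denies a new request from a client IP that already has more than 2
connections open.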


Re: [squid-users] limiting connections

2012-04-02 Thread Carlos Manuel Trepeu Pupo
Thanks a lot! That's what I was missing; everything works
fine now. So this script can be used, since it already works.

Now I need to know if there is any way to consult the active requests
in squid that works faster than squidclient ...

On Sat, Mar 31, 2012 at 9:58 PM, Amos Jeffries  wrote:
> On 1/04/2012 7:58 a.m., Carlos Manuel Trepeu Pupo wrote:
>>
>> On Sat, Mar 31, 2012 at 4:18 AM, Amos Jeffries
>>  wrote:
>>>
>>> On 31/03/2012 3:07 a.m., Carlos Manuel Trepeu Pupo wrote:
>>>>
>>>>
>>>> Now I have the following question:
>>>> The possible error to return are 'OK' or 'ERR', if I assume like
>>>> Boolean answer, "OK"->TRUE&    "ERR"->FALSE. Is this right ?
>>>
>>>
>>> Equivalent, yes. Specifically it means success / failure or match /
>>> non-match on the ACL.
>>>
>>>
>>>> So, if I deny my acl:
>>>> http_access deny external_helper_acl
>>>>
>>>> work like this (with the http_access below):
>>>> If return "OK" ->    I denied
>>>> If return "ERR" ->    I do not denied
>>>>
>>>> It's right this ??? Tanks again for the help !!!
>>>
>>>
>>> Correct.
>>
>> OK, following the idea of this thread that's what I have:
>>
>> #!/bin/bash
>> while read line; do
>>         # ->  This it for debug (Testing i saw that not always save to
>> file, maybe not always pass from this ACL)
>>         echo $line>>  /home/carlos/guarda&
>>
>>         result=`squidclient -h 10.11.10.18 mgr:active_requests | grep
>> -c "$line"`
>>
>>   if [ $result == 1 ]
>>         then
>>         echo 'OK'
>>         echo 'OK'>>/home/carlos/guarda&
>>   else
>>         echo 'ERR'
>>         echo 'ERR'>>/home/carlos/guarda&
>>   fi
>> done
>>
>> In the squid.conf this is the configuration:
>>
>> acl test src 10.11.10.12/32
>> acl test src 10.11.10.11/32
>>
>> acl extensions url_regex "/etc/squid3/extensions"
>> # extensions contains:
>>
>> \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|mpg|wma|ogg|wmv|asx|asf|deb|rpm|exe|zip|tar|tgz|rar|ppt|doc|tiff|pdf)$
>> external_acl_type one_conn %URI /home/carlos/contain
>> acl limit external one_conn
>>
>> http_access allow localhost
>> http_access deny extensions !limit
>> deny_info ERR_LIMIT limit
>> http_access allow test
>>
>>
>> I start to download from:
>> 10.11.10.12 ->
>>  http://ch.releases.ubuntu.com//oneiric/ubuntu-11.10-desktop-i386.iso
>> then start from:
>> 10.11.10.11 ->
>>  http://ch.releases.ubuntu.com//oneiric/ubuntu-11.10-desktop-i386.iso
>>
>> And let me download. What I'm missing ???
>
>
> You must set "ttl=0 negative_ttl=0 grace=0" as options for your
> external_acl_type directive. To disable caching optimizations on the helper
> results.
>
> Amos
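(Combining Amos's advice with the directive quoted earlier in this message,
the adjusted line would presumably read:)

external_acl_type one_conn ttl=0 negative_ttl=0 grace=0 %URI /home/carlos/contain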


Re: [squid-users] limiting connections

2012-03-31 Thread Carlos Manuel Trepeu Pupo
On Sat, Mar 31, 2012 at 4:18 AM, Amos Jeffries  wrote:
> On 31/03/2012 3:07 a.m., Carlos Manuel Trepeu Pupo wrote:
>>
>>
>> Now I have the following question:
>> The possible error to return are 'OK' or 'ERR', if I assume like
>> Boolean answer, "OK"->TRUE&  "ERR"->FALSE. Is this right ?
>
>
> Equivalent, yes. Specifically it means success / failure or match /
> non-match on the ACL.
>
>
>> So, if I deny my acl:
>> http_access deny external_helper_acl
>>
>> work like this (with the http_access below):
>> If return "OK" ->  I denied
>> If return "ERR" ->  I do not denied
>>
>> It's right this ??? Tanks again for the help !!!
>
>
> Correct.

OK, following the idea of this thread that's what I have:

#!/bin/bash
while read line; do
    # -> This is for debugging (testing, I saw that it does not always save
    # to the file; maybe it does not always pass through this ACL)
    echo $line >> /home/carlos/guarda &

    result=`squidclient -h 10.11.10.18 mgr:active_requests | grep -c "$line"`

    if [ $result == 1 ]
    then
        echo 'OK'
        echo 'OK' >> /home/carlos/guarda &
    else
        echo 'ERR'
        echo 'ERR' >> /home/carlos/guarda &
    fi
done

In the squid.conf this is the configuration:

acl test src 10.11.10.12/32
acl test src 10.11.10.11/32

acl extensions url_regex "/etc/squid3/extensions"
# extensions contains:
\.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|mpg|wma|ogg|wmv|asx|asf|deb|rpm|exe|zip|tar|tgz|rar|ppt|doc|tiff|pdf)$
external_acl_type one_conn %URI /home/carlos/contain
acl limit external one_conn

http_access allow localhost
http_access deny extensions !limit
deny_info ERR_LIMIT limit
http_access allow test


I start to download from:
10.11.10.12 -> 
http://ch.releases.ubuntu.com//oneiric/ubuntu-11.10-desktop-i386.iso
then start from:
10.11.10.11 -> 
http://ch.releases.ubuntu.com//oneiric/ubuntu-11.10-desktop-i386.iso

And it lets me download. What am I missing?


# -

http_access deny all



>
> Amos
>


Re: [squid-users] limiting connections

2012-03-30 Thread Carlos Manuel Trepeu Pupo
On Thu, Mar 29, 2012 at 4:03 PM, Eliezer Croitoru  wrote:
> On 29/03/2012 21:05, Carlos Manuel Trepeu Pupo wrote:
>>
>> On Tue, Mar 27, 2012 at 1:23 PM, Eliezer Croitoru
>>  wrote:
>>>
>>> On 27/03/2012 17:27, Carlos Manuel Trepeu Pupo wrote:
>>>>
>>>>
>>>> On Mon, Mar 26, 2012 at 5:45 PM, Amos Jeffries
>>>>  wrote:
>>>>>
>>>>>
>>>>> On 27.03.2012 10:13, Carlos Manuel Trepeu Pupo wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Sat, Mar 24, 2012 at 6:31 PM, Amos Jeffries
>>>>>> wrote:
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 25/03/2012 7:23 a.m., Carlos Manuel Trepeu Pupo wrote:
>>>>>>>
>>>>>>>> On Thu, Mar 22, 2012 at 10:00 PM, Amos Jeffries wrote:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 23/03/2012 5:42 a.m., Carlos Manuel Trepeu Pupo wrote:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> I need to block each user to make just one connection to download
>>>>>>>>>> specific extension files, but I dont know how to tell that can
>>>>>>>>>> make
>>>>>>>>>> just one connection to each file and not just one connection to
>>>>>>>>>> every
>>>>>>>>>> file with this extension.
>>>>>>>>>>
>>>>>>>>>> i.e:
>>>>>>>>>> www.google.com #All connection that required
>>>>>>>>>> www.any.domain.com/my_file.rar #just one connection to that file
>>>>>>>>>> www.other.domain.net/other_file.iso #just connection to this file
>>>>>>>>>> www.other_domain1.com/other_file1.rar #just one connection to that
>>>>>>>>>> file
>>>>>>>>>>
>>>>>>>>>> I hope you understand me and can help me, I have my boss hurrying
>>>>>>>>>> me
>>>>>>>>>> !!!
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> There is no easy way to test this in Squid.
>>>>>>>>>
>>>>>>>>> You need an external_acl_type helper which gets given the URI and
>>>>>>>>> decides
>>>>>>>>> whether it is permitted or not. That decision can be made by
>>>>>>>>> querying
>>>>>>>>> Squid
>>>>>>>>> cache manager for the list of active_requests and seeing if the URL
>>>>>>>>> appears
>>>>>>>>> more than once.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Hello Amos, following your instructions I make this
>>>>>>>> external_acl_type
>>>>>>>> helper:
>>>>>>>>
>>>>>>>> #!/bin/bash
>>>>>>>> result=`squidclient -h 192.168.19.19 mgr:active_requests | grep -c
>>>>>>>> "$1"`
>>>>>>>> if [ $result -eq 0 ]
>>>>>>>> then
>>>>>>>> echo 'OK'
>>>>>>>> else
>>>>>>>> echo 'ERR'
>>>>>>>> fi
>>>>>>>>
>>>>>>>> # If I have the same URI then I denied. I make a few test and it
>>>>>>>> work
>>>>>>>> for me. The problem is when I add the rule to the squid. I make
>>>>>>>> this:
>>>>>>>>
>>>>>>>> acl extensions url_regex "/etc/squid3/extensions"
>>>>>>>> external_acl_type one_conn %URI /home/carlos/script
>>>>>>>> acl limit external one_conn
>>>>>>>>
>>>>>>>> # where extensions have:
>>>>>>

Re: [squid-users] limiting connections

2012-03-29 Thread Carlos Manuel Trepeu Pupo
On Thu, Mar 29, 2012 at 4:03 PM, Eliezer Croitoru  wrote:
> On 29/03/2012 21:05, Carlos Manuel Trepeu Pupo wrote:
>>
>> On Tue, Mar 27, 2012 at 1:23 PM, Eliezer Croitoru
>>  wrote:
>>>
>>> On 27/03/2012 17:27, Carlos Manuel Trepeu Pupo wrote:
>>>>
>>>>
>>>> On Mon, Mar 26, 2012 at 5:45 PM, Amos Jeffries
>>>>  wrote:
>>>>>
>>>>>
>>>>> On 27.03.2012 10:13, Carlos Manuel Trepeu Pupo wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Sat, Mar 24, 2012 at 6:31 PM, Amos Jeffries
>>>>>> wrote:
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 25/03/2012 7:23 a.m., Carlos Manuel Trepeu Pupo wrote:
>>>>>>>
>>>>>>>> On Thu, Mar 22, 2012 at 10:00 PM, Amos Jeffries wrote:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 23/03/2012 5:42 a.m., Carlos Manuel Trepeu Pupo wrote:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> I need to block each user to make just one connection to download
>>>>>>>>>> specific extension files, but I dont know how to tell that can
>>>>>>>>>> make
>>>>>>>>>> just one connection to each file and not just one connection to
>>>>>>>>>> every
>>>>>>>>>> file with this extension.
>>>>>>>>>>
>>>>>>>>>> i.e:
>>>>>>>>>> www.google.com #All connection that required
>>>>>>>>>> www.any.domain.com/my_file.rar #just one connection to that file
>>>>>>>>>> www.other.domain.net/other_file.iso #just connection to this file
>>>>>>>>>> www.other_domain1.com/other_file1.rar #just one connection to that
>>>>>>>>>> file
>>>>>>>>>>
>>>>>>>>>> I hope you understand me and can help me, I have my boss hurrying
>>>>>>>>>> me
>>>>>>>>>> !!!
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> There is no easy way to test this in Squid.
>>>>>>>>>
>>>>>>>>> You need an external_acl_type helper which gets given the URI and
>>>>>>>>> decides
>>>>>>>>> whether it is permitted or not. That decision can be made by
>>>>>>>>> querying
>>>>>>>>> Squid
>>>>>>>>> cache manager for the list of active_requests and seeing if the URL
>>>>>>>>> appears
>>>>>>>>> more than once.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Hello Amos, following your instructions I make this
>>>>>>>> external_acl_type
>>>>>>>> helper:
>>>>>>>>
>>>>>>>> #!/bin/bash
>>>>>>>> result=`squidclient -h 192.168.19.19 mgr:active_requests | grep -c
>>>>>>>> "$1"`
>>>>>>>> if [ $result -eq 0 ]
>>>>>>>> then
>>>>>>>> echo 'OK'
>>>>>>>> else
>>>>>>>> echo 'ERR'
>>>>>>>> fi
>>>>>>>>
>>>>>>>> # If I have the same URI then I denied. I make a few test and it
>>>>>>>> work
>>>>>>>> for me. The problem is when I add the rule to the squid. I make
>>>>>>>> this:
>>>>>>>>
>>>>>>>> acl extensions url_regex "/etc/squid3/extensions"
>>>>>>>> external_acl_type one_conn %URI /home/carlos/script
>>>>>>>> acl limit external one_conn
>>>>>>>>
>>>>>>>> # where extensions have:
>>>>>>

Fwd: [squid-users] limiting connections

2012-03-29 Thread Carlos Manuel Trepeu Pupo
On Tue, Mar 27, 2012 at 1:23 PM, Eliezer Croitoru  wrote:
> On 27/03/2012 17:27, Carlos Manuel Trepeu Pupo wrote:
>>
>> On Mon, Mar 26, 2012 at 5:45 PM, Amos Jeffries
>>  wrote:
>>>
>>> On 27.03.2012 10:13, Carlos Manuel Trepeu Pupo wrote:
>>>>
>>>>
>>>> On Sat, Mar 24, 2012 at 6:31 PM, Amos Jeffries
>>>> wrote:
>>>>>
>>>>>
>>>>> On 25/03/2012 7:23 a.m., Carlos Manuel Trepeu Pupo wrote:
>>>>>
>>>>>> On Thu, Mar 22, 2012 at 10:00 PM, Amos Jeffries wrote:
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 23/03/2012 5:42 a.m., Carlos Manuel Trepeu Pupo wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> I need to block each user to make just one connection to download
>>>>>>>> specific extension files, but I dont know how to tell that can make
>>>>>>>> just one connection to each file and not just one connection to
>>>>>>>> every
>>>>>>>> file with this extension.
>>>>>>>>
>>>>>>>> i.e:
>>>>>>>> www.google.com #All connection that required
>>>>>>>> www.any.domain.com/my_file.rar #just one connection to that file
>>>>>>>> www.other.domain.net/other_file.iso #just connection to this file
>>>>>>>> www.other_domain1.com/other_file1.rar #just one connection to that
>>>>>>>> file
>>>>>>>>
>>>>>>>> I hope you understand me and can help me, I have my boss hurrying me
>>>>>>>> !!!
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> There is no easy way to test this in Squid.
>>>>>>>
>>>>>>> You need an external_acl_type helper which gets given the URI and
>>>>>>> decides
>>>>>>> whether it is permitted or not. That decision can be made by querying
>>>>>>> Squid
>>>>>>> cache manager for the list of active_requests and seeing if the URL
>>>>>>> appears
>>>>>>> more than once.
>>>>>>
>>>>>>
>>>>>>
>>>>>> Hello Amos, following your instructions I make this external_acl_type
>>>>>> helper:
>>>>>>
>>>>>> #!/bin/bash
>>>>>> result=`squidclient -h 192.168.19.19 mgr:active_requests | grep -c
>>>>>> "$1"`
>>>>>> if [ $result -eq 0 ]
>>>>>> then
>>>>>> echo 'OK'
>>>>>> else
>>>>>> echo 'ERR'
>>>>>> fi
>>>>>>
>>>>>> # If I have the same URI then I denied. I make a few test and it work
>>>>>> for me. The problem is when I add the rule to the squid. I make this:
>>>>>>
>>>>>> acl extensions url_regex "/etc/squid3/extensions"
>>>>>> external_acl_type one_conn %URI /home/carlos/script
>>>>>> acl limit external one_conn
>>>>>>
>>>>>> # where extensions have:
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|mpg|wma|ogg|wmv|asx|asf|deb|rpm|exe|zip|tar|tgz|rar|ppt|doc|tiff|pdf)$
>>>>>>
>>>>>> http_access deny extensions limit
>>>>>>
>>>>>>
>>>>>> So when I make squid3 -k reconfigure the squid stop working
>>>>>>
>>>>>> What can be happening ???
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> * The helper needs to be running in a constant loop.
>>>>> You can find an example
>>>>>
>>>>>
>>>>>
>>>>> http://bazaar.launchpad.net/~squid/squid/3.2/view/head:/helpers/url_rewrite/fake/url_fake_rewrite.sh
>>>>> although that is re-writer and you do need to keep the OK/ERR for
>>>>> external
>>>>> ACL.
>>>>
>>>>
>>>>
>>>> Sorry, this is my first helper, I do not understand the meaning of
>>>>

Re: [squid-users] limiting connections

2012-03-27 Thread Carlos Manuel Trepeu Pupo
On Mon, Mar 26, 2012 at 5:45 PM, Amos Jeffries  wrote:
> On 27.03.2012 10:13, Carlos Manuel Trepeu Pupo wrote:
>>
>> On Sat, Mar 24, 2012 at 6:31 PM, Amos Jeffries 
>> wrote:
>>>
>>> On 25/03/2012 7:23 a.m., Carlos Manuel Trepeu Pupo wrote:
>>>
>>>> On Thu, Mar 22, 2012 at 10:00 PM, Amos Jeffries wrote:
>>>>>
>>>>>
>>>>> On 23/03/2012 5:42 a.m., Carlos Manuel Trepeu Pupo wrote:
>>>>>>
>>>>>>
>>>>>> I need to block each user to make just one connection to download
>>>>>> specific extension files, but I dont know how to tell that can make
>>>>>> just one connection to each file and not just one connection to every
>>>>>> file with this extension.
>>>>>>
>>>>>> i.e:
>>>>>> www.google.com #All connection that required
>>>>>> www.any.domain.com/my_file.rar #just one connection to that file
>>>>>> www.other.domain.net/other_file.iso #just connection to this file
>>>>>> www.other_domain1.com/other_file1.rar #just one connection to that
>>>>>> file
>>>>>>
>>>>>> I hope you understand me and can help me, I have my boss hurrying me
>>>>>> !!!
>>>>>
>>>>>
>>>>>
>>>>> There is no easy way to test this in Squid.
>>>>>
>>>>> You need an external_acl_type helper which gets given the URI and
>>>>> decides
>>>>> whether it is permitted or not. That decision can be made by querying
>>>>> Squid
>>>>> cache manager for the list of active_requests and seeing if the URL
>>>>> appears
>>>>> more than once.
>>>>
>>>>
>>>> Hello Amos, following your instructions I make this external_acl_type
>>>> helper:
>>>>
>>>> #!/bin/bash
>>>> result=`squidclient -h 192.168.19.19 mgr:active_requests | grep -c "$1"`
>>>> if [ $result -eq 0 ]
>>>> then
>>>> echo 'OK'
>>>> else
>>>> echo 'ERR'
>>>> fi
>>>>
>>>> # If I have the same URI then I denied. I make a few test and it work
>>>> for me. The problem is when I add the rule to the squid. I make this:
>>>>
>>>> acl extensions url_regex "/etc/squid3/extensions"
>>>> external_acl_type one_conn %URI /home/carlos/script
>>>> acl limit external one_conn
>>>>
>>>> # where extensions have:
>>>>
>>>>
>>>>
>>>> \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|mpg|wma|ogg|wmv|asx|asf|deb|rpm|exe|zip|tar|tgz|rar|ppt|doc|tiff|pdf)$
>>>>
>>>> http_access deny extensions limit
>>>>
>>>>
>>>> So when I make squid3 -k reconfigure the squid stop working
>>>>
>>>> What can be happening ???
>>>
>>>
>>>
>>> * The helper needs to be running in a constant loop.
>>> You can find an example
>>>
>>>
>>> http://bazaar.launchpad.net/~squid/squid/3.2/view/head:/helpers/url_rewrite/fake/url_fake_rewrite.sh
>>> although that is re-writer and you do need to keep the OK/ERR for
>>> external
>>> ACL.
>>
>>
>> Sorry, this is my first helper, I do not understand the meaning of
>> running in a constant loop, in the example I see something like I do.
>> Making some test I found that without this line :
>> result=`squidclient -h 192.168.19.19 mgr:active_requests | grep -c "$1"`
>> the helper not crash, dont work event too, but do not crash, so i
>> consider this is in some way the problem.
>
>
>
> Squid starts helpers then uses the STDIN channel to pass it a series of
> requests, reading STDOUt channel for the results. The helper once started is
> expected to continue until a EOL/close/terminate signal is received on its
> STDIN.
>
> Your helper is exiting without being asked to be Squid after only one
> request. That is logged by Squid as a "crash".
>
>
>>
>>>
>>> * "eq 0" - there should always be 1 request matching the URL. Which is
>>> the
>>> request you are testing to see if its >1 or not. You are wanting to deny
>>> for
>>> the case where there are *2* requests in existence.
>>
>>
>> This is true, but the way I saw was: "If the URL do not exist, so
>

Re: [squid-users] limiting connections

2012-03-26 Thread Carlos Manuel Trepeu Pupo
On Sat, Mar 24, 2012 at 6:31 PM, Amos Jeffries  wrote:
> On 25/03/2012 7:23 a.m., Carlos Manuel Trepeu Pupo wrote:
>
>> On Thu, Mar 22, 2012 at 10:00 PM, Amos Jeffries wrote:
>>>
>>> On 23/03/2012 5:42 a.m., Carlos Manuel Trepeu Pupo wrote:
>>>>
>>>> I need to block each user to make just one connection to download
>>>> specific extension files, but I dont know how to tell that can make
>>>> just one connection to each file and not just one connection to every
>>>> file with this extension.
>>>>
>>>> i.e:
>>>> www.google.com #All connection that required
>>>> www.any.domain.com/my_file.rar #just one connection to that file
>>>> www.other.domain.net/other_file.iso #just connection to this file
>>>> www.other_domain1.com/other_file1.rar #just one connection to that file
>>>>
>>>> I hope you understand me and can help me, I have my boss hurrying me !!!
>>>
>>>
>>> There is no easy way to test this in Squid.
>>>
>>> You need an external_acl_type helper which gets given the URI and decides
>>> whether it is permitted or not. That decision can be made by querying
>>> Squid
>>> cache manager for the list of active_requests and seeing if the URL
>>> appears
>>> more than once.
>>
>> Hello Amos, following your instructions I make this external_acl_type
>> helper:
>>
>> #!/bin/bash
>> result=`squidclient -h 192.168.19.19 mgr:active_requests | grep -c "$1"`
>> if [ $result -eq 0 ]
>> then
>> echo 'OK'
>> else
>> echo 'ERR'
>> fi
>>
>> # If I have the same URI then I denied. I make a few test and it work
>> for me. The problem is when I add the rule to the squid. I make this:
>>
>> acl extensions url_regex "/etc/squid3/extensions"
>> external_acl_type one_conn %URI /home/carlos/script
>> acl limit external one_conn
>>
>> # where extensions have:
>>
>> \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|mpg|wma|ogg|wmv|asx|asf|deb|rpm|exe|zip|tar|tgz|rar|ppt|doc|tiff|pdf)$
>>
>> http_access deny extensions limit
>>
>>
>> So when I make squid3 -k reconfigure the squid stop working
>>
>> What can be happening ???
>
>
> * The helper needs to be running in a constant loop.
> You can find an example
> http://bazaar.launchpad.net/~squid/squid/3.2/view/head:/helpers/url_rewrite/fake/url_fake_rewrite.sh
> although that is re-writer and you do need to keep the OK/ERR for external
> ACL.

Sorry, this is my first helper; I do not understand what running in a
constant loop means, and in the example I see something similar to what I do.
Making some tests I found that without this line:
result=`squidclient -h 192.168.19.19 mgr:active_requests | grep -c "$1"`
the helper does not crash. It does not work either, but it does not crash,
so I think this line is somehow the source of the problem.
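
For illustration, a minimal sketch of such a loop could look like this
(untested; it reuses the squidclient call above and, following the point
quoted just below, treats one matching request as normal):

#!/bin/bash
# Keep reading one URI per line from stdin and answering on stdout until
# squid closes stdin; exiting earlier is what squid logs as a "crash".
while read uri; do
    count=`squidclient -h 192.168.19.19 mgr:active_requests | grep -c "$uri"`
    if [ "$count" -le 1 ]; then
        echo 'OK'
    else
        echo 'ERR'
    fi
done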

>
> * "eq 0" - there should always be 1 request matching the URL. Which is the
> request you are testing to see if its >1 or not. You are wanting to deny for
> the case where there are *2* requests in existence.

This is true, but the way I saw it was: "if the URL does not already
appear, it cannot be duplicated". I do not think that is wrong!

>
> * ensure you have manager requests form localhost not going through the ACL
> test.

I was doing this wrong: requests from localhost were going through the ACL,
but I have just changed that. The problem persists. What can I do?

>
>
> Amos
>


Re: [squid-users] limiting connections

2012-03-24 Thread Carlos Manuel Trepeu Pupo
On Thu, Mar 22, 2012 at 10:00 PM, Amos Jeffries  wrote:
> On 23/03/2012 5:42 a.m., Carlos Manuel Trepeu Pupo wrote:
>>
>> I need to block each user to make just one connection to download
>> specific extension files, but I dont know how to tell that can make
>> just one connection to each file and not just one connection to every
>> file with this extension.
>>
>> i.e:
>> www.google.com #All connection that required
>> www.any.domain.com/my_file.rar #just one connection to that file
>> www.other.domain.net/other_file.iso #just connection to this file
>> www.other_domain1.com/other_file1.rar #just one connection to that file
>>
>> I hope you understand me and can help me, I have my boss hurrying me !!!
>
>
> There is no easy way to test this in Squid.
>
> You need an external_acl_type helper which gets given the URI and decides
> whether it is permitted or not. That decision can be made by querying Squid
> cache manager for the list of active_requests and seeing if the URL appears
> more than once.

Hello Amos, following your instructions I made this external_acl_type helper:

#!/bin/bash
result=`squidclient -h 192.168.19.19 mgr:active_requests | grep -c "$1"`
if [ $result -eq 0 ]
then
echo 'OK'
else
echo 'ERR'
fi

# If the same URI is already active then I deny it. I made a few tests and it
works for me. The problem appears when I add the rule to squid. I do this:

acl extensions url_regex "/etc/squid3/extensions"
external_acl_type one_conn %URI /home/carlos/script
acl limit external one_conn

# where extensions have:
\.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|mpg|wma|ogg|wmv|asx|asf|deb|rpm|exe|zip|tar|tgz|rar|ppt|doc|tiff|pdf)$

http_access deny extensions limit


So when I run squid3 -k reconfigure, squid stops working.

What can be happening?

This is my log of squid:
Mar 24 09:25:04 test squid[28075]: helperHandleRead: unexpected read
from one_conn #1, 3 bytes 'OK '
Mar 24 09:25:04 test squid[28075]: helperHandleRead: unexpected read
from one_conn #2, 3 bytes 'OK '
Mar 24 09:25:04 test squid[28075]: WARNING: one_conn #1 (FD 15) exited
Mar 24 09:25:04 test squid[28075]: WARNING: one_conn #2 (FD 16) exited
Mar 24 09:25:04 test squid[28075]: CACHEMGR: @192.168.19.19
requesting 'active_requests'
Mar 24 09:25:04 test squid[28075]: helperHandleRead: unexpected read
from one_conn #3, 3 bytes 'OK '
Mar 24 09:25:04 test squid[28075]: WARNING: one_conn #3 (FD 24) exited
Mar 24 09:25:04 test squid[28075]: helperHandleRead: unexpected read
from one_conn #4, 4 bytes 'ERR '
Mar 24 09:25:04 test squid[28075]: WARNING: one_conn #4 (FD 27) exited
Mar 24 09:25:04 test squid[28075]: Too few one_conn processes are running
Mar 24 09:25:04 test squid[28075]: storeDirWriteCleanLogs: Starting...
Mar 24 09:25:04 test squid[28075]: WARNING: Closing open FD   12
Mar 24 09:25:04 test squid[28075]:   Finished.  Wrote 25613 entries.
Mar 24 09:25:04 test squid[28075]:   Took 0.00 seconds (7740404.96 entries/sec).
Mar 24 09:25:04 test squid[28075]: The one_conn helpers are crashing
too rapidly, need help!


>
> Amos
>


Re: [squid-users] Re: Need some help about delay_parameters directive

2012-03-22 Thread Carlos Manuel Trepeu Pupo
If you want to limit the speed to 128 KB/s once the download reaches 10 MB,
then it must be something like this:

delay_parameters 1 131072/10485760

Remember that 1 MB = 1024*1024 bytes; that is why 10485760 bytes
represent 10 MB. I hope this helps!
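
For context, the surrounding pool definition could look roughly like this
(a minimal sketch; the pool number and the client ACL are assumptions, not
taken from the thread):

acl limited_users src 10.0.0.0/24          # hypothetical client network
delay_pools 1
delay_class 1 1
delay_parameters 1 131072/10485760         # refill 128 KB/s, 10 MB bucket
delay_access 1 allow limited_users
delay_access 1 deny all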

On Thu, Mar 22, 2012 at 4:41 PM, Muhammad Yousuf Khan  wrote:
> bandwidth


[squid-users] limiting connections

2012-03-22 Thread Carlos Manuel Trepeu Pupo
I need to limit each user to just one connection when downloading files with
specific extensions, but I don't know how to say that they can make just one
connection to each file, rather than just one connection overall to every
file with these extensions.

i.e.:
www.google.com #All the connections required
www.any.domain.com/my_file.rar #just one connection to that file
www.other.domain.net/other_file.iso #just one connection to this file
www.other_domain1.com/other_file1.rar #just one connection to that file

I hope you understand me and can help me; I have my boss hurrying me!!!


Re: [squid-users] Re: Need some help about delay_parameters directive

2012-03-22 Thread Carlos Manuel Trepeu Pupo
The delay_parameters values are in bytes, not bits. Try it and tell me if it works!
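
As a worked example with the 32 KB/s and 10 MB figures from this thread, the
values in bytes would be:

32 KB/s = 32 * 1024 = 32768 bytes per second (the restore rate)
10 MB = 10 * 1024 * 1024 = 10485760 bytes (the bucket size)

delay_parameters 1 32768/10485760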

On Wed, Mar 21, 2012 at 2:11 AM, Muhammad Yousuf Khan  wrote:
> please help me. delay_parameter 1 32000/1024< this means  if i
> complete10MB what ever the size would be regardless of that my
> bandwidth would be limited to 32KB but in this case i can only able to
> download 5MB and then my bandwidth shrink downs to 32KB. why ..
> please help me. i search squid website it is clearly stating that
> delay parameters accepts "bytes" as a value. please help
>
> Thanks.
>
> On Tue, Mar 20, 2012 at 6:58 PM, Muhammad Yousuf Khan  
> wrote:
>> here is my acl and i want to limit download after every 10 MB of
>> download. now i am a bit confuse now. why this value giving me
>> expected result.
>> my_ip src 10.51.100.240
>> delay_pools 1
>> delay_class 1 1
>> delay_parameters 1 1/2000
>> delay_access 1 allow my_ip
>>
>> according to my learning and understanding with squid delay parameters
>> directive, it accepts bites as values. please correct me if wrong
>> because i am a newbie.
>> so according to my computation the calculation should be some thing
>> like that. 10M should be
>>
>> 10 x1024 = 10240KB
>> 10240x1024 = 10485760 Bytes
>> 10485760 x 8 = 83886080 bits
>>
>> now 2000 is giving me the desired result except 83886080. why?
>>
>> Please correct me and tell me what is wrong with my calculation or
>> understanding.
>>
>> Thanks,
>>
>> MYK


[squid-users] Different cache_peer based on GeoIP

2012-03-04 Thread Manuel
Hello,

is there any way to run some GeoIP solution, such as the one from Maxmind
(http://www.maxmind.com/app/country), with Squid? The idea is to use a
different cache_peer based on GeoIP. For example, a user visiting an address
from the United States would receive a different page (cached in a
different cache_peer) than a user in Canada.

Any ideas?

Kind regards

--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Different-cache-peer-based-on-GeoIP-tp4443549p4443549.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] squid with parent sock5 proxy

2012-03-01 Thread Carlos Manuel Trepeu Pupo
Hi, I need to install squid, but my parent proxy only supports SOCKS5.
Can I use squid on Windows for this?


[squid-users] ACL

2012-02-17 Thread Carlos Manuel Trepeu Pupo
Hi !

I want to block:
http://*.google.com.cu

but allow:
http://www.google.com.cu/custom*

I mean, deny all the subdomains of google.com.cu except the URLs that
contain the pattern below

I have Ubuntu with Squid 3.0 STABLE1 with this conf:

acl deny_google dstdom_regex -i google.com

acl allow_google urlpath_regex -i www.google.com.cu/custom

http_access allow allow_google
http_access deny deny_google

With this conf I allow all the custom searches but deny the rest. The
problem is that this configuration does not work... what is wrong??


Thanksss !!
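
One likely cause, for what it is worth: urlpath_regex only matches the path
part of the URL, so a pattern that contains the host name www.google.com.cu
can never match it. A sketch of the same intent using url_regex for the
exception:

acl allow_google url_regex -i ^http://www\.google\.com\.cu/custom
acl deny_google dstdomain .google.com.cu

http_access allow allow_google
http_access deny deny_google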


[squid-users] blocking page preview of google

2012-02-14 Thread Carlos Manuel Trepeu Pupo
I have squid 3.0 STABLE1 and I allow my users to use Google. Now I have
too little bandwidth, so I need to block the page previews and other
Google services. How can I do something like that with Squid? What
else can help to do that?


Re: [squid-users] limit maxconn

2012-01-27 Thread Carlos Manuel Trepeu Pupo
On 1/26/12, Amos Jeffries  wrote:
> On 27/01/2012 2:46 p.m., Carlos Manuel Trepeu Pupo wrote:
>> I have squid 3.0 STABLE1 giving service to 340 clients. I need to
>> limit the maxconn to 20, but I need to know if I put 192.168.10.0/24
>> will limit each IP to 20 or the entire /24 to 20. In case that the
>> rule it's for the entire /24, so I need to create the rule for each IP
>> ?
>
> Put "192.168.10.0/24" where exactly?

Sorry for the explication !!

>In the maxconn ACL? Wont work, maxconn takes a single value.
>In a separate unrelated src ACL? notice how src != maxconn. And its
> test result is equally independent when tested. src looks for an
> individual IP (the packet src IP) in a set.
>
> Amos
>

# I have this:
acl client src 10.10.10.0/24
acl client src 10.71.0.0/24
acl client src 10.1.0.0/24

acl max_conn maxconn 10

http_access deny client max_conn

# The idea of the above configuration is to allow a maximum of 10 HTTP
connections from each IP in the client networks to the proxy.

I need to know if this works, or if this configuration allows just 10 HTTP
connections shared between all of them!!!


[squid-users] limit maxconn

2012-01-26 Thread Carlos Manuel Trepeu Pupo
I have squid 3.0 STABLE1 serving 340 clients. I need to
limit maxconn to 20, but I need to know whether, if I put 192.168.10.0/24,
it will limit each IP to 20 or the entire /24 to 20. In case the
rule applies to the entire /24, do I need to create the rule for each IP?

Thanks


Re: [squid-users] save last access

2012-01-23 Thread Carlos Manuel Trepeu Pupo
By "user" I mean just real people, and maybe their IP.
By "surf last" I mean only when a user is loading a page.
I need the report in real time, but a user may have last surfed many days
ago, and I still need to keep that record.

I use Squid 3.0 STABLE1. What daemon can I use to do this?

Thanks a lot for your answer !!

On Sat, Jan 21, 2012 at 1:23 AM, Amos Jeffries  wrote:
> On 21/01/2012 5:06 a.m., Carlos Manuel Trepeu Pupo wrote:
>>
>> Hello ! I need to know when my users surf last time, so I need to know
>> if there is any way to have this information and save to an sql
>> database.
>
>
> The Squid log files are text data. So the answer is yes.
>
> Please explain "user".  Only real people? or any machine which connects to
> Squid?
>
> Please explain "surf last". Only when a user is loading the page? or even
> when their machine is doing something automatically by itself?
>
> Please explain under what conditions you are wantign the information back.
> monthly report? weekly? daily? hourly? real-time?
>
>
> Current Squid releases support logging daemons which can send log data
> anywhere and translate it to any form. Squid-3.2 bundles with a DB
> (database) daemon which is also available from SourceForge for squid-2.7
>
> Older Squid need log file reader daemons. Like squidtaild, and logger2sql.
>
> Amos
>


[squid-users] save last access

2012-01-20 Thread Carlos Manuel Trepeu Pupo
Hello! I need to know when my users last surfed, so I need to know
if there is any way to obtain this information and save it to an SQL
database.


Re: [squid-users] Configuring Squid LDAP Authentication

2012-01-11 Thread Carlos Manuel Trepeu Pupo
Post the rest of the .conf so we can see the ACL and http_access lines;
up to there everything looks fine!!
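
For reference, the part that normally follows the auth_param block looks
something like this (a sketch of the usual pattern, not the poster's actual
file):

acl ldap_users proxy_auth REQUIRED
http_access allow ldap_users
http_access deny all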

On Wed, Jan 11, 2012 at 1:43 PM, berry guru  wrote:
> Thanks for the response Carlos!  So I've copied and pasted the part of
> the configuration I modified.  Let me know if I should post all the
> config.  I'm running Squid 2.7
>
> auth_param basic program /usr/lib/squid/ldap_auth -R -b
> "dc=cyberdyne,dc=local" -D
> "cn=Administrator,cn=Users,dc=cyberdyne,dc=local" -w "passwordhere" -f
> sAMAccountName=%s -h 192.168.100.237
>    auth_param basic children 5
>    auth_param basic realm CYBERDYNE.LOCAL
>    auth_param basic credentialsttl 5 minutes
>
>
>
> On Wed, Jan 11, 2012 at 10:35 AM, Carlos Manuel Trepeu Pupo
>  wrote:
>> With that tutorial from papercut I just configure my LDAP auth and
>> everything work great, post you .conf and the version of squid.
>>
>> On Wed, Jan 11, 2012 at 1:30 PM, berry guru  wrote:
>>> first s


Re: [squid-users] Configuring Squid LDAP Authentication

2012-01-11 Thread Carlos Manuel Trepeu Pupo
With that tutorial from papercut I just configured my LDAP auth and
everything works great; post your .conf and the version of squid.

On Wed, Jan 11, 2012 at 1:30 PM, berry guru  wrote:
> first s


[squid-users] something in the logs

2011-12-14 Thread Carlos Manuel Trepeu Pupo
In the log I see this:
1323897149.888  9 10.10.10.3 TCP_MEM_HIT/200 454242 GET
http://proxy.mydomain:3128/squid-internal-periodic/store_digest -
NONE/- application/cache-digest

What is the meaning of this?


[squid-users] about rewrite an URL

2011-12-05 Thread Carlos Manuel Trepeu Pupo
I have on my FTP server a mirror of the Kaspersky update bases (all versions); I
use KLUpdater to build it (from the Kaspersky site). Now I want to
redirect everyone who requests the Kaspersky update domain to my
FTP server. How can I do that?

Thanks a lot
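
One way to do this is a url_rewrite_program helper. A rough sketch (the host
names and the URL pattern are placeholders, not real Kaspersky paths; an
empty reply line means "leave the URL unchanged"):

#!/bin/bash
# stdin gives: URL client_ip/fqdn user method ...; answer with the new URL,
# or an empty line to leave the request untouched.
while read url rest; do
    case "$url" in
        http://*kaspersky*/updates/*)                  # placeholder pattern
            echo "ftp://ftp.example.local/kaspersky/${url#*/updates/}"
            ;;
        *)
            echo ""
            ;;
    esac
done

and in squid.conf:

url_rewrite_program /usr/local/bin/kav_rewrite.sh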


Re: AW: [squid-users] block TOR

2011-12-05 Thread Carlos Manuel Trepeu Pupo
I want to block the Tor traffic because my clients use it to get around my
rules about blocked sites. In my firewall it is a little more
difficult to refresh the list of nodes that I want to block.

Jenny said that he/she cannot establish a connection to the Tor network
through squid, but I can't see the problem: using CONNECT on port 443
is all the client needs!

I'm waiting for you guys!!!

On Sun, Dec 4, 2011 at 1:50 AM, Jenny Lee  wrote:
>
> Judging from "dst" acl, ultrasurf traffic and all in this thread, this is 
> talking about outgoing traffic to Tor via squid.
>
> Why would anyone want to block Tor traffic to his/her webserver (if this is 
> not an ecommerce site)? If it was an ecommerce site, they would know what to 
> do already and not ask this question here. Tor exists are made available 
> daily and firewall is hte place to drop them.
>
> I still want to hear what OP would say.
>
> Jenny
>
>
>
>
>> From: amuel...@gmx.de
>> To: squid-users@squid-cache.org
>> Date: Sun, 4 Dec 2011 00:39:01 +0100
>> Subject: AW: [squid-users] block TOR
>>
>> The question is with traffic of tor should be blocked. Outgoing client
>> traffic to the tor network or incoming httpd requests from tor exit nodes ?
>>
>> Andreas
>>
>> -Ursprüngliche Nachricht-
>> Von: Jenny Lee [mailto:bodycar...@live.com]
>> Gesendet: Sonntag, 4. Dezember 2011 00:09
>> An: charlie@gmail.com; leolis...@solutti.com.br
>> Cc: squid-users@squid-cache.org
>> Betreff: RE: [squid-users] block TOR
>>
>>
>> I dont understand how you are managing to have anything to do with Tor to
>> start with.
>>
>> Tor is speaking SOCKS5. You need Polipo to speak HTTP on the client side and
>> SOCKS on the server side.
>>
>> I have actively tried to connect to 2 of our SOCKS5 machines (and Tor) via
>> my Squid and I could not succeed. I have even tried Amos' custom squid with
>> SOCKS support and still failed.
>>
>> Can someone explain to me as to how you are connecting to Tor with squid
>> (and consequently having a need to block it)?
>>
>> Jenny
>>
>>
>> > Date: Sat, 3 Dec 2011 16:37:05 -0500
>> > Subject: Re: [squid-users] block TOR
>> > From: charlie@gmail.com
>> > To: leolis...@solutti.com.br
>> > CC: bodycar...@live.com; squid-users@squid-cache.org
>> >
>> > Sorry for reopen an old post, but a few days ago i tried with this
>> > solution, and . like magic, all traffic to the Tor net it's
>> > blocked, just typing this:
>> > acl tor dst "/etc/squid3/tor"
>> > http_access deny tor
>> > where /etc/squid3/tor it's the file that I download from the page you
>> > people recommend me !!!
>> >
>> > Thanks a lot, this is something that are searching a lot of admin that
>> > I know, you should put somewhere where are easily to find !!! Thanks
>> > again !!
>> >
>> > Sorry for my english
>> >
>> > On Fri, Nov 18, 2011 at 4:17 PM, Carlos Manuel Trepeu Pupo
>> >  wrote:
>> > > Thanks a lot, I gonna make that script to refresh the list. You´ve
>> > > been lot of helpful.
>> > >
>> > > On Fri, Nov 18, 2011 at 3:39 PM, Leonardo Rodrigues
>> > >  wrote:
>> > >>
>> > >> i dont know if this is valid for TOR ... but at least Ultrasurf,
>> > >> which i have analized a bit further, encapsulates traffic over
>> > >> squid always using CONNECT method and connecting to an IP address.
>> > >> It's basically different from normal HTTPS traffic, which also uses
>> > >> CONNECT method but almost always (i have found 2-3 exceptions in some
>> years) connects to a FQDN.
>> > >>
>> > >> So, at least with Ultrasurf, i could handle it over squid simply
>> > >> blocking CONNECT connections which tries to connect to an IP
>> > >> address instead of a FQDN.
>> > >>
>> > >> Of course, Ultrasurf (and i suppose TOR) tries to encapsulate
>> > >> traffic to the browser-configured proxy as last resort. If it finds
>> > >> an NAT-opened network, it will always tries to go direct instead of
>> > >> through the proxy. So, its mandatory that you do NOT have a
>> > >> NAT-opened network, specially on ports
>> > >> TCP/80 and TCP/443. If you have those ports opened with your NAT
>> > >> rules, than i really think you'll never get rid of those services,

Re: [squid-users] block TOR

2011-12-03 Thread Carlos Manuel Trepeu Pupo
Sorry for reopening an old post, but a few days ago I tried this
solution and, like magic, all traffic to the Tor network is
blocked, just by typing this:
acl tor dst "/etc/squid3/tor"
http_access deny tor
where /etc/squid3/tor is the file that I downloaded from the page you
people recommended to me!

Thanks a lot, this is something that a lot of admins I know are searching
for; you should put it somewhere easy to find! Thanks
again!!

Sorry for my English

On Fri, Nov 18, 2011 at 4:17 PM, Carlos Manuel Trepeu Pupo
 wrote:
> Thanks a lot, I gonna make that script to refresh the list. You´ve
> been lot of helpful.
>
> On Fri, Nov 18, 2011 at 3:39 PM, Leonardo Rodrigues
>  wrote:
>>
>>    i dont know if this is valid for TOR ... but at least Ultrasurf, which i
>> have analized a bit further, encapsulates traffic over squid always using
>> CONNECT method and connecting to an IP address. It's basically different
>> from normal HTTPS traffic, which also uses CONNECT method but almost always
>> (i have found 2-3 exceptions in some years) connects to a FQDN.
>>
>>    So, at least with Ultrasurf, i could handle it over squid simply blocking
>> CONNECT connections which tries to connect to an IP address instead of a
>> FQDN.
>>
>>    Of course, Ultrasurf (and i suppose TOR) tries to encapsulate traffic to
>> the browser-configured proxy as last resort. If it finds an NAT-opened
>> network, it will always tries to go direct instead of through the proxy. So,
>> its mandatory that you do NOT have a NAT-opened network, specially on ports
>> TCP/80 and TCP/443. If you have those ports opened with your NAT rules, than
>> i really think you'll never get rid of those services, like TOR and
>> Ultrasurf.
>>
>>
>>
>>
>> Em 18/11/11 14:03, Carlos Manuel Trepeu Pupo escreveu:
>>>
>>> So, like I see, we (the admin) have no way to block it !!
>>>
>>> On Thu, Sep 29, 2011 at 3:30 PM, Jenny Lee  wrote:
>>>>
>>>>> Date: Thu, 29 Sep 2011 11:24:55 -0400
>>>>> From: charlie@gmail.com
>>>>> To: squid-users@squid-cache.org
>>>>> Subject: [squid-users] block TOR
>>>>>
>>>>> There is any way to block TOR with my Squid ?
>>>>
>>>> How do you get it working with tor in the first place?
>>>>
>>>> I really tried for one of our users. Even used Amos's custom squid with
>>>> SOCKS option but no go.
>>>>
>>>> Jenny
>>
>>
>> --
>>
>>
>>        Atenciosamente / Sincerily,
>>        Leonardo Rodrigues
>>        Solutti Tecnologia
>>        http://www.solutti.com.br
>>
>>        Minha armadilha de SPAM, NÃO mandem email
>>        gertru...@solutti.com.br
>>        My SPAMTRAP, do not email it
>>
>>
>>
>>
>>


Re: [squid-users] about include

2011-11-23 Thread Carlos Manuel Trepeu Pupo
But do all the newer versions support it, even 3.1.x?

On Wed, Nov 23, 2011 at 10:36 AM, Matus UHLAR - fantomas
 wrote:
> On 23.11.11 09:56, Carlos Manuel Trepeu Pupo wrote:
>>
>> Hello !! I want to know if Squid Cache: Version 3.0.STABLE1 permits
>> "include". I tried to use it but tell me not recognized . Why don't
>> use this option in all versions? This could be helpful to organize the
>> squid.conf in many single files with the parameter that we never or
>> almost never touch. Sorry about my english !!
>
> Try upgrading to newer version, that
> - is supported
> - has less bugs
> - supports include.
>
> --
> Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
> Warning: I wish NOT to receive e-mail advertising to this address.
> Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
> Save the whales. Collect the whole set.
>


[squid-users] about include

2011-11-23 Thread Carlos Manuel Trepeu Pupo
Hello!! I want to know if Squid Cache: Version 3.0.STABLE1 permits
"include". I tried to use it but it tells me the directive is not recognized.
Why isn't this option available in all versions? It would be helpful for
organizing squid.conf into several separate files holding the parameters
that we never, or almost never, touch. Sorry about my English!!
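
For illustration, on a release that does support the directive the split
could look like this (file names are arbitrary):

# squid.conf
include /etc/squid/conf.d/acls.conf
include /etc/squid/conf.d/delay_pools.conf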


[squid-users] about SSL client

2011-11-21 Thread Carlos Manuel Trepeu Pupo
Can I make an encrypted connection between my clients and my Squid
server? How can I do this, and what version do I need?

Thanks
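
For what it is worth, the usual ingredient on the Squid side is an
https_port listener, roughly like this (a sketch only; it assumes a Squid
build with SSL support, and the certificate paths are placeholders). The
clients then need a browser able to talk TLS to the proxy:

https_port 3129 cert=/etc/squid/proxy.crt key=/etc/squid/proxy.key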


Re: [squid-users] block TOR

2011-11-18 Thread Carlos Manuel Trepeu Pupo
Thanks a lot, I am going to make that script to refresh the list. You've
been very helpful.
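
Such a refresh job could be as simple as this sketch (the download URL is a
placeholder; point it at whatever exit-node list you use, for example the
one published at the site mentioned below, and make sure it ends up as one
IP per line in /etc/squid3/tor):

#!/bin/bash
# Refresh the Tor node list used by the "tor" dst acl, e.g. daily from cron.
LIST_URL="http://example.invalid/tor_exit_nodes.txt"   # placeholder
TMP=`mktemp`
if wget -q -O "$TMP" "$LIST_URL" && [ -s "$TMP" ]; then
    mv "$TMP" /etc/squid3/tor
    squid3 -k reconfigure      # reload the acl file
else
    rm -f "$TMP"
fi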

On Fri, Nov 18, 2011 at 3:39 PM, Leonardo Rodrigues
 wrote:
>
>    i dont know if this is valid for TOR ... but at least Ultrasurf, which i
> have analized a bit further, encapsulates traffic over squid always using
> CONNECT method and connecting to an IP address. It's basically different
> from normal HTTPS traffic, which also uses CONNECT method but almost always
> (i have found 2-3 exceptions in some years) connects to a FQDN.
>
>    So, at least with Ultrasurf, i could handle it over squid simply blocking
> CONNECT connections which tries to connect to an IP address instead of a
> FQDN.
>
>    Of course, Ultrasurf (and i suppose TOR) tries to encapsulate traffic to
> the browser-configured proxy as last resort. If it finds an NAT-opened
> network, it will always tries to go direct instead of through the proxy. So,
> its mandatory that you do NOT have a NAT-opened network, specially on ports
> TCP/80 and TCP/443. If you have those ports opened with your NAT rules, than
> i really think you'll never get rid of those services, like TOR and
> Ultrasurf.
>
>
>
>
> Em 18/11/11 14:03, Carlos Manuel Trepeu Pupo escreveu:
>>
>> So, like I see, we (the admin) have no way to block it !!
>>
>> On Thu, Sep 29, 2011 at 3:30 PM, Jenny Lee  wrote:
>>>
>>>> Date: Thu, 29 Sep 2011 11:24:55 -0400
>>>> From: charlie@gmail.com
>>>> To: squid-users@squid-cache.org
>>>> Subject: [squid-users] block TOR
>>>>
>>>> There is any way to block TOR with my Squid ?
>>>
>>> How do you get it working with tor in the first place?
>>>
>>> I really tried for one of our users. Even used Amos's custom squid with
>>> SOCKS option but no go.
>>>
>>> Jenny
>
>
> --
>
>
>        Atenciosamente / Sincerily,
>        Leonardo Rodrigues
>        Solutti Tecnologia
>        http://www.solutti.com.br
>
>        Minha armadilha de SPAM, NÃO mandem email
>        gertru...@solutti.com.br
>        My SPAMTRAP, do not email it
>
>
>
>
>


Re: [squid-users] block TOR

2011-11-18 Thread Carlos Manuel Trepeu Pupo
But doesn't this list change frequently?

On Fri, Nov 18, 2011 at 11:38 AM, Andreas Müller  wrote:
> Hello,
>
> here you'll find a list of all tor nodes. It should be easy to block them.
>
> <http://torstatus.blutmagie.de/>
>
> Andreas
>
> -Ursprüngliche Nachricht-
> Von: Carlos Manuel Trepeu Pupo [mailto:charlie@gmail.com]
> Gesendet: Freitag, 18. November 2011 17:03
> An: Jenny Lee
> Cc: squid-users@squid-cache.org
> Betreff: Re: [squid-users] block TOR
>
> So, like I see, we (the admin) have no way to block it !!
>
> On Thu, Sep 29, 2011 at 3:30 PM, Jenny Lee  wrote:
>>
>>
>>> Date: Thu, 29 Sep 2011 11:24:55 -0400
>>> From: charlie@gmail.com
>>> To: squid-users@squid-cache.org
>>> Subject: [squid-users] block TOR
>>>
>>> There is any way to block TOR with my Squid ?
>>
>> How do you get it working with tor in the first place?
>>
>> I really tried for one of our users. Even used Amos's custom squid with
> SOCKS option but no go.
>>
>> Jenny
>
>
>


Re: [squid-users] block TOR

2011-11-18 Thread Carlos Manuel Trepeu Pupo
So, as I see it, we (the admins) have no way to block it!!

On Thu, Sep 29, 2011 at 3:30 PM, Jenny Lee  wrote:
>
>
>> Date: Thu, 29 Sep 2011 11:24:55 -0400
>> From: charlie@gmail.com
>> To: squid-users@squid-cache.org
>> Subject: [squid-users] block TOR
>>
>> There is any way to block TOR with my Squid ?
>
> How do you get it working with tor in the first place?
>
> I really tried for one of our users. Even used Amos's custom squid with SOCKS 
> option but no go.
>
> Jenny


[squid-users] Re: Any way to cache 302 responses?

2011-10-06 Thread Manuel
Great. Your solution is working on Squid 2.6 with 302 responses. Thanks!

--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Any-way-to-cache-302-responses-tp3877286p3880539.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Any way to cache 302 responses?

2011-10-06 Thread Manuel
Hello,

Is there any way to cache 302 responses with Squid?

(We are running a reverse proxy with Squid 2.6)

Thanks in advance

--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Any-way-to-cache-302-responses-tp3877286p3877286.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: gzip with Squird working fine with our site but not with our vbulletin forum. Any advice?

2011-10-05 Thread Manuel
Points 1, 2 and 3 should not be a problem, I guess. Only guest users get
content from the cache and only their requests are cached (this is done by
checking a cookie). Logged-in users do not get content from the cache, nor
are their requests cached either.

Your last point seems to make much more sense for this case.
Thanks for the info once again, Amos.

--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/gzip-with-Squird-working-fine-with-our-site-but-not-with-our-vbulletin-forum-Any-advice-tp3862015p3873925.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: gzip with Squird working fine with our site but not with our vbulletin forum. Any advice?

2011-10-03 Thread Manuel
Any idea of what the problem can be, or at least any advice on what to try or
how to find the problem?

BTW it is Squid 2.6.STABLE21 with override-expire ignore-reload ignore-private

--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/gzip-with-Squird-working-fine-with-our-site-but-not-with-our-vbulletin-forum-Any-advice-tp3862015p3869631.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] gzip with Squird working fine with our site but not with our vbulletin forum. Any advice?

2011-09-30 Thread Manuel
Hi

We have Squid on the main site and it is delivering content gzipped
perfectly, but on the forum, when we have HITs (always guests), a lot of the
time (maybe most of it) the content is delivered without the gzip header.
The webserver is lighttpd and gzip is configured in the vBulletin settings (not
in lighttpd) https://www.vbulletin.com/docs/html/vboptions_group_http . If
we request the content directly from the webserver without Squid, it is
delivered gzipped perfectly. We have added this code to squid.conf but it is
still not working properly:

acl apacheandlighttpd rep_header Server ^(lighttpd|Apache)
broken_vary_encoding allow apacheandlighttpd


On the main site we have apache with mod_deflate and this code in Squid.conf
and it is working perfecly:
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache

Any idea on what's going on, or any recommendations on how to proceed to find
the problem?

--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/gzip-with-Squird-working-fine-with-our-site-but-not-with-our-vbulletin-forum-Any-advice-tp3862015p3862015.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: 301 redirection with Squid based on URL (is Squirm the fastest way?)

2011-09-30 Thread Manuel
Forget about the previous code, there was some errors. Something like this is
my idea to deal with the redirections:

cache_peer 172.20.1.3 parent 80 0 no-query no-digest originserver
name=mainweb
acl maindomain dstdomain www.my.domain
cache_peer_access mainweb allow maindomain
cache_peer_access mainweb deny all

cache_peer 172.20.1.4 parent 80 0 no-query no-digest originserver
name=allredirects
acl otherdomains dstdomain !www.my.domain
cache_peer_access allredirects allow otherdomains 
cache_peer_access allredirects deny all

And I will use apache with mod_rewrite in that cache_peer in order to avoid
caching by the browsers:
RewriteRule ^/(.*)$ http://www.my.domain/$1 [R=301,L,E=nocache:1]


--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/301-redirection-with-Squid-based-on-URL-is-Squirm-the-fastest-way-tp3815289p3861963.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: 301 redirection with Squid based on URL (is Squirm the fastest way?)

2011-09-30 Thread Manuel
Thank you for your answer. In the end I do not like Squirm for this case since I
need the page not to be cached. Therefore my idea is to use a specific
cache_peer for any domain different from the main one in order to (1st)
point them all to the main one and (2nd) avoid the use of an external
redirector. Something like this:

cache_peer 172.20.1.3 parent 80 0 no-query no-digest originserver
name=mainweb
acl maindomain dstdomain www.my.domain
cache_peer_access mainweb allow maindomain

cache_peer 172.20.1.4 parent 80 0 no-query no-digest originserver
name=allredirects
acl maindomain dstdomain !www.my.domain
cache_peer_access allredirects allow maindomain

And I will use apache with mod_rewrite in that cache_peer in order to avoid
caching by the browsers:
RewriteRule ^/(.*)$ http://www.my.domain/$1 [R=301,L,E=nocache:1]

This should work, don't you think?

--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/301-redirection-with-Squid-based-on-URL-is-Squirm-the-fastest-way-tp3815289p3861956.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] block TOR

2011-09-29 Thread Carlos Manuel Trepeu Pupo
Is there any way to block TOR with my Squid?


[squid-users] 301 redirection with Squid based on URL (is Squirm the fastest way?)

2011-09-15 Thread Manuel
I am using Squid 2.6 as a reverse proxy. I am receiving all the traffic from
several domain names (.com .org etc.) but the website is always the same. I
would like to have only one domain (especially for SEO reasons) so I would
like Squid to redirect those domains with a 301 Moved Permanently HTTP
header to the main one. Note that I also want to redirect internal pages.
What is the best way (taking into consideration performance and speed) to do
so? Squirm?

--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/301-redirection-with-Squid-based-on-URL-is-Squirm-the-fastest-way-tp3815289p3815289.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Modify the HTML by squid be return to the visitor

2011-08-25 Thread Carlos Manuel Trepeu Pupo
On Thu, Aug 25, 2011 at 1:25 AM, Amos Jeffries  wrote:
> On 25/08/11 15:14, 铀煌林 wrote:
>>
>> I am running the squid on my proxy server and provide proxy service to
>> visitors.
>> I want to add some HTML codes such as "Hi [USERNAME], you are
>> useing my proxy" at the first line inside the  tag. And
>> return this to my visitors. So they can not only see the page they
>> want to, but also my notification.
>> How could squid do that?
>
> This is a very, very bad idea. Your customers will NOT enjoy it.
>
> Before you go any further with this idea. Here are the results from other
> peoples attempts to alter content:
>  * Rogers http://lauren.vortex.com/archive/000349.html
>  * http://davefleet.com/2007/12/canadian-isp-rogers-hijacking-web-pages/
>  * T-Mobile
> http://www.mysociety.org/2011/08/11/mobile-operators-breaking-content/
>  * Phorm
> http://www.guardian.co.uk/media/pda/2008/apr/16/moreonispshijackingourweb
>  *
> http://serverfault.com/questions/298277/add-frame-window-to-all-websites-for-users-on-network
>
> Please consider the more socially acceptable alternative of a splash/welcome
> page instead. Keeping in mind that even this portal approach is only
> acceptable for free or "cheap" services. If your customers are paying for
> access, that is what they are wanting. Not fancy gimicks that interfere with
> their content.

How can I make my MAN show a splash/welcome page?
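
The pattern most often pointed to for a splash page is a session helper
combined with deny_info; a rough sketch only, since the helper name, path
and option values differ between Squid versions and are assumptions here:

external_acl_type session ttl=60 negative_ttl=0 %SRC /usr/lib/squid3/ext_session_acl
acl existing_session external session
http_access deny !existing_session
deny_info http://splash.example.com/ existing_session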

>
>
>> I do some search, but find nothing. Thank a lot for any reply.
>
> Lookup web page and HTML hijacking methods. ICAP, eCAP.
>
> Then be prepared to risk legal action by the page copyright owners and
> loosing your customers.
> http://en.wikipedia.org/wiki/Network_neutrality#Legal_situation
>
> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE9 or 3.1.14
>  Beta testers wanted for 3.2.0.10
>


Re: [squid-users] about the cache and CARP

2011-08-24 Thread Carlos Manuel Trepeu Pupo
On Tue, Aug 23, 2011 at 9:01 AM, Amos Jeffries  wrote:
> On 24/08/11 00:47, Carlos Manuel Trepeu Pupo wrote:
>>
>> 2011/8/23 Amos Jeffries:
>>>
>>> On 23/08/11 21:37, Matus UHLAR - fantomas wrote:
>>>>
>>>> On 16.08.11 16:54, Carlos Manuel Trepeu Pupo wrote:
>>>>>
>>>>> I want to make Common Address Redundancy Protocol or CARP with two
>>>>> squid 3.0 STABLE10 that I have, but here I found this question:
>>>>
>>>> the CARP that squid supports is the "Cache Array Routing Protocol"
>>>> http://en.wikipedia.org/wiki/Cache_Array_Routing_Protocol
>>>>
>>>> - this is something different than "Common Address Redundancy Protocol"
>>>> http://en.wikipedia.org/wiki/Common_Address_Redundancy_Protocol
>>>
>>> Well, technically Squid supports both. Though we generally don't use the
>>> term CARP to talk about the OS addressing algorithms. HA, LVS or NonStop
>>> are
>>> usually mentioned directly.
>>
>> Thanks for the tips, from now I will be careful with the term.
>>
>>>
>>>>
>>>>> If the main Squid with 40 GB of cache shutdown for any reason, then
>>>>> the 2nd squid will start up but without any cache.
>>>>>
>>>>> There is any way to synchronize the both cache, so when this happen
>>>>> the 2nd one start with all the cache ?
>>>>
>>>> You would need something that would synchronize squid's caches,
>>>> otherwise it would eat two times the bandwidth.
>>>
>>> Seconded.
>>>
>>> If the second Squid is not running until the event the cache can be
>>> safely
>>> mirrored. Though that method will cause a slow DIRTY startup rather than
>>> a
>>> fast not-swap. On 40GB it could be very slow, and maybe worse than an
>>> empty
>>> cache.
>>>
>>> NP: the traffic spike from an empty cache decreases in exponential
>>> proportion to the hit ratio of the traffic. From a spike peak equal to
>>> the
>>> internal bandwidth rate.
>>>
>>> PS.  I have a feeling you might have some graphs to demonstrate that
>>> spike
>>> effect Carlos. Would you be able to share the images and numeric details?
>>> I'm looking for details to update the 2002 documentation.
>>
>> Thanks to everyone, you guys always helping me !! Now I have a few
>> problem with Debian and LVM, until I solve it I can't do it anything.
>> But here another idea:
>>
>> I put the two squid in cascade and the Master (HA) make the petitions
>> first to the second squid and if it down go directly to Internet. The
>> both squid will cache all the contents, so will be duplicate the
>> contents, but if someone go down, the other one will respond with all
>> the content cached.
>>
>> It look like this:
>>
>> client --->  Server1 --->  Server2 --->  Internet (server1 and server2
>> will cache all)
>> Server1 down
>> client --->  Server2 --->  Internet (server2 will cache all)
>> Server2 down
>> client --->  Server1 --->  Internet (server2 will cache all)
>>
>> What do you think ?
>
> Looks good.
>
> Check your cache_peer directives connect-fail-limit=N values. It affects
> whether and how much breakage a clients sees when Server2 goes down. If that
> option is available on your Server1 squid, you want it set relatively low,
> but not so low that random failures disconnect them.
>
> background-ping option is also useful for recovery once Server2 comes back
> up.
>
> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE9 or 3.1.14
>  Beta testers wanted for 3.2.0.10
>

Everything is working fine!! For now they are still in LAB mode,
but with excellent results in tests. Now I would like to improve the
HA mechanism of my servers. Any other ideas on how to improve the work
I have done so far? (I just run squid with UCARP on Debian.)
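
For reference, the Server1 side of that cascade could be sketched like this
(the peer address is a placeholder, and the two options are the ones Amos
mentioned, so they only apply if your Squid version supports them):

cache_peer 10.0.0.2 parent 3128 3130 name=server2 connect-fail-limit=5 background-ping
prefer_direct off        # try server2 first, go DIRECT only when it is unreachable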


[squid-users] about ICP

2011-08-24 Thread Carlos Manuel Trepeu Pupo
What parameters do I need to configure in cache_peer so that this squid caches
all the requests, even when the parent already has them?


Re: [squid-users] about the cache and CARP

2011-08-23 Thread Carlos Manuel Trepeu Pupo
2011/8/23 Amos Jeffries :
> On 23/08/11 21:37, Matus UHLAR - fantomas wrote:
>>
>> On 16.08.11 16:54, Carlos Manuel Trepeu Pupo wrote:
>>>
>>> I want to make Common Address Redundancy Protocol or CARP with two
>>> squid 3.0 STABLE10 that I have, but here I found this question:
>>
>> the CARP that squid supports is the "Cache Array Routing Protocol"
>> http://en.wikipedia.org/wiki/Cache_Array_Routing_Protocol
>>
>> - this is something different than "Common Address Redundancy Protocol"
>> http://en.wikipedia.org/wiki/Common_Address_Redundancy_Protocol
>
> Well, technically Squid supports both. Though we generally don't use the
> term CARP to talk about the OS addressing algorithms. HA, LVS or NonStop are
> usually mentioned directly.

Thanks for the tips; from now on I will be careful with the term.

>
>>
>>> If the main Squid with 40 GB of cache shutdown for any reason, then
>>> the 2nd squid will start up but without any cache.
>>>
>>> There is any way to synchronize the both cache, so when this happen
>>> the 2nd one start with all the cache ?
>>
>> You would need something that would synchronize squid's caches,
>> otherwise it would eat two times the bandwidth.
>
> Seconded.
>
> If the second Squid is not running until the event the cache can be safely
> mirrored. Though that method will cause a slow DIRTY startup rather than a
> fast not-swap. On 40GB it could be very slow, and maybe worse than an empty
> cache.
>
> NP: the traffic spike from an empty cache decreases in exponential
> proportion to the hit ratio of the traffic. From a spike peak equal to the
> internal bandwidth rate.
>
> PS.  I have a feeling you might have some graphs to demonstrate that spike
> effect Carlos. Would you be able to share the images and numeric details?
> I'm looking for details to update the 2002 documentation.

Thanks to everyone, you guys are always helping me!! Now I have a few
problems with Debian and LVM; until I solve them I can't do anything.
But here is another idea:

I put the two squids in cascade and the master (HA) makes the requests
first to the second squid, and if that one is down it goes directly to the
Internet. Both squids will cache all the content, so the content will be
duplicated, but if either one goes down, the other one will respond with all
the cached content.

It look like this:

client ---> Server1 ---> Server2 ---> Internet (server1 and server2
will cache all)
Server1 down
client ---> Server2 ---> Internet (server2 will cache all)
Server2 down
client ---> Server1 ---> Internet (server1 will cache all)

What do you think ?

Regards.

>
> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE9 or 3.1.14
>  Beta testers wanted for 3.2.0.10
>


Re: [squid-users] about the cache and CARP

2011-08-17 Thread Carlos Manuel Trepeu Pupo
Thanks for your reply. I understand what you told me, but here is another idea:

If I read correctly, with CARP I can define a master PC, so all the traffic
goes to that machine, and if it goes down then the other one replies, no? Well,
I can make the master always ask the second machine and cache
all the requests. Then I have all the cache duplicated, but if the
master goes down, the second one will have all the cache.

Is that right?

Regards
Carlos Manuel

2011/8/16 Henrik Nordström :
> tis 2011-08-16 klockan 16:54 -0400 skrev Carlos Manuel Trepeu Pupo:
>> I want to make Common Address Redundancy Protocol or CARP with two
>> squid 3.0 STABLE10 that I have, but here I found this question:
>>
>> If the main Squid with 40 GB of cache shutdown for any reason, then
>> the 2nd squid will start up but without any cache.
>
> Why will the second Squid start up without any cache?
>
> If you are using CARP then cache is sort of distributed over the
> available caches, and the amount of cache you loose is proportional to
> the amount of cache space that goes offline.
>
> However, CARP routing in Squid-3.0 only applies when you have multiple
> levels of caches. Still doable with just two servers but you then need
> two Squid instances per server.
>
> * Frontend Squids, doing in-memory cache and CARP routing to Cache
> Squids
> * Cache Squids, doing disk caching
>
> When request routing is done 100% CARP then you loose 50% of the cache
> should one of the two cache servers go down.
>
> There is also possible hybrid models where the cache gets more
> duplicated among the cache servers, but not sure 3.0 can handle those.
>
> Regards
> Henrik
>
>
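
As an editorial aside, a minimal sketch of the two-level layout Henrik describes, with a frontend Squid CARP-routing requests to two cache Squids; hostnames, ports, and directive choices are assumptions, not from the thread:

# frontend squid.conf: spread requests across the cache layer with CARP
cache_peer cache1.example.com parent 3129 0 carp no-query
cache_peer cache2.example.com parent 3129 0 carp no-query
never_direct allow all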


[squid-users] about the cache and CARP

2011-08-16 Thread Carlos Manuel Trepeu Pupo
I want to set up Common Address Redundancy Protocol (CARP) with the two
squid 3.0 STABLE10 instances that I have, but I ran into this question:

If the main Squid with 40 GB of cache shuts down for any reason, then
the 2nd Squid will start up, but without any cache.

Is there any way to synchronize both caches, so that when this happens
the 2nd one starts with all the cache?

Thanks again for all your help !!!


[squid-users] delay_class 3 ???

2011-08-02 Thread Carlos Manuel Trepeu Pupo
Hi everyone, thanks again for all the help !

I have many subnets:

10.10.1.0/24
10.10.2.0/24
10.10.3.0/24
10.10.4.0/24
.
.
.
10.10.200.0/24

and I want to control the bandwidth of each one. I think this could be:

acl clients src "/etc/squid3/net"   # In this file I have all the subnets

delay_pools 1
delay_class 1 3
delay_parameters 1 491520/491520 16384/16384 -1/-1
delay_access 1 allow clients

Here I just want to restrict each /24 subnet (I don't care about the
individual hosts) to 16 Kbps, not all of them together to 16 Kbps.
Does this config do that?
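
As a reference point, a class 3 pool takes its delay_parameters in the order aggregate, per-network (/24), per-host, all in bytes/second. A minimal sketch (illustrative, not a confirmed answer) that caps each /24 at 16384 bytes/second and leaves the aggregate and per-host buckets unlimited:

acl clients src "/etc/squid3/net"
delay_pools 1
delay_class 1 3
delay_parameters 1 -1/-1 16384/16384 -1/-1
delay_access 1 allow clients
delay_access 1 deny all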


Re: [squid-users] strange things happend

2011-08-02 Thread Carlos Manuel Trepeu Pupo
I have this config:

delay_pools 1
delay_class 1 1
delay_parameters 1 8096/8096
delay_access 1 allow all

As I described, my Squid delivers to the users at 8 Kbps, but when
one user uses a download manager, Squid consumes all the bandwidth
while still delivering at 8 Kbps. What could it be?


2011/8/2 Amos Jeffries :
> On 31/07/11 02:33, Carlos Manuel Trepeu Pupo wrote:
>>
>> After many days, and after many fights with my users, now I can see my
>> delay_pools work fine (sorry to all the people who tried help me). But
>> now I have other problem that describe here:
>>
>> I see in my Firewall-Router (Kerio Control 7) that my squid 3.0
>> STABLE1 are downloading at 128 kbps (that's all my bandwidth) and in
>> real-time I see a lot of simultaneous connection to a site where the
>> users are downloading. Then I thought my delay_pools don't work, but
>> after many test I check this users, and I can see their speed were the
>> speed configured in my squid, so, they have multiple connection but
>> have the right speed, however my proxy are consuming all the
>> bandwidth.
>>
>> Why this could be happen ?
>>
>> Thanks again !!
>
> What config do you have now?
>
> IIRC last config let _each_ user have 120KBps bandwidth.
>
> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE9 or 3.1.14
>  Beta testers wanted for 3.2.0.10
>


[squid-users] strange things happend

2011-08-01 Thread Carlos Manuel Trepeu Pupo
After many days, and after many fights with my users, I can now see that
my delay_pools work fine (sorry to all the people who tried to help me).
But now I have another problem, which I describe here:

I see in my firewall/router (Kerio Control 7) that my Squid 3.0
STABLE1 is downloading at 128 kbps (that's all my bandwidth), and in
real time I see a lot of simultaneous connections to a site the users
are downloading from. At first I thought my delay_pools didn't work,
but after many tests I checked those users and I can see their speed is
the speed configured in my Squid. So they have multiple connections but
the right speed, yet my proxy is consuming all the bandwidth.

Why could this be happening?

Thanks again !!


Re: [squid-users] delay_pool

2011-07-29 Thread Carlos Manuel Trepeu Pupo
Well, here is the scenario:

few servers 10.10.10.0/24
4 PC's admin 10.10.10.0/24
1 client across a Kerio 10.10.10.52
bandwidth : 120 Kb/s

I just want to limit the Kerio client, but nothing happens !! A few
weeks ago someone told me about trust in XFF (X-Forwarded-For), but I
don't understand it.

I thought that with this delay_access I would control everybody, but
no !!! HELP !!!

delay_pools 1

delay_class 1 1
delay_parameters 1 1024/1024
delay_access 1 allow all

This delay pool is just a test, and it is not working !!!
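
As an editorial note: a class 1 pool is a single bucket shared by everything it matches, so it cannot give each client its own cap. A minimal sketch of a per-client limit using a class 2 pool (subnet and rates are illustrative assumptions):

acl lan src 10.10.10.0/24
delay_pools 1
delay_class 1 2
# aggregate 122880 bytes/s, each client IP capped at 8192 bytes/s
delay_parameters 1 122880/122880 8192/8192
delay_access 1 allow lan
delay_access 1 deny all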



2011/7/29 Christian Tosta :
> acl BadDownloads url_regex -i "/etc/squid/rules/bad_downloads.url_regex"
> acl BigDownloads rep_header Content-Length ^[3-9]?+[0-9]{0,7}$ # Match sizes
> of 30 MB to infinity
> ### Bandwidth Control
> #
> delay_initial_bucket_level 100
> delay_pools 3
> delay_class 1 2
> delay_parameters 1 -1/-1 -1/-1
> delay_access 1 allow Servers # Free bandwidth for servers
> delay_access 1 deny all
> delay_class 2 2
> delay_parameters 2 12/12 4000/8000 # Big Downloads at 32-64kbps per
> IP
> delay_access 2 allow Downloads
> delay_access 2 allow BigDownloads
> delay_access 2 deny all
> delay_class 3 2
> delay_parameters 3 12/12 2/28000 # Other Access at 160-224kbps
> per IP
> delay_access 3 allow IpsIntranet
> delay_access 3 deny all
>
> 2011/7/29 Carlos Manuel Trepeu Pupo 
>>
>> In my squid 3.0 STABLE1 I have the following configuration:
>>
>> delay_pools 1
>>
>> delay_class 1 1
>> delay_parameters 1 1024/1024
>> delay_access 1 allow all
>>
>> But one user are downloading at 120 Kb/s
>>
>> Why it's that ?
>
>


[squid-users] delay_pool

2011-07-29 Thread Carlos Manuel Trepeu Pupo
In my squid 3.0 STABLE1 I have the following configuration:

delay_pools 1

delay_class 1 1
delay_parameters 1 1024/1024
delay_access 1 allow all

But one user is downloading at 120 Kb/s.

Why is that?


Re: [squid-users] about delay_pools

2011-07-11 Thread Carlos Manuel Trepeu Pupo
2011/7/11 Amos Jeffries 
>
> On 09/07/11 01:40, Carlos Manuel Trepeu Pupo wrote:
>>
>> 2011/7/8 Amos Jeffries:
>>>
>>> On 08/07/11 02:36, Carlos Manuel Trepeu Pupo wrote:
>>>>
>>>> Hi! I'm using squid 3.0 STABLE1. Here are my delay_pool in the squid.conf
>>>>
>>>> acl enterprise src 10.10.10.2/32
>>>> acl bad_guys src 10.10.10.52/32
>>>> acl dsl_bandwidth src 10.10.48.48/32
>>>>
>>>> delay_pools 3
>>>>
>>>> delay_class 1 1
>>>> delay_parameters 1 25600/25600
>>>> delay_access 1 allow bad_guys
>>>> delay_access 1 deny all
>>>>
>>>> delay_class 2 1
>>>> delay_parameters 2 65536/65536
>>>> delay_access 2 allow enterprise
>>>> delay_access 2 deny all
>>>>
>>>> delay_class 3 1
>>>> delay_parameters 3 10240/10240
>>>> delay_access 3 allow dsl_bandwidth
>>>> delay_access 3 deny all
>>>>
>>>>
>>>> I think everything was right, but since yesterday I see "bad_guys"
>>>> downloading from youtube using all my bandwidth !! I have a channel of
>>>> 128 Kb in technology ATM. So I hope you can help me !!!
>>>
>>> step 1) please verify that a recent release still has this problem.
>>> 3.0.STABLE1 was obsoleted years ago.
>>>
>>> step 2) check for things like follow_x_forwarded_for allowing them to fake
>>> their source address. 3.0 series did not check this properly and allows
>>> people to trivially bypass any IP-based security if you trust that header.
>>>
>>> Amos
>>>
>> I
>>
>> If I deny "bad_guys" they can't surf. The user it's a client who have
>> a Kerio Firewall-Proxy with 10 users. I make the test to visit them
>> and stop his service, then the bandwidth go down, so I check they are
>> who violate the delay_pool. Now, the question is why this happen?
>
> I just gave you several possible answers to that.
>
> Considering that you only listed 10.10.10.52 and Kerio pass on 
> X-Forwarded-For headers, the comment I made about follow_x_forwarded_for 
> becomes a very important thing to know. Trusting XFF from their Kerio means 
> firstly that "src 10.10.10.52" does not match and secondly that your delay 
> pools, if it did match, gives each of their 10 internal machines a different 
> pool.

Sorry, but I don't understand how I would be giving each of their 10
internal machines a different pool. I read the documentation about
follow_x_forwarded_for. I would appreciate it if you could explain it
to me better. Thanks
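
As an editorial note on the point Amos raises: whether the Kerio box's internal clients are treated as separate sources depends on whether Squid trusts their X-Forwarded-For header. A minimal sketch (an assumption, not a confirmed fix, and only relevant if the follow_x_forwarded_for feature is built in) that keeps ACL matching keyed on the Kerio box itself:

# do not trust X-Forwarded-For, so 10.10.10.52 stays the client address
follow_x_forwarded_for deny all
acl bad_guys src 10.10.10.52/32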

>
>> (Every time this happen I check the destination domain it's youtube
>> and they are downloading from there.)
>
> Another possibility is that it is in fact an "upload" that you can see. 
> delay_pools in 3.0 only work on bytes fetched _from_ the server. Outgoing 
> bytes are not limited.
>
> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE9 or 3.1.14
>  Beta testers wanted for 3.2.0.9


Re: [squid-users] about delay_pools

2011-07-08 Thread Carlos Manuel Trepeu Pupo
2011/7/8 Amos Jeffries :
> On 08/07/11 02:36, Carlos Manuel Trepeu Pupo wrote:
>>
>> Hi! I'm using squid 3.0 STABLE1. Here are my delay_pool in the squid.conf
>>
>> acl enterprise src 10.10.10.2/32
>> acl bad_guys src 10.10.10.52/32
>> acl dsl_bandwidth src 10.10.48.48/32
>>
>> delay_pools 3
>>
>> delay_class 1 1
>> delay_parameters 1 25600/25600
>> delay_access 1 allow bad_guys
>> delay_access 1 deny all
>>
>> delay_class 2 1
>> delay_parameters 2 65536/65536
>> delay_access 2 allow enterprise
>> delay_access 2 deny all
>>
>> delay_class 3 1
>> delay_parameters 3 10240/10240
>> delay_access 3 allow dsl_bandwidth
>> delay_access 3 deny all
>>
>>
>> I think everything was right, but since yesterday I see "bad_guys"
>> downloading from youtube using all my bandwidth !! I have a channel of
>> 128 Kb in technology ATM. So I hope you can help me !!!
>
> step 1) please verify that a recent release still has this problem.
> 3.0.STABLE1 was obsoleted years ago.
>
> step 2) check for things like follow_x_forwarded_for allowing them to fake
> their source address. 3.0 series did not check this properly and allows
> people to trivially bypass any IP-based security if you trust that header.
>
> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE9 or 3.1.14
>  Beta testers wanted for 3.2.0.9
>

If I deny "bad_guys" they can't surf. The user it's a client who have
a Kerio Firewall-Proxy with 10 users. I make the test to visit them
and stop his service, then the bandwidth go down, so I check they are
who violate the delay_pool. Now, the question is why this happen?
(Every time this happen I check the destination domain it's youtube
and they are downloading from there.)


[squid-users] about delay_pools

2011-07-07 Thread Carlos Manuel Trepeu Pupo
Hi! I'm using squid 3.0 STABLE1. Here are my delay_pools from squid.conf:

acl enterprise src 10.10.10.2/32
acl bad_guys src 10.10.10.52/32
acl dsl_bandwidth src 10.10.48.48/32

delay_pools 3

delay_class 1 1
delay_parameters 1 25600/25600
delay_access 1 allow bad_guys
delay_access 1 deny all

delay_class 2 1
delay_parameters 2 65536/65536
delay_access 2 allow enterprise
delay_access 2 deny all

delay_class 3 1
delay_parameters 3 10240/10240
delay_access 3 allow dsl_bandwidth
delay_access 3 deny all


I thought everything was right, but since yesterday I see "bad_guys"
downloading from YouTube using all my bandwidth !! I have a 128 Kb
channel over ATM technology. So I hope you can help me !!!


[squid-users] ..::Troubleshooting advice::..

2011-06-08 Thread Manuel Rodriguez

Hi list.

We are going to work with an old Squid (I mean old because this Squid
was installed and administered by another person). It works with LDAP,
and I don't have any experience with LDAP authentication.

I was wondering if you can give me some advice on troubleshooting; any
advice will be appreciated.

Thanks in advance.

Regards.

Alfonso.



Hello Alfonso

Perhaps you have already found a little documentation about it on the net,
e.g.: http://wiki.squid-cache.org/ConfigExamples/Authenticate/Ldap

But I recommend you start by learning how to use the ldapsearch command
(see "man ldapsearch") and also by reading about the LDAP service.
(The ldapsearch command is provided by the openldap-client package in a
few GNU/Linux distributions.)

That could help you a lot, because you can retrieve information about
the user records in the LDAP database. Once you know which attributes
the user schema has, you can easily guess what Squid's auth helper is
asking about.
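
For concreteness, a minimal squid.conf sketch of LDAP basic authentication; the helper path, base DN, and server name are illustrative assumptions, not taken from this thread:

auth_param basic program /usr/lib/squid3/squid_ldap_auth -b "dc=example,dc=com" -f "uid=%s" -h ldap.example.com
auth_param basic children 5
auth_param basic realm Squid proxy
acl ldap_users proxy_auth REQUIRED
http_access allow ldap_users
http_access deny all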

Happy ldap searching

Regards

RODRIGUEZ Manuel


Re: [squid-users] using reverse squid to manage XMPP

2011-05-19 Thread Carlos Manuel Trepeu Pupo
Sorry, I already know that Squid isn't what I want, but do you know of
any XMPP relay? I searched and didn't find anything.

2011/5/17 Amos Jeffries :
> On Tue, 17 May 2011 11:24:50 -0400, Carlos Manuel Trepeu Pupo wrote:
>>
>> Hello, until now, everything that I question here have been solved, so
>> here I bring this new situation:
>>
>> Debian 6 64 bits, with squid 3.1.12.
>>
>> I have only one real IP with Kerio as firewall and in my private net
>> one reverse squid to publish my internal pages. I use Kerio because I
>> also have email and more services. So my clients wants to publish
>> their jabber to internet and I have the idea that the Squid could
>> route me the XMPP incoming traffic, because the outgoing traffic pass
>> throw the firewall with NAT.
>>
>> I have a rule that tell all the incoming traffic in XMPP ports go to
>> my squid at 3128 port, but nothing happens, even in the log of squid
>> do not appear nothing.
>>
>> I make a proof with my Jabber (Openfire) in/out throw Kerio and there
>> is no problem, so I'm missing some squid's configuration to do this,
>> or Squid it's not the solution to my trouble.
>>
>>
>> Can you help me?
>
> No, squid is an HTTP proxy. XMPP is a completely different protocol.
>
> Look for an XMPP relay.
>
> Amos
>


[squid-users] using reverse squid to manage XMPP

2011-05-17 Thread Carlos Manuel Trepeu Pupo
Hello. Until now, everything I have asked about here has been solved, so
here I bring this new situation:

Debian 6 64-bit, with squid 3.1.12.

I have only one real IP, with Kerio as the firewall, and in my private
net one reverse Squid to publish my internal pages. I use Kerio because
I also have email and other services. My clients want to publish their
Jabber server to the Internet, and my idea is that Squid could route
the incoming XMPP traffic, because the outgoing traffic passes through
the firewall with NAT.

I have a rule that sends all incoming traffic on the XMPP ports to my
Squid on port 3128, but nothing happens; nothing even appears in
Squid's log.

I did a test with my Jabber server (Openfire) in/out through Kerio and
there is no problem, so either I'm missing some Squid configuration to
do this, or Squid is not the solution to my problem.

Can you help me?


Re: [squid-users] Cache peer does not work

2011-05-14 Thread Carlos Manuel Trepeu Pupo
2011/5/14 Dr. Muhammad Masroor Ali :
> Yes, I am actually writing
> cache_peer 172.16.101.3     parent    3128  3130  default
>
> I was so exasperated that I did not type it correctly.
>
> Dr. Muhammad Masroor Ali
> Professor
> Department of Computer Science and Engineering
> Bangladesh University of Engineering and Technology
> Dhaka-1000, Bangladesh
>
> Phone: 880 2 966 5650 (PABX)
>
> In a world without walls and fences, who needs Windows and Gates?
>
>
>
>
>
> On Sat, May 14, 2011 at 10:23 PM, Hasanen AL-Bana  wrote:
>> cache_peer 127.0.0.1     parent    3128  3130  default
>> the above link points to the same server ! probably incorrect , you
>> must use your parent IP address instead (172.16.101.3)
>>
>> On Sat, May 14, 2011 at 7:17 PM, Dr. Muhammad Masroor Ali
>>  wrote:
>>>
>>> Dear All,
>>> I thought that this would have been straight forward.
>>>
>>> In an Ubuntu machine, I use proxy 172.16.101.3 as the proxy for
>>> browsing. This does not require any user name or password for access.
>>>
>>> I installed squid3 in this machine and set 127.0.0.1:3128 as proxy in
>>> the browser. Also in the squid.conf file I have put,
>>> cache_peer 127.0.0.1     parent    3128  3130  default
>>>
>>> Now when I try to browse, nothing happens. That is, the browser says
>>> connecting or some such, and after a very long time it fails to open
>>> the page.

You have to configure the parameter nonhierarchical_direct off.

Then Squid will ask the parent first before going directly to
fetch the page.
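
Putting the two pieces together, a minimal sketch of the relevant squid.conf lines for this setup; the prefer_direct line is an additional assumption, not mentioned in the thread:

cache_peer 172.16.101.3 parent 3128 3130 default
nonhierarchical_direct off
prefer_direct off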


>>>
>>> No error message has been found in the log files. I am really at a
>>> loss what to do.
>>>
>>> Could somebody please tell me what to do. Thanks in advance.
>>>
>>> Dr. Muhammad Masroor Ali
>>> Professor
>>> Department of Computer Science and Engineering
>>> Bangladesh University of Engineering and Technology
>>> Dhaka-1000, Bangladesh
>>>
>>> Phone: 880 2 966 5650 (PABX)
>>>
>>> In a world without walls and fences, who needs Windows and Gates?
>>
>


[squid-users] about chroot

2011-05-11 Thread Carlos Manuel Trepeu Pupo
I'm installing my Debian 6 right now, and next I will install Squid
3.1.12, so Amos, I suppose we are at peace, lol. I would like to
enhance my security with a chroot, but the information on the Internet
is sparse; I only see this in all the comments:

"if you use an HTTP port less than 1024 and try to reconfigure, you may
get an error saying that Squid can not open the port."

So I want to know whether the effort is really worth it, and how on
earth I will reconfigure Squid inside a chroot?
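
For reference, the squid.conf directive involved is chroot; a minimal sketch (the path is an assumption):

# Squid must start as root so it can chroot and then drop privileges
chroot /var/chroot/squid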

Thanks again !!!


[squid-users] partitioning Debian 6 for squid instalation

2011-05-11 Thread Carlos Manuel Trepeu Pupo
Hi everyone, I'm going to install the latest Squid STABLE version on
Debian 6 64-bit, so I would like to know the recommended hard disk
partitioning!


Re: [squid-users] you cache is running out of filedescriptors in ubuntu

2011-05-11 Thread Carlos Manuel Trepeu Pupo
2011/5/11 Amos Jeffries :
> On Tue, 10 May 2011 16:10:38 -0400, Carlos Manuel Trepeu Pupo wrote:
>>
>> Hi I have down all my work, I find some information to fix this but
>> tell me modify /etc/default/squid and I don't have this file, what
>> could I do? It's urgent  I have squid 3.0 STABLE1
>
> Create the file if missing. It is an optional user-override config file.
>
> For 3.0 you need to add the ./configure --with-filedescriptors=NUMBER
> option. Where NUMBER is something big enough not to die under your traffic
> load. You also need to run "ulimit -n NUMBER" before starting Squid every
> time.
>
>
> The FD overflow could also be *two* of those "fixed" bugs I warned you about
> the other day...
>
> 3.0 have issues with too many persistent connection FD being held. Which can
> overflow the FD limits on certain types of traffic behaviour.
>
> 3.0 and early 3.1 have issues with connection garbage objects being released
> very late in the transaction, which can waste FD.
>
> Amos
>

Thanks for everything. This week or next I will change to the most
recent STABLE version, and then I will solve all of these problems. Is
there somewhere I can find all the parameters for compiling Squid?
Thanks again !!


[squid-users] you cache is running out of filedescriptors in ubuntu

2011-05-10 Thread Carlos Manuel Trepeu Pupo
Hi, all my work is down. I found some information about fixing this,
but it tells me to modify /etc/default/squid and I don't have this
file. What can I do? It's urgent. I have squid 3.0 STABLE1.


Re: [squid-users] deny_info

2011-05-09 Thread Carlos Manuel Trepeu Pupo
2011/5/9 Amos Jeffries :
> On Mon, 9 May 2011 13:07:50 -0400, Carlos Manuel Trepeu Pupo wrote:
>>
>> Hi, I'm now using deny_info to personalize the error pages. I have
>> installed Squid 3.0 STABLE1 (I know it's an old version). Here is an
>
> So why for the sake of 6 *major* security vulnerabilities did you do that?
> http://www.squid-cache.org/Advisories

I'm making tests of all the new things I will implement, so when
everything works fine I'll make the change !!!
>
>> example of my squid.conf:
>>
>> acl ext url_regex -i \.exe$
>> acl ip src 192.168.10.10
>> acl max maxconn 1
>> http_access deny ip ext max
>> # I already create the page in the directory's errors pages.
>> deny_info ERR_EXT_PAGE max
>> http_access allow !maxconn
>>
>> The problem is that the page that show me it the default of denied and
>> not the mine. What's wrong and how could I fixed ?
>
> Are you sure its being denied by "deny ip ext max"?

yes that's the unique http_access that work with this acl.


I made a few tests and this is the result:

# THIS DOES NOT WORK
acl ext url_regex -i \.exe$
acl ip src 192.168.10.10
acl max maxconn 1
http_access deny ip ext max
# I already created the page in the errors directory.
deny_info ERR_EXT_PAGE max
http_access allow !max

# THIS WORKS
acl ext url_regex -i \.exe$
acl ip src 192.168.10.10
acl max maxconn 1
http_access deny max
# I already created the page in the errors directory.
deny_info ERR_EXT_PAGE max
http_access allow !max

The difference is that in the working case "http_access deny" has only
one argument, my ACL; but if I combine it with the others, the page I
created is not shown. Is there any way to solve that?

>
> Amos
>


[squid-users] deny_info

2011-05-09 Thread Carlos Manuel Trepeu Pupo
Hi, I'm now using deny_info to personalize the error pages. I have
installed Squid 3.0 STABLE1 (I know it's an old version). Here is an
example of my squid.conf:

acl ext url_regex -i \.exe$
acl ip src 192.168.10.10
acl max maxconn 1
http_access deny ip ext max
# I already created the page in the errors directory.
deny_info ERR_EXT_PAGE max
http_access allow !maxconn

The problem is that the page shown to me is the default denial page and
not mine. What's wrong and how can I fix it?


Re: [squid-users] modify the delay_pools at fly

2011-05-09 Thread Carlos Manuel Trepeu Pupo
2011/5/4 Amos Jeffries :
> On Wed, 4 May 2011 12:38:43 -0400, Carlos Manuel Trepeu Pupo wrote:
>>
>> 2011/5/4 Amos Jeffries:
>>>
>>> On 05/05/11 03:35, Carlos Manuel Trepeu Pupo wrote:
>>>>
>>>> I tried in previous post to change the established connection when the
>>>> time of the delay_pool change. Amos give me 3 solution and now I'm
>>>> trying with QoS, but I have this idea:
>>>>
>>>> If I have 2, 3 or the count of squid.conf that I could need, and with
>>>> one script I make squid3 -k reconfigure. That not finish any active
>>>> connection and apply the changes, what do you think?
>>>
>>> It is favoured by some. Has the slight side effect of "forgetting" the
>>> delay
>>> pool assigned on older Squid versions.
>>
>> What do you mean about "forget" the delay_pool?
>
> The reconfigure erases old delay pools config and re-creates it.
> As I recall the old code used to leave it at that, with the existing
> connections having no delay pool config set. That got fixed a year or two
> ago to re-calculate all existing requests delay pools after a configure.
> They may get a freshly filled pool suddenly, but stay limited overall.

Please, can you explain it to me better? My English played a trick on
me and I can't understand it at all. Thanks

>
>>>>
>>>> Remember that I have Ubuntu 10.04 with Squid 3 STABLE1. This night
>>>
>>> 10.04 and "3.0.STABLE1"? dude!
>>
>> lol I'm now deploying Debian 6, but I don't want to install squid
>> until I solved my problems.
>>
>>>
>>>> when my users gone I gonna try !! Tomorrow I tell you, but if someone
>>>> tried this, please, send the result, so i can use my time in QoS.
>>>
>>
>> Now I just tried the -k reconfigure, but something strange happen, so
>> I backup my squid.conf and in the new one I just put this delay_pool:
>> delay_pools 1
>> delay_class 1 1
>> delay_parameters 1 10240/10240
>> delay_access 1 allow all
>>
>> With this parameters the speed shouldn't be more than 10 KB, but I can
>> see in my firewall the proxy reaches speeds until 32 KB, I guess there
>> are just peaks, but if I have 100 clients, and all them make these
>> peaks, then my DSL will be saturated.
>
> I'd put that down to STABLE1. Try again with the newer version in Deb 6.
>
> Amos
>
>


Re: [squid-users] modify the delay_pools at fly

2011-05-04 Thread Carlos Manuel Trepeu Pupo
2011/5/4 Amos Jeffries :
> On 05/05/11 03:35, Carlos Manuel Trepeu Pupo wrote:
>>
>> I tried in previous post to change the established connection when the
>> time of the delay_pool change. Amos give me 3 solution and now I'm
>> trying with QoS, but I have this idea:
>>
>> If I have 2, 3 or the count of squid.conf that I could need, and with
>> one script I make squid3 -k reconfigure. That not finish any active
>> connection and apply the changes, what do you think?
>
> It is favoured by some. Has the slight side effect of "forgetting" the delay
> pool assigned on older Squid versions.

What do you mean about "forgetting" the delay_pool?

>
>>
>> Remember that I have Ubuntu 10.04 with Squid 3 STABLE1. This night
>
> 10.04 and "3.0.STABLE1"? dude!

lol I'm now deploying Debian 6, but I don't want to install squid
until I have solved my problems.

>
>> when my users gone I gonna try !! Tomorrow I tell you, but if someone
>> tried this, please, send the result, so i can use my time in QoS.
>
> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE9 or 3.1.12
>  Beta testers wanted for 3.2.0.7 and 3.1.12.1
>

Now I just tried the -k reconfigure, but something strange happened, so
I backed up my squid.conf and in the new one I put only this delay pool:
delay_pools 1
delay_class 1 1
delay_parameters 1 10240/10240
delay_access 1 allow all

With these parameters the speed shouldn't be more than 10 KB/s, but I
can see in my firewall that the proxy reaches speeds of up to 32 KB/s.
I guess these are just peaks, but if I have 100 clients and all of them
make these peaks, then my DSL will be saturated.


[squid-users] modify the delay_pools at fly

2011-05-04 Thread Carlos Manuel Trepeu Pupo
In a previous post I tried to change established connections when the
time window of the delay_pool changes. Amos gave me 3 solutions and now
I'm trying QoS, but I have this idea:

I keep 2, 3, or however many squid.conf files I need, and with a script
I run squid3 -k reconfigure. That does not close any active connection
and applies the changes. What do you think?

Remember that I have Ubuntu 10.04 with Squid 3 STABLE1. Tonight when my
users are gone I'm going to try it !! Tomorrow I will tell you, but if
someone has tried this, please send the result, so I can spend my time
on QoS instead.


[squid-users] Missing cachemgr.cgi

2011-04-30 Thread Carlos Manuel Trepeu Pupo
I installed Squid 3.0 STABLE1 on my Ubuntu box, but now I can't find
cachemgr.cgi or cachemgr.conf. I searched in:
/usr/lib/squid3
/etc/squid/cachemgr.conf
/usr/lib/cgi-bin/cachemgr3.cgi

I guess it was not installed when I ran apt-get install squid3. Can
anyone help me?


[squid-users] About the delay_pool in squid 3.0 STABLE1

2011-04-29 Thread Carlos Manuel Trepeu Pupo
Hi!! I have had Squid 3.0 STABLE1 installed on Ubuntu for a few months
and have used delay_pools with a fixed speed without any problems. Now
I want to let users download at a higher speed during the lunch hour,
so I added other delay_pools for that time slot and a different speed
for pages that do not match the restriction. But I can see that when a
connection starts during that time and lunch ends, the speed does not
decrease. I think this is because an established connection cannot be
modified, but I hope Squid does not work like that. Here is part of my
squid.conf.

acl special_client src 192.168.0.10/32
acl client src 192.168.0.20/32
# page_control contains pages like megaupload, hotfile, and others
acl page_control url_regex -i "/etc/squid3/page_control"
# ext_control contains extensions like .rar, .iso, and many others
acl ext_control url_regex -i "/etc/squid3/ext_control"
acl happy_hours time MTWHFA 12:00-13:30

delay_pools 4
delay_class 1 1
delay_parameters 1 51200/51200
delay_access 1 allow special_client page_control happy_hours
delay_access 1 allow special_client ext_control happy_hours
delay_access 1 deny all
delay_class 2 1
delay_parameters 2 3/3
delay_access 2 allow client page_control happy_hours
delay_access 2 allow client ext_control happy_hours
delay_access 2 deny all
delay_class 3 1
delay_parameters 3 9/9
delay_access 3 allow client page_control !happy_hours
delay_access 3 allow special_client ext_control !happy_hours
delay_access 3 deny all
delay_class 4 1
delay_parameters 4 120/120
delay_access 4 allow client !page_control !ext_control
delay_access 4 allow special_client !page_control !ext_control
delay_access 4 deny all


Waiting for your answer !!!


[squid-users] Re: Can squid be configured as SMTP/SMTPS proxy?

2010-12-17 Thread Manuel

I am not sure I understood what Squid is not capable of. You mean that
using Squid to hide the sending client's IP is not possible? That is
the goal; the first message at serverfault is mine:
http://serverfault.com/questions/212333/how-to-hide-the-client-ip-sender-and-show-only-the-smtp-server-ip


[squid-users] Re: Can squid be configured as SMTP/SMTPS proxy?

2010-12-16 Thread Manuel

Hello,

How common is it for the client app to work with SMTPS proxies? I have
a vBulletin forum on a backend, and I want it to send e-mails to users
through an SMTP server on a different server. The vBulletin app works
fine with SMTP servers over TLS and SSL, but I have not tried it with
Squid yet. I've been told this could be done with a VPN like OpenVPN,
but since we already have Squid on the frontend as a reverse proxy for
the website, and the SMTP server is also located on that frontend,
maybe we can just use the already running Squid. What do you think?

Another question: what do I need to use to tell Squid where the backend
is? For the reverse proxy I use cache_peer.
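
For comparison, a minimal sketch of an HTTP reverse-proxy setup using cache_peer; hostnames, ports, and the backend address are illustrative assumptions:

http_port 80 accel defaultsite=www.example.com
cache_peer 10.0.0.5 parent 8080 0 no-query originserver name=backend
acl our_site dstdomain www.example.com
http_access allow our_site
cache_peer_access backend allow our_site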

Do I just need to add something like this? Or what?
acl yes_to_the_25 url_regex myserver.com
acl pt25 port 25
http_access allow yes_to_the_25 pt25

Shouldn't Squid run on port 25 and the SMTP server on another port,
like I do with the HTTP reverse proxy?
Thanks!

