Re: [squid-users] How important is harddisk performance?

2008-12-22 Thread Matus UHLAR - fantomas
On 23.12.08 10:44, rihad wrote:
> I'm planning to build a new dedicated Squid-box, with amd64 and 4 gigs 
> of RAM, with two cache_dir's on two separate harddisks and Squid-3 doing 
> application-level striping, all servicing around 6k users. Will two 
> recent IDE disks of 7200 rpm suffice, or am I better off getting two 
> 15000 rpm SCSI disks on a dedicated controller board? Just not sure if 
> performance gains would be noticeable by an average user, given enough 
> ram. I read this too: http://wiki.squid-cache.org/BestOsForSquid
> Just double checking.

What is the expected load? The information "6k users" does not tell much -
6k users may issue 12k requests per second, or only 600 requests per second
(or even less). If you expect all of them to browse the web intensively, buy
two fast disks, and even that may not be enough.

If you don't know what to expect and can afford it, buy those 15k rpm
disks and see whether you need another one. You may also need more RAM
quickly... (a little for Squid itself, much more for caching)
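As a rough illustration of the sizing involved (the numbers below are hypothetical, not a recommendation for this particular box), a squid.conf for a 4 GB machine with two disks might look like:

```
# Hypothetical sizing for a 4 GB RAM box with two cache disks.
# Squid FAQ rule of thumb: allow roughly 10-15 MB of process RAM
# per 1 GB of cache_dir, on top of cache_mem itself.
cache_mem 512 MB
cache_dir aufs /cache1 40000 16 256
cache_dir aufs /cache2 40000 16 256
```

With ~80 GB of disk cache, that rule of thumb alone accounts for around 1 GB of RAM, which is why more RAM may be needed quickly.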

-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
It's now safe to throw off your computer.


Re: [squid-users] lru vs. Heap LRU & LFUDA

2008-12-22 Thread Matus UHLAR - fantomas
On 23.12.08 03:55, Nyamul Hassan wrote:
> I've gone through the research documents referred to in the squid config 
> file, that compares the performance between different 
> cache_replacement_policy.  However, both of them appear to be almost 10 
> years old!
> 
> Given that time, has there been any updates to the internal codes of these 
> replacement policies that could change the results if those tests were done 
> today?

I don't think so. The issue lies in the algorithms, and the heap algorithms
are still more effective than plain LRU :)

I doubt the situation will change before some massive shift in computer
technology, probably related to parallelism, by which time squid will be
much different, if it exists at all :)
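For reference, the policy is chosen in squid.conf; a typical heap setup might look like this (assuming Squid was built with the heap removal policies enabled):

```
# Heap-based replacement instead of the default (list-based) LRU.
# Requires Squid compiled with: --enable-removal-policies="lru heap"
cache_replacement_policy heap LFUDA
memory_replacement_policy heap GDSF
```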

-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Your mouse has moved. Windows NT will now restart for changes to take
effect. [OK]


[squid-users] How important is harddisk performance?

2008-12-22 Thread rihad

Hi there.

I'm planning to build a new dedicated Squid-box, with amd64 and 4 gigs 
of RAM, with two cache_dir's on two separate harddisks and Squid-3 doing 
application-level striping, all servicing around 6k users. Will two 
recent IDE disks of 7200 rpm suffice, or am I better off getting two 
15000 rpm SCSI disks on a dedicated controller board? Just not sure if 
performance gains would be noticeable by an average user, given enough 
ram. I read this too: http://wiki.squid-cache.org/BestOsForSquid

Just double checking.

Thanks for any tips.


[squid-users] Invalid response during POST request

2008-12-22 Thread howard chen
I am using Squid as a reverse proxy in front of a web server.

Sometimes (not always), when a client POSTs something to my server, this
error is shown:

=
ERROR

The requested URL could not be retrieved

* Invalid Response
=

Full Screen cap : http://howachen.googlepages.com/squid-error.gif

Any idea for this error?


[squid-users] squid and dansguardian

2008-12-22 Thread Enrique

Since using DansGuardian, my Squid ACLs no longer work.
What can I do to make the Squid ACLs work again?
regards



[squid-users] lru vs. Heap LRU & LFUDA

2008-12-22 Thread Nyamul Hassan

Hi,

I've gone through the research documents referred to in the squid config 
file, that compares the performance between different 
cache_replacement_policy.  However, both of them appear to be almost 10 
years old!


Given that time, has there been any updates to the internal codes of these 
replacement policies that could change the results if those tests were done 
today?


Thank you in advance for your input.

Regards
HASSAN



[squid-users] expires header

2008-12-22 Thread Alin Bugeag
Hi,
 

I have two imageservers behind a squid. 

My issue is that my image servers are not sending any Expires headers, but I
would like to attach one from Squid.

So by the time the image reaches the browser, it has an Expires header in it.

My image server is a custom-made app whose internals nobody knows, so I do
not have time to dig into its code to add an expiry date.

Is there any way to add that header from Squid?

Thanks,
Alin Bugeag


[squid-users] Problem with the cache Web

2008-12-22 Thread Leonel Florín Selles
Problem with the cache Web

friends: I am new in this list

I installed Squid and it works OK, but it does not store the Web objects
in Squid's spool. I know that because when I look at the traces,
they show TCP_MISS for each URL.

I also checked /var/spool/squid: it has the spool structure
created, but it's empty.

I also ran the command squid -z, but nothing at all.

What can I do?
Greetings



RE: [squid-users] Squid-3 / TProxy v4.1

2008-12-22 Thread Ritter, Nicholas
Although the TProxy I am currently using is not ICMP-aware, I am using it in a
production environment across the Midwest of the US. It is working very well. I
am using CentOS 5.2 x86_64 on custom-built Intel Core 2 Duo machines (single
CPU, 2 cores) with 3 GB RAM. I have 20 of these boxes and they each serve about
75 to 150 clients. I am using Cisco 2811 routers for WCCP redirection.

I will soon start working on a revised tutorial for setting up CentOS 5.2
x86_64, Squid, and TProxy to reflect the newer builds of both. Although the
concepts haven't changed, I have learned some lessons from my production
deployments that I would like to pass on to the Squid community.
 
Nicholas



From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: Mon 12/22/2008 5:52 AM
To: ri...@mail.ru
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid-3 / TProxy v4.1



rihad wrote:
> Are Squid-3 / TProxy v4.1 still under heavy development? Anyone using it
> in production with any success?

IIRC Nicholas Ritter was using it in Production for the final round of
testing.

>
> Thanks.
>
> P.S.: I know Squid 3 is still beta: http://www.squid-cache.org/Versions/
> But as I'm new to TProxy I'd like to start using the bleeding edge
> version that requires no additional patching.

Both are technically still in beta. The tproxy won't be out formally
until kernel 2.6.28. But yes, we who worked on it believe they are
finished and usable. Even if not proven by years and masses of usage.

Amos
--
Please be using
   Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
   Current Beta Squid 3.1.0.3 or 3.0.STABLE11-RC1





RE: [squid-users] TProxy setup

2008-12-22 Thread Ritter, Nicholas
The docs are accurate for the rules and marking. Exactly what you need to do
varies depending on whether you need to do NAT or not.

Generally, in a Cisco environment where WCCP is used and NAT is done on the
outside egress interface of the router, a Squid/WCCP/TProxy setup can be done
with no NAT being taken into consideration, because all of the redirection,
etc. happens behind the NAT point for the network as a whole.

You should open up a port for the WCCP control traffic, though. The doc on the
squid wiki mentions the port number. Although not always needed, I have found
from experience that depending on the IOS code level the port is sometimes
needed and sometimes not... but logically it is needed, so it is safe to always
open it up.
 
Here is what I have for iptables rules:
 
# Allow all incoming traffic on the GRE interface
-A INPUT -i gre0 -j ACCEPT 
-A INPUT -p gre -j ACCEPT 
# Allow GRE Protocol on physical interface which the GRE is expected on
-A INPUT -i eth0 -p gre -j ACCEPT 
-A LocalFW -p icmp -m icmp --icmp-type any -j ACCEPT 
# Allow WCCP "control" traffic to UDP port 2048
-A LocalFW -s /32 -p udp -m udp --dport 2048 -j ACCEPT
# Divert/mangle inbound HTTP request traffic redirected by WCCP on the router to the squid box
-A PREROUTING -p tcp -m socket -j DIVERT 
-A PREROUTING -p tcp -m tcp --dport 80 -j TPROXY --on-port 3128 --on-ip  --tproxy-mark 0x1/0x1 
-A DIVERT -j MARK --set-mark 0x1 
-A DIVERT -j ACCEPT 

 
The rules above are not exactly optimal. The "-A PREROUTING -p tcp -m socket -j
DIVERT" line can break some other functionality on the Linux box hosting Squid,
but for a dedicated cache box this is OK.

I am going to start working on an updated CentOS 5.2/TProxy/Squid setup and
HOWTO, because the one I put up on the Squid wiki is a little incorrect and the
new version of TProxy has ICMP support, which is important.
 
Nicholas


From: rihad [mailto:ri...@mail.ru]
Sent: Mon 12/22/2008 12:28 AM
To: Squid Users
Subject: [squid-users] TProxy setup



Hello there,

How should TProxy/Cisco be configured in iptables/netfilter:
0) as outlined in SquidFaq with just two lines
(http://wiki.squid-cache.org/SquidFaq/InterceptionProxy#head-5887c3744368f290e63fda47fd1e4715c9bdbc9b):
iptables -t nat -A PREROUTING -i wccp0 -j REDIRECT --redirect-to 3128
iptables -t tproxy -A PREROUTING -i eth0 -p tcp -m tcp --dport 80 -j
TPROXY --on-port 80

1) As described in the official TProxy docs
(http://www.balabit.com/downloads/files/tproxy/README.txt):
   ip rule add fwmark 1 lookup 100
   ip route add local 0.0.0.0/0 dev lo table 100

   iptables -t mangle -N DIVERT
   iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT

   # DIVERT chain: mark packets and accept
   iptables -t mangle -A DIVERT -j MARK --set-mark 1
   iptables -t mangle -A DIVERT -j ACCEPT

   iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY \
   --tproxy-mark 0x1/0x1 --on-port 50080

2) Both :-/
3) Something else.

I'm totally confused...





Re: [squid-users] an "squid" question

2008-12-22 Thread goldeneyes


Amos Jeffries-2 wrote:
> 
> LogDaemon with Squid 2.7.
> 
> http://wiki.squid-cache.org/Features/LogDaemon
> 
> Amos
> -- 
> Please be using
>Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
>Current Beta Squid 3.1.0.3 or 3.0.STABLE11-RC1
> 
> 

thank you for help , and for the link
thank you ,
-- 
View this message in context: 
http://www.nabble.com/an-%22squid%22-question-tp21114091p21128442.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] an "squid" question

2008-12-22 Thread Amos Jeffries

goldeneyes wrote:



Amos Jeffries-2 wrote:



Not without altering squid code.

But there are various ways of processing the log stream instead of 
sending direct to a file.


What are you trying to achieve?

Amos
--
Please be using
   Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
   Current Beta Squid 3.1.0.3 or 3.0.STABLE11-RC1




Thank you for the answer,

I am trying to extract all the links of a web page using Squid, but I want to
retrieve them before they are put in the log file.
How can I process the log stream instead of sending it directly to a file?
Can you give me some examples of ways to process the log stream?

Thank you for any help, 



LogDaemon with Squid 2.7.

http://wiki.squid-cache.org/Features/LogDaemon
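For illustration, the squid.conf side of that feature might look roughly like this in 2.7 (the helper path is hypothetical):

```
# Hand access-log lines to a helper process instead of writing the file
# directly; /usr/local/bin/extract-links is a hypothetical helper that
# could pull the URLs out of each line before (or instead of) logging it.
logfile_daemon /usr/local/bin/extract-links
access_log daemon:/var/log/squid/access.log squid
```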

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.3 or 3.0.STABLE11-RC1


Re: [squid-users] an "squid" question

2008-12-22 Thread goldeneyes



Amos Jeffries-2 wrote:
> 
> 
> 
> Not without altering squid code.
> 
> But there are various ways of processing the log stream instead of 
> sending direct to a file.
> 
> What are you trying to achieve?
> 
> Amos
> -- 
> Please be using
>Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
>Current Beta Squid 3.1.0.3 or 3.0.STABLE11-RC1
> 
> 

Thank you for the answer,

I am trying to extract all the links of a web page using Squid, but I want to
retrieve them before they are put in the log file.
How can I process the log stream instead of sending it directly to a file?
Can you give me some examples of ways to process the log stream?

Thank you for any help, 

-- 
View this message in context: 
http://www.nabble.com/an-%22squid%22-question-tp21114091p21127182.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] How to use Squid on Reverse Proxy mode on Windows XP Prof ?

2008-12-22 Thread Amos Jeffries

Balram wrote:

Can anyone help me? I am using Squid on Windows XP Prof. It's fine.
But when I try to use it as a reverse proxy, it doesn't work. My
configuration is as follows:
http_port 192.168.0.1:3128 accel vhost=virtual

Regards


Which version of Squid?
Any other configuration going on?
... such as any of this: http://wiki.squid-cache.org/SquidFaq/ReverseProxy


FWIW: vhost does not take a parameter. It just means virtual-hosting
(multiple sites) accelerated.

Maybe it's the defaultsite= option you want.
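A hedged sketch of what the accelerator line might look like for a single site (the hostname is a placeholder):

```
# Reverse-proxy one site; defaultsite= supplies the Host header for
# requests that arrive without one. Note: vhost is a flag, not vhost=...
http_port 192.168.0.1:3128 accel defaultsite=www.example.com
```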


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.3 or 3.0.STABLE11-RC1


Re: [squid-users] ACL::~ACL ???

2008-12-22 Thread Amos Jeffries

Kinkie wrote:

On Sun, Dec 21, 2008 at 9:59 PM, Vladimir Rudenko  wrote:

when i do "squid -kreconfigure" (or any other command with "squid -k")
i see such list:
2008/12/21 22:42:55.334| ACL::~ACL: '
2008/12/21 22:42:55.334| ACL::~ACL: '

2008/12/21 22:55:38.016| ACL::~ACL: '
2008/12/21 22:55:38.016| ACL::~ACL: '

looks like a C++ destructor, but what does it mean in the log?


what are your debug levels settings?
I'm inclined to think of this as a cosmetic bug.



This appears at debug level 28,3+ and ALL,3+.
It's just a debug record for seeing when ACLs are dead and unusable.
We haven't really agreed on any specs for debug levels >1, so it's a
judgement call whether it's useful info at that level or should be lower.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.3 or 3.0.STABLE11-RC1


Re: [squid-users] Squid questions

2008-12-22 Thread Amos Jeffries

Kishore Venkat wrote:

Hello everyone,

I have setup Squid 3.0 STABLE 9 for testing purposes and I have the
following questions:

1.  We are trying to take steps to prevent DOS attacks and are
considering the possibility of using squid to cache pages and reducing
the load on the origin servers.  The particular scenario we are
targeting is folks posting URLs containing member-specific information
in the querystring (such as an email address, memberid, or coupon code)
on social networks to take advantage of promotional offerings (such as
coupons), and all of a sudden we get a burst of traffic to
our site - these would be either .jsp or .asp URLs.  I have tried
using the following line in our squid config:

refresh_pattern -i \.asp$ 10080 90% 99 ignore-no-cache
override-expire ignore-private


and from my testing it appears to cache them only if there is no "?"
in the url (even if you do NOT pass any url parameters, but have the
"?" in the url, it still does not cache them - even if the .asp
contains only html code).  From my understanding, there is no way to
cache .asp / .jsp pages with querystring parameters - could you please
confirm this?


No. The old config files (3.0 included) have the following:

  acl QUERY ...
  cache deny QUERY

To cache specific URLs, you define an ACL and add a "cache allow ..." line
before the deny line.
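A minimal sketch of that override (the whitelist pattern is hypothetical):

```
# Default dynamic-content rule from the old squid.conf files:
acl QUERY urlpath_regex cgi-bin \?
# Hypothetical exception: allow caching of the promo pages despite the "?"
acl promo_pages urlpath_regex ^/squid/.*\.asp
cache allow promo_pages
cache deny QUERY
```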




I was wondering if there is a way to cache these dynamic .asp pages?  We
do not want all the .asp pages to go through the squid cache, as a lot of
them depend on data in the database, and if the values in the db
change, the content served must change as well.  So, we could place
the pages that need to go through Squid's cache in a folder called
"squid", and modify the above squid.conf line so that only those
.asp pages that are present in the "squid" folder go through the squid
cache.


No need to be so tricky. Setting Cache-Control differently for each page
is possible, and it can limit the time items are kept in cache, down to 0
seconds.
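On an Apache origin, for example, that could be a mod_headers rule scoped to the cacheable pages (the path and lifetime are illustrative):

```
# Let caches keep the promo pages for up to 10 minutes,
# then force revalidation with the origin server.
<LocationMatch "^/squid/.*\.asp$">
    Header set Cache-Control "public, max-age=600, must-revalidate"
</LocationMatch>
```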




If there are other ways of preventing DOS attacks for the above
mentioned scenario, please let me know.


All I can think of right now is:

* stay away from regex as much as possible. It's slow.

* configure the cache_peer link with raw IP and either a dstdomain or 
cache_peer_domain. Cutting DNS load out of the circuit.


* extend object timeouts as long as reasonable.

* use the ignore-refresh option to refresh_pattern; maybe others.



2.  The one conern that I have is the Squid server itself being prone
to Denial of Service due to sudden bursts in traffic.  Can someone
share their experience based on the implementation on your web site.


Nothing can be completely DDoS-secure. But Squid has a much higher
requests-per-second capability than most generated pages allow a
webserver to have. So it's a good layer to raise the DDoS damage threshold.




3.  When using the squidclient for testing purposes, if I have a very
long url (something that is 205 characters long, for example), it
appears that the request to the original servers does NOT contain the
entire url (with all the parameters).  The squidclient command
(including the -h and -p options and the 205-length url) is 261
characters long.  I saw a bug to do with the length of the hostname,
but I believe that is in earlier version of Squid and NOT in squid 3
stable 9.  Is there a way to test really long urls?


telnet, wget, or any other web client software should also work.

squidclient should have an 8KB URL limit for any header, though.



4.  If the disk space that we allocate is completely used, do we know
what algorithm Squid uses to cache request for new pages - such as
LRU?  And is this configurable?


Yes. The default is LRU.
http://www.squid-cache.org/Doc/config/cache_replacement_policy/

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.3 or 3.0.STABLE11-RC1


Re: [squid-users] an "squid" question

2008-12-22 Thread Amos Jeffries

goldeneyes wrote:



goldeneyes wrote:

hi,

with the proxy "squid" is there a way to extract the links that have
passed through this one,

thank you for any help, 




Hi,

I find all the links passed through Squid in
/var/log/squid/access.log.
Is it possible to retrieve the URLs before they are put in the log file?

Thank you for any help,



Not without altering squid code.

But there are various ways of processing the log stream instead of 
sending direct to a file.


What are you trying to achieve?

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.3 or 3.0.STABLE11-RC1


Re: [squid-users] Question about reverse proxy and Apache Expires headers

2008-12-22 Thread Amos Jeffries

Tom Williams wrote:

Tom Williams wrote:
So, my Squid-3.0STABLE10 reverse proxy seems to be working just fine 
with one exception: some cached content isn't being properly refreshed 
in the Squid cache.
Here is the Directory directive where I specify the Expires header 
info in my Apache virtual host config file:



ExpiresActive On
ExpiresByType image/gif M2592000
ExpiresByType text/css M604800
ExpiresByType text/javascript M604800
ExpiresByType application/x-javascript M604800
ExpiresByType image/jpeg M2592000
Header append Cache-Control public


So, for the type image/jpeg, the file is set to expire one month after
the last time it was modified.  Cool.  The problem is, when I update
an image before the month expire time has elapsed, the OLD image in 
the Squid cache is returned instead of the updated image that is 
stored in the directory.  I'm sure this is related to a configuration 
issue in my Squid installation but I'm not sure where to start 
researching it.


I saw the refresh_pattern directive in my Squid.conf file and I 
haven't changed that.  Is this where I start?


The reason I set a one month expiration time on jpeg images in that 
directory is I don't expect the jpegs in that directory to change very 
frequently but when they DO change, I need Squid to refresh its cache 
as soon as that image is changed (or shortly after).


What should I do to address this?

Thanks!

Peace...

Tom

Please ignore this post.  I think I found my problem.   The images I was 
changing were NOT in the above listed directory so something else is 
going on.


Peace...

Tom


No worries. FWIW Squid-3 still suffers from Bug #7.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.3 or 3.0.STABLE11-RC1


Re: [squid-users] Squid-3 / TProxy v4.1

2008-12-22 Thread Amos Jeffries

rihad wrote:
Are Squid-3 / TProxy v4.1 still under heavy development? Anyone using it 
in production with any success?


IIRC Nicholas Ritter was using it in Production for the final round of 
testing.




Thanks.

P.S.: I know Squid 3 is still beta: http://www.squid-cache.org/Versions/
But as I'm new to TProxy I'd like to start using the bleeding edge 
version that requires no additional patching.


Both are technically still in beta. The tproxy won't be out formally 
until kernel 2.6.28. But yes, we who worked on it believe they are 
finished and usable. Even if not proven by years and masses of usage.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.3 or 3.0.STABLE11-RC1


Re: [squid-users] how to blocking P2P

2008-12-22 Thread Leonardo Rodrigues Magalhães



░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░ wrote:

can you give me a sample?
I'm a n00b :(
  


   sure ... lots of messages regarding this subject here:

http://marc.info/?l=squid-users&w=2&r=1&s=p2p+connect&q=b


--


Atenciosamente / Sincerely,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






Re: [squid-users] how to blocking P2P

2008-12-22 Thread Matus UHLAR - fantomas
On 22.12.08 17:23, ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░ wrote:
> what do you mean?

I mean that while there is some chance of blocking P2P traffic using
squid, it's much better done with hardware/software that has nothing to do
with squid.

> On Mon, Dec 22, 2008 at 5:15 PM, Matus UHLAR - fantomas
>  wrote:
> > On 22.12.08 10:44, ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░ wrote:
> >> anyone know how to block /limit P2P connection
> >
> > content-inspecting firewalls.
-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
REALITY.SYS corrupted. Press any key to reboot Universe.


Re: [squid-users] how to blocking P2P

2008-12-22 Thread ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░
can you give me a sample?
I'm a n00b :(

On Mon, Dec 22, 2008 at 5:24 PM, Leonardo Rodrigues Magalhães
 wrote:
>
> usually P2P does not use squid. Anyway, several P2P protocols can be
> encapsulated in HTTP requests, thus allowing them to use squid and
> successfully work through an HTTP proxy.
>
> Those HTTP-encapsulated P2P requests usually can be identified by:
>
> 1) CONNECT method
> 2) use of IP addresses instead of names
> 3) almost no real CONNECT requests (https ones) use IP addresses; they
> almost all use names
>
> with 1 and 2, you can create ACLs and limit/block it. Search the archives;
> this has been discussed several times before.
>
> And watch your NAT rules. If they allow anything, P2P will
> probably work without squid, and you cannot control/block it in squid.
>
>
> ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░ wrote:
>>
>> anyone know how to block /limit P2P connection
>>
>
> --
>
>
>Atenciosamente / Sincerely,
>Leonardo Rodrigues
>Solutti Tecnologia
>http://www.solutti.com.br
>
>Minha armadilha de SPAM, NÃO mandem email
>gertru...@solutti.com.br
>My SPAMTRAP, do not email it
>
>
>
>
>



-- 
-=-=-=-=
Personal Blog http://my.blog.or.id ( lagi belajar )
Hot News !!! :
Pengin punya Layanan SMS PREMIUM ?
Contact me ASAP. dapatkan Share revenue MAXIMAL tanpa syarat traffic...


Re: [squid-users] how to blocking P2P

2008-12-22 Thread Leonardo Rodrigues Magalhães


Usually P2P does not use squid. Anyway, several P2P protocols can be
encapsulated in HTTP requests, thus allowing them to use squid and
successfully work through an HTTP proxy.


Those HTTP-encapsulated P2P requests usually can be identified by:

1) CONNECT method
2) use of IP addresses instead of names
3) almost no real CONNECT requests (https ones) use IP addresses; they
almost all use names


With 1 and 2, you can create ACLs and limit/block it. Search the
archives; this has been discussed several times before.


And watch your NAT rules. If they allow anything, P2P will
probably work without squid, and you cannot control/block it in squid.
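A hedged sketch of ACLs along those lines (the regex is illustrative, not exhaustive):

```
# CONNECT requests to raw IP addresses are a common signature of
# HTTP-tunnelled P2P; real HTTPS clients almost always use hostnames.
acl CONNECT method CONNECT
acl numeric_IPs dstdom_regex ^(([0-9]+\.){3}[0-9]+)$
http_access deny CONNECT numeric_IPs
```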



░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░ wrote:

anyone know how to block /limit P2P connection
  


--


Atenciosamente / Sincerely,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






Re: [squid-users] how to blocking P2P

2008-12-22 Thread ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░
what do you mean?

On Mon, Dec 22, 2008 at 5:15 PM, Matus UHLAR - fantomas
 wrote:
> On 22.12.08 10:44, ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░ wrote:
>> anyone know how to block /limit P2P connection
>
> content-inspecting firewalls.
> --
> Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
> Warning: I wish NOT to receive e-mail advertising to this address.
> Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
> Micro$oft random number generator: 0, 0, 0, 4.33e+67, 0, 0, 0...
>



-- 
-=-=-=-=
Personal Blog http://my.blog.or.id ( lagi belajar )
Hot News !!! :
Pengin punya Layanan SMS PREMIUM ?
Contact me ASAP. dapatkan Share revenue MAXIMAL tanpa syarat traffic...


Re: [squid-users] how to blocking P2P

2008-12-22 Thread Matus UHLAR - fantomas
On 22.12.08 10:44, ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░ wrote:
> anyone know how to block /limit P2P connection

content-inspecting firewalls.
-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Micro$oft random number generator: 0, 0, 0, 4.33e+67, 0, 0, 0...


Re: [squid-users] Question about reverse proxy and Apache Expires headers

2008-12-22 Thread Matus UHLAR - fantomas
> Matus UHLAR - fantomas wrote:
> >you set up images to expire after a month, and wonder that they don't 
> >expire
> >before a month?

On 21.12.08 09:33, Tom Williams wrote:
> As strange as that sounds, basically yes.  :) The reason I set the 
> expiration period for one month is I don't expect the images to change 
> frequently at all.  However, if the image DOES change at some point 
> within that month time period, I would want the image to be refreshed.

Don't send Expires then. Expires means that the object will NOT be
re-fetched before it expires. I think using the "Cache-Control:
must-revalidate" header should be enough.
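In the Apache setup discussed earlier in this thread, that suggestion might translate to something like (mod_headers assumed; the max-age is illustrative):

```
# Serve from cache for up to a day, then revalidate with the origin;
# an unchanged image costs only a 304, a changed one is re-fetched.
Header set Cache-Control "public, max-age=86400, must-revalidate"
```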

> I'm sure my understanding of this is wrong but I was thinking if I set 
> the images to expire after say one day or one week, Squid would purge 
> the image that was in the cache and request an updated copy of the 
> image.
>  If the image isn't changing with much frequency, I wouldn't want 
> Squid to fetch a "fresh", yet unchanged, copy of the image with much 
> frequency.  Since I set the expiration to be since the image was last 
> modified, I was thinking Squid would ask the server if the image had 
> changed and fetch a new copy if it did.  If the image had not changed, 
> after a month it would purge the old image and fetch a new one.  Now 
> that I've written that, that doesn't make much sense either.
> 
> So, how do the expires headers impact Squid's interaction with the web 
> server in a reverse proxy configuration?

I think you should read RFC 2616 (HTTP/1.1) for a description of the Expires
and Cache-Control headers. Together they can be used to fine-tune caching
behaviour on caching proxy servers. The Expires: header just does not do
what you want.
-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
10 GOTO 10 : REM (C) Bill Gates 1998, All Rights Reserved!


[squid-users] repost Fwd: how to blocking P2P

2008-12-22 Thread ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░
sorry for the repost,
but I think squid-users has a lot of traffic, so maybe my email got missed

"anyone know how to block /limit P2P connection"


Re: [squid-users] ACL::~ACL ???

2008-12-22 Thread Kinkie
On Sun, Dec 21, 2008 at 9:59 PM, Vladimir Rudenko  wrote:
> when i do "squid -kreconfigure" (or any other command with "squid -k")
> i see such list:
> 2008/12/21 22:42:55.334| ACL::~ACL: '
> 2008/12/21 22:42:55.334| ACL::~ACL: '
> 
> 2008/12/21 22:55:38.016| ACL::~ACL: '
> 2008/12/21 22:55:38.016| ACL::~ACL: '
>
> looks like a C++ destructor, but what does it mean in the log?

what are your debug levels settings?
I'm inclined to think of this as a cosmetic bug.


-- 
/kinkie


[squid-users] refresh_pattern, how to set refresh_time < 1 minute

2008-12-22 Thread Wisdo Tang
Hi list,

I'm trying to set some kinds of URL to refresh themselves within a specified
time (< 1 minute),

such as:
refresh_pattern  \.php   1s   100%  1s

I searched the mailing list; it seems this topic has been discussed before,
but there was no final solution at that time.

Is there a good way to do it now?

Best regards,
-Roadt