[squid-users] Fwd: transparent proxy and a reverse proxy simultaneously?

2007-10-16 Thread Indunil Jayasooriya
Hi,

I want to know whether it is possible for a single Squid server to serve
both as a transparent INTERCEPTING proxy and a reverse proxy
simultaneously?


-- 
Thank you
Indunil Jayasooriya


Re: [squid-users] How often is mswin_check_lm_group.exe Can't find DC for user's domain logged?

2007-10-16 Thread Guido Serassio

Hi,

At 12.49 15/10/2007, Paul Cocker wrote:

I'm seeing "mswin_check_lm_group.exe Can't find DC for user's domain
'cdltd.co.uk'" in the cache.log file.


You must use only NetBIOS domain names, not FQDN domain names.
mswin_check_lm_group.exe is a LAN Manager based helper, so NetBIOS
name resolution (WINS) is involved.
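
For reference, a minimal squid.conf sketch of how this helper is usually
wired up (the path, ACL and group names here are examples only; check the
helper's own readme for the exact flags and group syntax of your build):

external_acl_type win_domain_group %LOGIN c:/squid/libexec/mswin_check_lm_group.exe -G
# group membership is resolved via NetBIOS/WINS, so always qualify users and
# groups with the NetBIOS domain name (e.g. CDLTD), never the FQDN (cdltd.co.uk)
acl InternetUsers external win_domain_group InternetUsers
http_access allow InternetUsers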



 Does the program try to contact the
domain on startup?


No.


 Does each child try to contact the domain?


Yes, during every user validation.


 Is this
error a reflection of a failure to connect to the domain for a single
connection?


Maybe.


Basically, how severe is this error?


It is a fatal error for that particular user validation.


 Are one or two expected?


This should never happen.


Should I
only worry when I see a cache.log swamped with them? Or is this a major
concern?


Hard to answer; it could be a slow DC, a name resolution problem, a
network problem, ...


Regards

Guido



-

Guido Serassio
Acme Consulting S.r.l. - Microsoft Certified Partner
Via Lucia Savarino, 1   10098 - Rivoli (TO) - ITALY
Tel. : +39.011.9530135  Fax. : +39.011.9781115
Email: [EMAIL PROTECTED]
WWW: http://www.acmeconsulting.it/



Re: [squid-users] Fwd: transparent proxy and a reverse proxy simultaneously?

2007-10-16 Thread Amos Jeffries

Indunil Jayasooriya wrote:

Hi,

I want to know whether it is possible for a single Squid server to serve
both as a transparent INTERCEPTING proxy and a reverse proxy
simultaneously?


Yes. I have all three modes operating here at present: intercepting,
forward, and accelerator/reverse.


Just segregate your squid.conf so you can tell the accelerator-specific 
configuration apart from the general and you'll be fine.


Amos


Re: [squid-users] Fwd: transparent proxy and a reverse proxy simultaneously?

2007-10-16 Thread Indunil Jayasooriya
On 10/16/07, Amos Jeffries [EMAIL PROTECTED] wrote:
 Indunil Jayasooriya wrote:
  Hi,
 
  I want to know whether it is possible for a single Squid server to serve
  both as a transparent INTERCEPTING proxy and a reverse proxy
  simultaneously?

 Yes. I have all three modes operating here at present: intercepting,
 forward, and accelerator/reverse.

 Just segregate your squid.conf so you can tell the accelerator-specific
 configuration apart from the general and you'll be fine.


Please see below for my CURRENT squid.conf file.


http_port 3128
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on


To set up a reverse proxy on squid 2.5, http_port should be changed to
80, shouldn't it?
Then what happens to port 3128, which is used for the transparent proxy?

I think I need something like this for reverse proxy.

http_port 80 # Port of Squid proxy
httpd_accel_host 172.16.1.115 # IP address of real web server @ DMZ ZONE
httpd_accel_port 80 # Port of real web server @ DMZ ZONE
httpd_accel_single_host on # Forward uncached requests to single host
httpd_accel_with_proxy on
httpd_accel_uses_host_header off

Now, how should I include the above reverse proxy section while keeping
the transparent proxy section in squid.conf?

Do I have to include both sections? If so, how?


Please read below too.
I am running iptables on the SAME BOX. I have added the rules below for squid.

#Redirecting traffic destined to port 80 to port 3128
 iptables -t nat -A PREROUTING -p tcp -i eth2 --dport 80 -j REDIRECT
--to-port 3128

#For squid traffic to Accept
iptables -A INPUT -i eth2 -d 192.168.101.254 -p tcp -s
192.168.101.0/24 --dport 3128 -j ACCEPT

Everything works fine.

Hope to hear from you.



 Amos




-- 
Thank you
Indunil Jayasooriya


[squid-users] Filemanager

2007-10-16 Thread Michael Jurisch
Hi there!

I asked this question in an earlier mail, but maybe someone who can help didn't 
recognize it, as it was part of another, larger issue.

I just want to know whether there are (web based) file managers for Squid out 
there which allow me to navigate through the cache content, delete selected 
files, and maybe even change the permissions of a file.

Thanks a lot,
Micha


[squid-users] Filemanager suplemental

2007-10-16 Thread Michael Jurisch
Ah, I forgot to mention that I already tested cachepurger by ISP Systems, but 
I am looking for an alternative as it does not seem to work correctly.

Micha


Re: [squid-users] Squid marks alive siblings as dead.

2007-10-16 Thread Henrik Nordstrom
On tis, 2007-10-16 at 17:27 +1300, Amos Jeffries wrote:

  The default for all accesses (HTTP, ICP, HTCP, SNMP) is deny unless
  allowed.
 
 precisely. Simply flagging a peer as htcp is not enough to turn it on. As
 now documented.

A requesting peer needs to be allowed in
http_access
and
icp_access or htcp_access, if ICP or HTCP is used,
on the Squid server the peer is connecting to.

It is not sufficient to simply add a cache_peer line to the requesting
peer; the requested peer also needs to allow access.
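
In squid.conf terms, a minimal sketch on the requested peer, assuming the
requesting sibling sits at 192.168.0.2 (an example address):

acl sibling_peers src 192.168.0.2
http_access allow sibling_peers
# and, for whichever inter-cache protocol is actually in use:
icp_access allow sibling_peers
htcp_access allow sibling_peers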

 You mean a visible default of both being X_access deny !localnet with
 the backup default of both being deny all?

Default-if-none being deny all, but with a suggested uncommented
default of allow localnet, deny all.

 Or the backup default of both being the deny !localnet?
 
 localnet would also consequently need adding to the suggested global acls.
 Perhaps with the RFC 1918 spaces as a good default for localnet.

That's a good idea.

Regards
Henrik




Re: [squid-users] Filemanager

2007-10-16 Thread Neil A. Hillard
Hi,

Michael Jurisch wrote:
 I asked this question in an earlier mail, but maybe someone who can
 help didn't recognize it, as it was part of another, larger issue.
 
 I just want to know whether there are (web based) file managers for
 Squid out there which allow me to navigate through the cache content,
 delete selected files, and maybe even change the permissions of a file.

You probably need to explain why you want to do this.  For example - why
would you need to change the file permissions?  If squid created the
file in the cache, it can read it back - why would you want to change
the permissions?


Neil.

-- 
Neil Hillard[EMAIL PROTECTED]
AgustaWestland  http://www.whl.co.uk/

Disclaimer: This message does not necessarily reflect the
views of Westland Helicopters Ltd.


Re: [squid-users] Fwd: transparent proxy and a reverse proxy simultaneously?

2007-10-16 Thread Henrik Nordstrom
On tis, 2007-10-16 at 12:22 +0530, Indunil Jayasooriya wrote:
 Hi,
 
 I want to know whether it is possible for a single Squid server to serve
 both as a transparent INTERCEPTING proxy and a reverse proxy
 simultaneously?

Yes. But you need two http_access lines, and also remember to add a
never_direct rule for the accelerated sites, or intercepted requests for
accelerated sites will be a little confused.
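
For example (the domain is a placeholder; the exact rule is spelled out
further down the thread):

acl accel_sites dstdomain .example.com
never_direct allow accel_sites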

Regards
Henrik




Re: [squid-users] Fwd: transparent proxy and a reverse proxy simultaneously?

2007-10-16 Thread Henrik Nordstrom
On tis, 2007-10-16 at 14:36 +0530, Indunil Jayasooriya wrote:

 To set up a reverse proxy on squid 2.5, http_port should be changed to
 80, shouldn't it?

First of all you should upgrade to 2.6, then see the FAQ on how to
configure Squid for reverse proxy operation.

Regards
Henrik




Re: [squid-users] Odd Corruption

2007-10-16 Thread Alex Smith

Hi Henrik,

We do appear to be using mod_deflate on our mediapool. Is it better to 
exclude these files from it altogether?


Thanks

Henrik Nordstrom wrote:

On mån, 2007-10-15 at 18:41 +0100, Alex Smith wrote:
  

Hi,

Having weird issues with squid (Squid Cache: Version 2.6.STABLE6). It 
seems to be randomly trashing css files, so websites are left rendered 
incorrectly on IE, but fine in FF.



Maybe you have a broken web server doing content encoding without
updating ETag? (I.e. Apache mod_deflate)

See the broken_vary_encoding squid.conf directive for a workaround.

Regards
Henrik
  




[squid-users] log file and analysis report

2007-10-16 Thread Arun Shrimali
Dear All,

I am using Squid 2.6 STABLE4 and Sarg 2.2.2 for report generation.
There was no proper 'how to' for setting up automatic generation of the
Sarg report, so I have put the sarg command in an hourly cron job, which
is working fine; it updates the report every hour.

But the problems are:
1. For the same log file it creates a different report as the day
changes, as follows:
 2007Oct07 - 2007Oct14
 2007Oct07 - 2007Oct13
 2007Oct07 - 2007Oct12
 2007Oct07 - 2007Oct11
whereas I would like a single report that continues into the
next day as well.

2. Squid automatically starts a new log file at the start of the week, i.e. on
Sunday, whereas I would like a single log file for a month, so that I can
ultimately have a single analysis report for the month.

Can anybody help me configure it accordingly?

regards

Arun


Re: [squid-users] Fwd: transparent proxy and a reverse proxy simultaneously?

2007-10-16 Thread Indunil Jayasooriya
On 10/16/07, Henrik Nordstrom [EMAIL PROTECTED] wrote:
 On tis, 2007-10-16 at 14:36 +0530, Indunil Jayasooriya wrote:

  to setup reverse proxy on squid 2.5 , http_port should be changed to
  80, shouldn't it?

 First of all you should upgrade to 2.6, then see the FAQ on how to
 configure Squid for reverse proxy operation.

If I upgrade to squid 2.6, I think I need the lines below, taken from the squid FAQ.


http_port 80 accel defaultsite=www.example.com
cache_peer ip.of.real.webserver parent 80 0 no-query originserver

acl our_sites dstdomain .example.com
http_access allow our_sites

How do I write the never_direct rule?

Am I right?


Then, how can I set up transparent intercepting in the same squid.conf?
 What about the lines below? Are they right? If wrong, please let me know.

http_port 3128 transparent
acl mynet src 192.168.101.0/24
http_access allow mynet


Hope to hear from you.


-- 
Thank you
Indunil Jayasooriya


[squid-users] Reverse proxying http and https

2007-10-16 Thread Taneli Leppä

Hello,

I'm trying to configure a reverse proxy using Squid 2.6 to
serve pages from another server, using both http and https.

Lets say my cache server is mycache.net and I want to serve
both types of pages from cached.mycache.net. The result I'm
looking for is:

http://mycache.net/page -> http://cached.mycache.net/page
https://mycache.net/page -> https://cached.mycache.net/page

I can get the configuration working so that http and https
go to destination site's http or https port, but not both
at the same time. My current configuration is like this:

http_port mycache.net:80 vhost vport
https_port mycache.net:443 vhost vport key=/etc/squid/mycache.key 
cert=/etc/squid/mycache.crt


cache_peer cached.mycache.net parent 80 0 originserver

acl valid_dst dst mycache.net
http_access allow valid_dst

I tried adding another cache_peer with port 443, but Squid
just complains that such cache_peer already exists.

Any tips for making this configuration work?

--
  Taneli Leppä | Crasman Co Ltd
  [EMAIL PROTECTED]  | http://www.crasman.fi/


Re: [squid-users] log file and analysis report

2007-10-16 Thread Adrian Chadd
You need to rotate the log file.

What you need to do is:

* squid -k rotate
* sleep for a few seconds to let squid do what it needs to
* run sarg on the old log file (access.log.0)

As for the rest, I can't (easily) help; I don't run sarg.
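
A minimal sketch of such an hourly cron script (paths are examples; adjust
to your installation, and note access.log.0 only exists when squid.conf's
logfile_rotate is greater than 0):

#!/bin/sh
# rotate squid's logs, give squid a moment to finish the rotation,
# then feed the freshly rotated log to sarg
/usr/local/squid/sbin/squid -k rotate
sleep 10
sarg -l /usr/local/squid/var/logs/access.log.0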




Adrian

On Tue, Oct 16, 2007, Arun Shrimali wrote:
 Dear All,
 
 I am using Squid 2.6 STABLE4 and Sarg 2.2.2 for report generation.
 There was no proper 'how to' for setting up automatic generation of the
 Sarg report, so I have put the sarg command in an hourly cron job, which
 is working fine; it updates the report every hour.
 
 But the problems are:
 1. For the same log file it creates a different report as the day
 changes, as follows:
  2007Oct07 - 2007Oct14
  2007Oct07 - 2007Oct13
  2007Oct07 - 2007Oct12
  2007Oct07 - 2007Oct11
 whereas I would like a single report that continues into the
 next day as well.
 
 2. Squid automatically starts a new log file at the start of the week, i.e. on
 Sunday, whereas I would like a single log file for a month, so that I can
 ultimately have a single analysis report for the month.
 
 Can anybody help me configure it accordingly?
 
 regards
 
 Arun

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level bandwidth-capped VPSes available in WA -


Re: [squid-users] Filemanager

2007-10-16 Thread Michael Jurisch
Hi!

 You probably need to explain why you want to do this.  For example - why
 would you need to change the file permissions?  If squid created the
 file in the cache, it can read it back - why would you want to change
 the permissions?

OK, I'll try to keep it short:
We want to deliver specific content via the proxy, so that the original server 
doesn't have to handle all the load. But we also want a web editor to be able 
to decide that a certain file is deleted from the cache manually, within 
seconds, and also to lock (change the permissions of) files, meaning the file 
should remain in the cache but surfers can't access it (= stop delivering the 
content). That latter part was just a thought of mine; I want to test whether 
it works or not.

In the end we want a web front end for customers, who can then easily do the 
things described above.

Thanks,
Micha


Re: [squid-users] Fwd: transparent proxy and a reverse proxy simultaneously?

2007-10-16 Thread Amos Jeffries

Indunil Jayasooriya wrote:

On 10/16/07, Amos Jeffries [EMAIL PROTECTED] wrote:

Indunil Jayasooriya wrote:

Hi,

I want to know whether it is possible for a single Squid server to serve
both as a transparent INTERCEPTING proxy and a reverse proxy
simultaneously?

Yes. I have all three modes operating here at present: intercepting,
forward, and accelerator/reverse.

Just segregate your squid.conf so you can tell the accelerator-specific
configuration apart from the general and you'll be fine.



Please see below for my CURRENT squid.conf file.


http_port 3128
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on



Ah, sorry, you will need to upgrade to a current squid.
 Mixed mode has been available since 2.6 allowed multiple ports.

If you build it yourself you will need to include whichever of the transparent 
./configure options is suitable for your OS/firewall.



#
# Reverse-Proxy accelerating example.com
#
http_port 80 accel defaultsite=example.com
acl hostedSites dstdomain example.com
cache_peer 172.16.1.115 80 0 no-query no-digest originserver name=webhost
cache_peer_access webhost allow hostedSites
http_access allow hostedSites

#
# Forward-Proxy with Transparent users
#
http_port 3128 transparent
...


Amos



Re: [squid-users] Reverse proxying http and https

2007-10-16 Thread Michael Alger
On Tue, Oct 16, 2007 at 01:55:10PM +0300, Taneli Leppä wrote:
 I'm trying to configure a reverse proxy using Squid 2.6 to
 serve pages from another server, using both http and https.
 
 I can get the configuration working so that http and https
 go to destination site's http or https port, but not both
 at the same time. 

My first question is, why do you want to do this?

My second question is, does squid actually do the SSL handshake
when you have it set up to connect to port 443 only? I've never
tried this so I have no idea if it actually works or not, but I
don't really see why it would.

 I tried adding another cache_peer with port 443, but Squid
 just complains that such cache_peer already exists.

The only thing I can think of is adding an additional IP address
to the origin server, and an additional cache_peer with that IP
for the alternate port. You can then control which method (HTTP
or HTTPS) squid uses to connect to the origin using cache_peer_access
rules.

But I really want to know why you want to do this in the first
place. Normally a reverse proxy lives close enough to the origin
that the network path is trusted, so SSL between the proxy and
origin is just needless overhead.


Re: [squid-users] Filemanager

2007-10-16 Thread Amos Jeffries

Michael Jurisch wrote:

Hi!


You probably need to explain why you want to do this.  For example - why
would you need to change the file permissions?  If squid created the
file in the cache, it can read it back - why would you want to change
the permissions?


OK, I'll try to keep it short:
We want to deliver specific content via the proxy, so that the original server 
doesn't have to handle all the load. But we also want a web editor to be able 
to decide that a certain file is deleted from the cache manually, within 
seconds, and also to lock (change the permissions of) files, meaning the file 
should remain in the cache but surfers can't access it (= stop delivering the 
content). That latter part was just a thought of mine; I want to test whether 
it works or not.

In the end we want a web front end for customers, who can then easily do the 
things described above.



Riiight. Well, the first bit is possible.

Seeing as your system knows which file is now obsolete, you can use any 
of the cache-manager mechanisms to drop individual URIs out of the 
cache. On the next request squid will fetch from the authoritative server again 
as normal.
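
For example, the PURGE method can drop a single URL (a sketch; the URL is
an example, and squid.conf must first permit PURGE, here from the local
machine only):

acl PURGE method PURGE
http_access allow PURGE localhost
http_access deny PURGE

Then, on the squid box:

squidclient -m PURGE http://www.example.com/obsolete.css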


Altering the cache directly is NOT recommended. Squid is not guaranteed 
to use the same storage layout in any two given cache_dirs, and not all squid 
filesystems map one-to-one onto OS files. If a file is altered in-cache it may 
cause serious problems.


'Locking' files like that makes no sense in HTTP. Either a URI is 
available or it's dead. If you really want an archive of old 
content you should look somewhere other than the web proxy.
You're better off locking it on the origin server's filesystem and following 
the same procedure on the proxy as for changed files.


Amos


Re: [squid-users] Reverse proxying http and https

2007-10-16 Thread Taneli Leppä

Hello,

Michael Alger wrote:

My first question is, why do you want to do this?


We have our reasons. I agree it sounds strange.


My second question is, does squid actually do the SSL handshake
when you have it set up to connect to port 443 only? I've never
tried this so I have no idea if it actually works or not, but I
don't really see why it would.


Yes, it seems to do the handshake properly. All traffic (when
accessing http://mycache.net and https://mycache.net) goes to
https://cached.mycache.net/ when I specify port 443 and ssl in 
the cache_peer options.


Would I be better off just running two instances of Squid on
different ports, one for http and one for https?

--
  Taneli Leppä | Crasman Co Ltd
  [EMAIL PROTECTED]  | http://www.crasman.fi/


Re: [squid-users] Odd Corruption

2007-10-16 Thread Henrik Nordstrom
On tis, 2007-10-16 at 10:39 +0100, Alex Smith wrote:
 Hi Henrik,
 
 We do appear to be using mod_deflate on our mediapool. Is it better to 
 exclude these files from it altogether?

It's hard to say what's best, short of having mod_deflate fixed;
whatever else you do, there are tradeoffs.

The mod_deflate bug is not at all isolated to any specific kind of
resource, and may be seen on any cacheable and mod_deflate-compressible
content.

The broken_vary_encoding directive in squid.conf is quite effective for
working around the issue, but only in your cache. Other caches handling
ETag will still break down when given incorrect input from the server
unless they have similar rules in place.
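
For reference, the workaround is along these lines (the form suggested in
the squid 2.6 documentation):

acl apache rep_header Server ^Apache
broken_vary_encoding allow apache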

Disabling mod_deflate will slow down page loading and increase bandwidth
usage.

Simply removing the ETag header from responses will, for no apparent
reason, make some clients not cache the response, slowing down page views
and increasing bandwidth usage.

Applying the current Apache patch works around the problem, but
clients can no longer perform cache validation of compressed responses,
which slows down revisits and increases bandwidth usage. And if they
fix that with a quick patch addressing why validation no longer works,
the problem is most likely back at square one again... but at least
there is some discussion going on about how to fix this and other design
problems with conditional requests in Apache.

Regards
Henrik




Re: [squid-users] Fwd: transparent proxy and a reverse proxy simultaneously?

2007-10-16 Thread Henrik Nordstrom
On tis, 2007-10-16 at 15:47 +0530, Indunil Jayasooriya wrote:

 If I upgrade to squid 2.6, I think I need the lines below, taken from the squid FAQ.
 
 
 http_port 80 accel defaultsite=www.example.com
 cache_peer ip.of.real.webserver parent 80 0 no-query originserver
 
 acl our_sites dstdomain .example.com
 http_access allow our_sites


yes.

 How do I write the never_direct rule?

never_direct allow our_sites

and you also need

cache_peer_access ip.of.real.webserver allow our_sites

to tell Squid that the web server is only for the accelerated sites, not
anything else...

 Then, how can I set up transparent intercepting in the same squid.conf?
  What about the lines below? Are they right? If wrong, please let me know.
 
 http_port 3128 transparent
 acl mynet src 192.168.101.0/24
 http_access allow mynet

Looks fine.

Regards
Henrik




[squid-users] Hosting simple files using squid

2007-10-16 Thread Chris Picton
Hi

Is it possible to host simple files using squid?

I am thinking in particular about using squid to host the proxy
autoconfiguration file, and using my dhcp server to point users to
http://192.168.1.1:3128/proxy.pac

Is this at all possible?
-+---
Chris Picton | PGP Key ID: 9D28A988 (wwwkeys.pgp.net)
  Technical Director | PGP Key Fingerprint:
 Tangent Systems | 2B46 29EA D530 79EC D9EA 3ED0 229D 6DD6 9D28 A988
011 447 8096 | 
[EMAIL PROTECTED] | http://www.tangent.co.za/keys/chrisp.asc
-+---




Re: [squid-users] ACL help: blocking non-html objects from particular domains

2007-10-16 Thread Amos Jeffries

Craig Skinner wrote:

On Mon, Oct 15, 2007 at 12:04:41AM +1300, Amos Jeffries wrote:

It should work. What does cache.log / access.log say when (3) is used?



Thanks for the help, I'll work on dstdomains next, logs below:


###


acl our_networks src 127.0.0.1/32
http_access allow our_networks
acl suspect-domains dstdom_regex /etc/squid/suspect-domains.acl
acl ok-mime-types rep_mime_type -i ^text/html$
http_access allow suspect-domains ok-mime-types
http_access deny all

The request GET http://www.example.com/ is DENIED, because it matched 'all'
TCP_DENIED/403 1375 GET http://www.example.com/ - NONE/- text/html


###


acl our_networks src 127.0.0.1/32
http_access allow our_networks
acl suspect-domains dstdom_regex /etc/squid/suspect-domains.acl
acl ok-mime-types rep_mime_type -i ^text/html$
http_access deny suspect-domains !ok-mime-types
http_access allow suspect-domains
http_access deny all

The request GET http://www.example.com/ is DENIED, because it matched 
'ok-mime-types'
TCP_DENIED/403 1375 GET http://www.example.com/ - NONE/- text/html



Really weird. So what does /etc/squid/suspect-domains.acl contain 
exactly then?



Amos


Re: [squid-users] ACL help: blocking non-html objects from particular domains

2007-10-16 Thread Amos Jeffries

Craig Skinner wrote:

On Mon, Oct 15, 2007 at 12:04:41AM +1300, Amos Jeffries wrote:

It should work. What does cache.log / access.log say when (3) is used?



Thanks for the help, I'll work on dstdomains next, logs below:


###


acl our_networks src 127.0.0.1/32
http_access allow our_networks
acl suspect-domains dstdom_regex /etc/squid/suspect-domains.acl
acl ok-mime-types rep_mime_type -i ^text/html$
http_access allow suspect-domains ok-mime-types
http_access deny all

The request GET http://www.example.com/ is DENIED, because it matched 'all'
TCP_DENIED/403 1375 GET http://www.example.com/ - NONE/- text/html


###


acl our_networks src 127.0.0.1/32
http_access allow our_networks
acl suspect-domains dstdom_regex /etc/squid/suspect-domains.acl
acl ok-mime-types rep_mime_type -i ^text/html$
http_access deny suspect-domains !ok-mime-types
http_access allow suspect-domains
http_access deny all

The request GET http://www.example.com/ is DENIED, because it matched 
'ok-mime-types'
TCP_DENIED/403 1375 GET http://www.example.com/ - NONE/- text/html



Doh! I'm just going to go aside and kick myself a bit.

  reP_mime_type is a REPLY acl.

It should be used with http_reply_access  :-P


Amos


Re: [squid-users] strange problem with proxy port

2007-10-16 Thread Amos Jeffries

Sven Frommholz - Konexxo GmbH wrote:
 
Amos Jeffries wrote:
Sounds like a firewall problem. The fact squid isn't logging a 
connection attempt makes it probable.
What error message are the clients showing when they drop the 
connection?



Amos



Windows Firewall is completely turned off on all clients. The problem occurs
on clients within the Windows domain and also on external clients, so it
can't be some sort of group policy. Regarding the error messages I can only
translate, since there are only German clients here. Firefox and IE will stop
immediately after submitting the request. FF shows a connection reset (not a
squid message, but something Firefox-internal). IE shows the IE standard
"Website cannot be displayed" message.
 
Sven
 


What about the FW on the squid box itself?

You may need to do a packet trace with tcpdump/wireshark or similar to 
see where the connections are headed because they do not appear to be 
going to squid.


The only other time I've seen FFx doing an immediate reset was when I was 
experimenting with transparency and IPv6.


Amos



Re: [squid-users] Caching problem

2007-10-16 Thread Henrik Nordstrom
On mån, 2007-10-15 at 04:20 -0400, Michael Alger wrote:

 <META HTTP-EQUIV="headername" CONTENT="header-value">
 
 is equivalent to
 
 headername: header-value
 
 but not everything will parse these as if they were actual HTTP
 headers.

This syntax should only be used for HTTP headers IF the web server you
are using knows to parse HTML META tags and add them as HTTP headers to
the response.

Other uses of HTML META HTTP-EQUIV should be limited to headers only
relevant to the browser and not intermediary caches such as Squid.

For a good primer on how HTTP caching works, see the Caching Tutorial for
Web Authors and Webmasters <http://www.mnot.net/cache_docs/> by Mark
Nottingham (aka mnot).

Regards
Henrik




Re: [squid-users] Hosting simple files using squid

2007-10-16 Thread Adrian Chadd
On Tue, Oct 16, 2007, Chris Picton wrote:
 Hi
 
 Is it possible to host simple files using squid?
 
 I am thinking in particular about using  squid to host the proxy
 autoconfiguration file, and using my dhcp server to point users to
 http://192.168.1.1:3128/proxy.pac
 
 Is this at all possible?

Yup. Create an entry in the mime database, put the file in the icons directory,
point at that. It -is- a hack and I don't recommend it, but I've done it to
embed images in squid error documents.
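
A sketch of that hack, assuming a 2.6-era layout where the icons are
published under /squid-internal-static/icons/ (check the comment header of
your own mime.conf for the exact column order):

# mime.conf entry: serve .pac files with the proxy-autoconfig MIME type
\.pac$   application/x-ns-proxy-autoconfig   anthony-unknown.gif   -   ascii

# copy proxy.pac into the icons directory, then point DHCP clients at:
#   http://192.168.1.1:3128/squid-internal-static/icons/proxy.pac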



adrian



Re: [squid-users] Reverse proxying http and https

2007-10-16 Thread Taneli Leppä

Amos Jeffries wrote:
I suggest adding defaultsite=mysite.example.net to those to help out 
users with broken software.


Thanks for the suggestion!


add name=XX to the existing cache_peer
then add:
cache_peer cached.mycache.net parent 443 0 originserver name=YY
all cache_peer_access and cache_peer_domains need to now refer to XX and 
YY instead of the peer FQDN.


Great, this seems to work! Thanks! My configuration is now:

http_port mycache.net:80 vhost vport defaultsite=cached.mycache.net
https_port mycache.net:443 vhost vport defaultsite=cached.mycache.net 
key=/etc/squid/mycache.key cert=/etc/squid/mycache.crt


cache_peer cached.mycache.net parent 80 0 originserver name=http
cache_peer cached.mycache.net parent 443 0 originserver name=https ssl 
sslflags=DONT_VERIFY_PEER


acl all src 0.0.0.0/0.0.0.0
acl valid_dst dst mycache.net
http_access allow valid_dst
http_access deny all

acl http_dst port 80
acl https_dst port 443

cache_peer_access http allow http_dst
cache_peer_access https allow https_dst
cache_peer_access http deny all
cache_peer_access https deny all

--
  Taneli Leppä | Crasman Co Ltd
  [EMAIL PROTECTED]  | http://www.crasman.fi/


Re: [squid-users] Reverse proxying http and https

2007-10-16 Thread Henrik Nordstrom
On tis, 2007-10-16 at 13:55 +0300, Taneli Leppä wrote:
 Hello,
 
 I'm trying to configure a reverse proxy using Squid 2.6 to
 serve pages from another server, using both http and https.

 http://mycache.net/page -> http://cached.mycache.net/page
 https://mycache.net/page -> https://cached.mycache.net/page

That's fine. Just another example of multiple backend servers.

 I tried adding another cache_peer with port 443, but Squid
 just complains that such cache_peer already exists.

See the name option to cache_peer if you need multiple peers with the
same destination host.

Also see cache_peer_access for selecting which requests gets sent to
which peer. You will need to use this to select that http requests go to
the http peer and https requests go to the https peer.

Regards
Henrik




Re: [squid-users] Hosting simple files using squid

2007-10-16 Thread Henrik Nordstrom
On tis, 2007-10-16 at 14:02 +0200, Chris Picton wrote:
 Hi
 
 Is it possible to host simple files using squid?
 
 I am thinking in particular about using  squid to host the proxy
 autoconfiguration file, and using my dhcp server to point users to
 http://192.168.1.1:3128/proxy.pac
 
 Is this at all possible?

Not easily. It's better to run a simple web server on another port for
serving the PAC file.
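
For instance, a throwaway sketch using the Python of that era to serve the
directory holding proxy.pac on port 8081 (path and port are examples):

cd /etc/squid/pac && python -m SimpleHTTPServer 8081
# clients then fetch http://192.168.1.1:8081/proxy.pac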

Regards
Henrik




[squid-users] SQUID 2.6 Stable 14 disk usage didn't grow HELP ME!

2007-10-16 Thread Narek Gharibyan
Hi all,
I set up a squid 2.6 transparent proxy with default settings on a P4 2000
with 512MB RAM and an 80GB HDD. I changed only:

cache_mem 128 MB

cache_dir ufs /usr/local/squid/cache 40960 16 256


Squid works normally and does caching. It takes 300MB of RAM and about 3GB of
HDD space, but it DOESN’T use more space. Squid has been running for about 15
days without any restart and it uses only 3GB; the cache size didn’t grow. Is
this normal? I want to use more HDD cache. Please advise.

 

Thank you in advance



Re: [squid-users] Reverse proxying http and https

2007-10-16 Thread Amos Jeffries

Taneli Leppä wrote:

Amos Jeffries wrote:
I suggest adding defaultsite=mysite.example.net to those to help out 
users with broken software.


Thanks for the suggestion!


add name=XX to the existing cache_peer
then add:
cache_peer cached.mycache.net parent 443 0 originserver name=YY
all cache_peer_access and cache_peer_domains need to now refer to XX 
and YY instead of the peer FQDN.


Great, this seems to work! Thanks! My configuration is now:

http_port mycache.net:80 vhost vport defaultsite=cached.mycache.net
https_port mycache.net:443 vhost vport defaultsite=cached.mycache.net 
key=/etc/squid/mycache.key cert=/etc/squid/mycache.crt


cache_peer cached.mycache.net parent 80 0 originserver name=http
cache_peer cached.mycache.net parent 443 0 originserver name=https ssl 
sslflags=DONT_VERIFY_PEER


Just one last thing: are people going to be visiting mycache.net, or 
cached.mycache.net?


http(s)_port and defaultsite=  need the public ones.

cache_peer should use a private domain name or even IP address so you 
can later change public DNS without breaking squid.


Happy caching!
Amos


Re: [squid-users] Hosting simple files using squid

2007-10-16 Thread Amos Jeffries

Chris Picton wrote:

Hi

Is it possible to host simple files using squid?

I am thinking in particular about using  squid to host the proxy
autoconfiguration file, and using my dhcp server to point users to
http://192.168.1.1:3128/proxy.pac

Is this at all possible?


Not easily yet.
 I've tried this myself and the few solutions seem very prone to breaking.

Amos


Re: [squid-users] Reverse proxying http and https

2007-10-16 Thread Taneli Leppä

Amos Jeffries wrote:
Just one last: are people going to be visiting mycache.net? or 
cached.mycache.net?


They're going to be visiting mycache.net, so I guess it's
correct.

cache_peer should use a private domain name or even IP address so you 
can later change public DNS without breaking squid.


Yes, I'm actually using IP addresses :-) Thanks once more!

--
  Taneli Leppä | Crasman Co Ltd
  [EMAIL PROTECTED]  | http://www.crasman.fi/


RE: [squid-users] Squid on DualxQuad Core 8GB Rams - Optimization - Performance - Large Scale - IP Spoofing

2007-10-16 Thread Paul Cocker
For the ignorant among us, can you clarify the meaning of "devices"?


Paul Cocker
IT Systems Administrator
IT Security Officer

-Original Message-
From: Adrian Chadd [mailto:[EMAIL PROTECTED] 
Sent: 15 October 2007 10:28
To: Tek Bahadur Limbu
Cc: Haytham KHOUJA (devnull); squid-users@squid-cache.org
Subject: Re: [squid-users] Squid on DualxQuad Core 8GB Rams -
Optimization - Performance - Large Scale - IP Spoofing

On Mon, Oct 15, 2007, Tek Bahadur Limbu wrote:

 I've read almost every single thread on Optimizing Squid and Linux 
 and want to share my setup with you.
 I do have some questions, clarifications and bugs but overall the 
 performance is pretty impressive. (Yes, much better than the NetApps)
 
 Great news to hear that Squid is beating NetCache!

It's not. Modern devices beat squid hands down.




Adrian

--
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid
Support -
- $25/pm entry-level bandwidth-capped VPSes available in WA -







Re: [squid-users] Squid on DualxQuad Core 8GB Rams - Optimization - Performance - Large Scale - IP Spoofing

2007-10-16 Thread Adrian Chadd
On Tue, Oct 16, 2007, Paul Cocker wrote:
 For the ignorant among us, can you clarify the meaning of "devices"?

Bluecoat. Higher end Cisco ACE appliances/blades. In the accelerator space,
stuff like what became the Juniper DX can SLB and cache about double what
squid can in memory.

Just so you know, the Cisco Cache Engine stuff from about 8 years ago
still beats Squid for the most part. I remember seeing numbers of
~ 2400 req/sec, to/from disk where appropriate, versus Squid's current
maximum throughput of about 1000. And this was done on Cisco's -then-
hardware - I think that test was what, dual PIII 800's or something?
They were certainly pulling about 4x the squid throughput for the same
CPU in earlier polygraphs.

I keep saying - all this stuff is documented and well-understood.
How to make fast network applications - well understood. How to have
network apps scale well under multiple CPUs - well understood, even better
by the Windows people. Cache filesystems - definitely well understood.

Time spent by people to implement this stuff on Squid - almost none.

(Current hardware can and will saturate gigabit ethernet with HTTP traffic
on two modern CPU cores. Maybe one CPU core for low transaction rates.
Stuff is pretty fantastically quick.)

Who wants Squid to do that?



Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level bandwidth-capped VPSes available in WA -


[squid-users] TCP_MISS/000 in Access.log

2007-10-16 Thread Edwin Malave Jr.
I have a number of users who are trying to access
http://www.todaysmilitary.com. However, they are not able to even pull up
the website in their browsers.  I tail'ed the squid access log and noticed
TCP_MISS/000 entries.  I know that TCP_MISS usually means that the client
aborted the GET request before squid could return the data, or that the
website might be unavailable.  But in this case neither is true.  ECN is off
and the site is accessible from a non-proxy computer.  squidclient is not
able to access the URL either.  I am not sure where to go from here. Any
help would be appreciated.

I am running SQUID version 2.6.STABLE12 on Ubuntu Linux 6.10 (Kernel version
2.6.17-11-server)

Snippet from access.log:

1192536227.969 120004 127.0.0.1 TCP_MISS/000 0 GET
http://www.todaysmilitary.com/app/tm/ - DIRECT/www.todaysmilitary.com -
1192536248.500 120001 127.0.0.1 TCP_MISS/000 0 GET
http://www.todaysmilitary.com/ - DIRECT/www.todaysmilitary.com -
1192536312.471 119986 127.0.0.1 TCP_MISS/000 0 GET
http://www.todaysmilitary.com/ - DIRECT/www.todaysmilitary.com -
1192538865.918   9931 127.0.0.1 TCP_MISS/000 0 GET
http://www.todaysmilitary.com - DIRECT/www.todaysmilitary.com -
1192538868.723   1761 127.0.0.1 TCP_MISS/000 0 GET
http://www.todaysmilitary.com - DIRECT/www.todaysmilitary.com -
1192539023.034 120007 127.0.0.1 TCP_MISS/000 0 GET
http://www.todaysmilitary.com - DIRECT/72.3.215.130 -
1192539274.487  10808 127.0.0.1 TCP_MISS/000 0 GET
http://www.todaysmilitary.com - DIRECT/www.todaysmilitary.com -
1192540150.237 119998 127.0.0.1 TCP_MISS/000 0 GET
http://www.todaysmilitary.com/ - DIRECT/www.todaysmilitary.com -
1192540164.445 120003 127.0.0.1 TCP_MISS/000 0 GET
http://www.todaysmilitary.com/ - DIRECT/www.todaysmilitary.com -
1192540192.787 120015 127.0.0.1 TCP_MISS/000 0 GET
http://www.todaysmilitary.com - DIRECT/www.todaysmilitary.com -
1192540828.684 119979 127.0.0.1 TCP_MISS/000 0 GET
http://www.todaysmilitary.com/ - DIRECT/www.todaysmilitary.com -
1192540853.336 119988 127.0.0.1 TCP_MISS/000 0 GET
http://www.todaysmilitary.com/ - DIRECT/www.todaysmilitary.com -
1192540934.683 119986 127.0.0.1 TCP_MISS/000 0 GET
http://www.todaysmilitary.com/ - DIRECT/www.todaysmilitary.com -
1192540986.043 119997 127.0.0.1 TCP_MISS/000 0 GET
http://www.todaysmilitary.com/ - DIRECT/72.3.215.130 -
1192541004.694 119995 127.0.0.1 TCP_MISS/000 0 GET
http://www.todaysmilitary.com/ - DIRECT/www.todaysmilitary.com -
1192541015.153 12 127.0.0.1 TCP_MISS/000 0 GET
http://www.todaysmilitary.com/ - DIRECT/www.todaysmilitary.com -

Thank you,

Edwin



AW: [squid-users] force basic NTLM-auth for certain clients/urls

2007-10-16 Thread Markus.Rietzler
Thanks for that hint - it worked as a fix.

I have added this to my squid.conf:

acl javaNtlmFix browser -i java
header_access Proxy-Authenticate deny javaNtlmFix
header_replace Proxy-Authenticate Basic realm="Internet Access"

Now any Java client (Java Web Start, Java or applets in a browser) will only see 
the Basic auth scheme.
A username/password dialog pops up and I have to enter my credentials.

Any other client (Firefox, IE) still sees both the NTLM and Basic schemes and uses NTLM 
challenge-response to authenticate...

The little drawback is that there is that nasty little dialog, but the connection 
via the proxy is working...

Thanks,

Markus

-Original Message-
From: Chris Robertson [mailto:[EMAIL PROTECTED] 
Sent: Saturday, 13 October 2007 02:10
To: squid-users@squid-cache.org
Subject: Re: [squid-users] force basic NTLM-auth for certain 
clients/urls

[EMAIL PROTECTED] wrote:
 We are running squid 2.6STABLE16 with NTLM auth. We use winbind to
 support challenge-response auth so that there is no user interaction or
 password dialog popup.

 Is it possible to force Basic auth - so that no NTLM auth is used or
 tried first - for certain clients (e.g. acl javavm browser java) or URLs?

 Proxy auth uses settings from auth_param, but you can't define which
 auth scheme is used, right?
   

Right.


 markus
   

Perhaps it would be possible to use header_access Proxy-Authenticate 
deny java and header_replace in a creative fashion to not tell the 
java browser that NTLM is an authentication option.  Given sufficient 
free time, it would certainly be fun to tinker at...

http://www.squid-cache.org/Versions/v2/2.6/cfgman/header_access.html
http://www.squid-cache.org/Versions/v2/2.6/cfgman/header_replace.html

Chris



[squid-users] Need help with Squid core dump on solaris

2007-10-16 Thread Hasibul Haque
Hi All,
I am running a squid reverse proxy on Solaris 10.
It runs smoothly most of the time; however, I have had 2 core dumps in the
last 3 months and had to restart squid manually. I have the core file but am
not sure how to analyze it.
Any help would be appreciated.
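
A standard first step with a core file (a sketch; adjust the binary path to
your install, and a squid built with debug symbols gives far more useful
output) is to pull a backtrace with gdb:

gdb /usr/local/squid/sbin/squid /path/to/core
# then at the (gdb) prompt:
backtrace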

Thanks,
Hasib


Re: [squid-users] Squid on FC6, connections sitting around too long

2007-10-16 Thread Tory M Blue
On 10/15/07, Henrik Nordstrom [EMAIL PROTECTED] wrote:

 Probably you have a TCP connection based load balancer instead of one
 that balances on actual traffic, and the Netcaches have persistent
 connections disabled..

 See the client_persistent_connections and persistent_request_timeout
 directives.

 Regards
 Henrik


For some reason when I initially configured these I thought
persistence was off by default, but looking at the config guides, I
see it's defaulted to on.

Playing with persistence on/off and persistence timeout is helping
things tremendously.

Squid is showing more output/input than the netcaches now. Now I'm
working on finding the right combination to keep open connections to a
minimum while continuing max throughput.

Client persistence in a reverse proxy environment makes no sense, and
since my server environment is also load balanced, I'm not sure it makes
much sense there either (still testing), but the persistence timeout
definitely plays a big role.

Thanks again
Tory


[squid-users] ACL Question - (urlpath_regex OR url_regex)

2007-10-16 Thread Vadim Pushkin

Hello All;

I have a rule which blocks the use of CONNECT when the user requests an 
IP address instead of an FQDN; this works great!


I am able to specify allowed IP addresses by adding them into 
/squid/etc/allow-ip-addresses.


I now need to add entire subnets, or parts of a network, as well, which 
I am unable to figure out.


I have within my squid.conf, the following:

acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 22 # ssh

acl SSL_ports port 443

acl CONNECT method CONNECT

# Should I use dstdomain versus something else here?
acl allowed-CONNECT dstdomain /squid/etc/allow-ip-addresses

# When I use urlpath_regex, it allows *everything* through.
acl numeric_IPs url_regex ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny CONNECT numeric_IPs !allowed-CONNECT

Please help,

.vp




[squid-users] web-based email always reloads on SQUID

2007-10-16 Thread Beavis
Has anyone here encountered an issue with how squid handles web-based
email (Gmail, Yahoo, Hotmail)? I'm running squid 2.6-STABLE16 on an
OpenBSD box. Every time I try to log into Gmail, Yahoo or any other
web-based email service, all I see is a page that reloads over and
over without ever really getting into the site.

below are my logs:

 1192557563.121   1825 172.16.100.50 TCP_MISS/200 4910 CONNECT
www.google.com:443 - DIRECT/209.85.165.104 -
 1192557563.230103 172.16.100.50 TCP_MISS/302 867 GET
http://mail.google.com/mail/? - DIRECT/209.85.133.18 text/html
 1192557563.592360 172.16.100.50 TCP_MISS/200 2385 GET
http://mail.google.com/mail/? - DIRECT/209.85.133.83 text/html
 1192557563.826149 172.16.100.50 TCP_MISS/200 352 GET
http://mail.google.com/mail/? - DIRECT/209.85.133.19 text/html
 1192557563.846168 172.16.100.50 TCP_MISS/200 352 GET
http://mail.google.com/mail/? - DIRECT/209.85.133.83 text/html


any help would be awesomely appreciated.


thanks,
-pf


Re: [squid-users] Squid on FC6, connections sitting around too long

2007-10-16 Thread Henrik Nordstrom
On tis, 2007-10-16 at 09:42 -0700, Tory M Blue wrote:

 Client persistence in a reverse proxy environment makes no sense

I disagree. The TCP setup cost is a very large portion of the total page
load time, especially if you have users far away.

But it does place a different workload on the load balancers, and mixing
servers with different persistent-connection settings in the same farm is
hard to balance right.


Regards
Henrik




Re: [squid-users] anonymous proxying sites

2007-10-16 Thread Chuck Kollars
 I was wondering if anyone knew a way to block access
 to anonymous proxying sites. Some of our users have
 worked out how to bypass the denied.list and as a 
 result we have no logging as to their surfing
  activity

Yep, proxies are a _huge_ problem. There are thousands
of them: my personal list exceeded 10,000 a few days
ago, and my blacklist subscription also lists over
10,000. And they keep changing their name every few
days. 

Shady owners can make a small but positive amount of
money from every proxy (apparently through showing
ads). So every Tom Dick and Harry creates one or two
and adds their bit to the giant shell game of guess
where the proxies are. 

And you're only seeing most of it. There are certain
technical tricks that will make a proxy completely
invisible so you won't even realize you're not seeing
the surfing activity history. 

There are even services that will email a user the
new proxy of the day every morning. A user opens up
their email, plugs the address into their web browser,
and voila you've been painted as the fool once again. 

Depending on how serious you are about fighting
proxies, you'll need a good AUP (Acceptable Use
Policy), administrative backing, a lot of aspirin, an
hour every day, script writing skills, and a
combination of Squid and DansGuardian. The combination
of Squid and DansGuardian is the only technical
approach I know of that works very well - DansGuardian
pre-scans the _content_ of every site and ultimate
site, so it will block a lot of proxy use even though
you haven't yet configured the exact current name of
the proxy. 

The list on http://proxy.org is the most complete one
I know of. If you can figure out a way to
automatically suck up their entire list _every_day_,
remove duplicates, and add all those to your banned
list, you can stop _much_ (but not anywhere near
_all_) of the illicit activity. 

good luck!


-Chuck Kollars


  





Re: [squid-users] ACL Question - (urlpath_regex OR url_regex)

2007-10-16 Thread Sven Frommholz - Konexxo GmbH
 

Vadim Pushkin wrote 
 Hello All;
 
 I have a rule which blocks the use of CONNECT when the user requests
 an IP address instead of an FQDN; this works great!
 
 I am able to specify allowed IP addresses by adding them into 
 /squid/etc/allow-ip-addresses.
 
 I now need to add entire subnets, or parts of a network, as well,
 which I am unable to figure out.
 
 I have within my squid.conf, the following:
 
 acl Safe_ports port 80 # http
 acl Safe_ports port 21 # ftp
 acl Safe_ports port 22 # ssh
 
 acl SSL_ports port 443
 
 acl CONNECT method CONNECT
 
 # Should I use dstdomain versus something else here?
 acl allowed-CONNECT dstdomain /squid/etc/allow-ip-addresses
 
 # When I use urlpath_regex, it allows *everything* through.
 acl numeric_IPs url_regex ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+
 
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 http_access deny CONNECT numeric_IPs !allowed-CONNECT
 
 Please help,
 
 .vp

Squid will not see URLs at all inside SSL traffic, so url_regex will not
work there.
Try acl allowed-CONNECT dst 192.168.0.0/24 for subnets.
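
A sketch of how that slots into the earlier rules (the subnets are examples):

# permit CONNECT to whole subnets requested as raw IPs
acl allowed-CONNECT dst 192.168.0.0/24 10.10.0.0/16
http_access deny CONNECT numeric_IPs !allowed-CONNECT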

Sven


Re: [squid-users] web-based email always reloads on SQUID

2007-10-16 Thread Beavis
Thanks for the reply, Alexandre ... I found what the issue was:
there's something funky with the header_access options that a
colleague of mine put into the config file. He basically wanted squid
to act as an elite proxy and not give away the X-Forwarded-For and Via
headers. He did succeed, but it broke Gmail, Yahoo and the rest of the
webmails. I was able to get it to work by removing the header_access
lines and just configuring the following options:

via off
forwarded_for off



regards,
-pf

On 10/16/07, Alexandre Correa [EMAIL PROTECTED] wrote:
 Gmail uses HTTPS... I think Yahoo does too!

 On 10/16/07, Beavis [EMAIL PROTECTED] wrote:
  Has anyone here encountered an issue with how squid handles web-based
  email (Gmail, Yahoo, Hotmail)? I'm running squid 2.6-STABLE16 on an
  OpenBSD box. Every time I try to log into Gmail, Yahoo or any other
  web-based email service, all I see is a page that reloads over and
  over without ever really getting into the site.
 
  below are my logs:
 
   1192557563.121   1825 172.16.100.50 TCP_MISS/200 4910 CONNECT
  www.google.com:443 - DIRECT/209.85.165.104 -
   1192557563.230103 172.16.100.50 TCP_MISS/302 867 GET
  http://mail.google.com/mail/? - DIRECT/209.85.133.18 text/html
   1192557563.592360 172.16.100.50 TCP_MISS/200 2385 GET
  http://mail.google.com/mail/? - DIRECT/209.85.133.83 text/html
   1192557563.826149 172.16.100.50 TCP_MISS/200 352 GET
  http://mail.google.com/mail/? - DIRECT/209.85.133.19 text/html
   1192557563.846168 172.16.100.50 TCP_MISS/200 352 GET
  http://mail.google.com/mail/? - DIRECT/209.85.133.83 text/html
 
 
  any help would be awesomely appreciated.
 
 
  thanks,
  -pf
 


 --

 Sds.
 Alexandre J. Correa
 Onda Internet / OPinguim.net
 http://www.ondainternet.com.br
 http://www.opinguim.net



Re: [squid-users] ACL help: blocking non-html objects from particular domains

2007-10-16 Thread Craig Skinner
On Wed, Oct 17, 2007 at 01:12:41AM +1300, Amos Jeffries wrote:
 Doh!. I'm just going to go aside and kick myself a bit.
 
   reP_mime_types is a REPLY acl.
 
 it should be used with http_reply_access  :-P

Beautie mate! Stupid of me!

acl our_networks src 127.0.0.1/32
http_access allow our_networks
acl suspect-domains dstdom_regex /etc/squid/suspect-domains.acl
http_access allow suspect-domains
http_access deny all
acl ok-mime-types rep_mime_type -i text/html
http_reply_access allow ok-mime-types
http_reply_access deny all

Nice one.


[squid-users] Many to Many Reverse Proxy Configuration for squid 2.6.16

2007-10-16 Thread Warwick Shaw
We have been using squid in the following configuration since the end of the year 2000.

Outward facing public squid servers serve content for multiple hosts.
Behind the public servers is a middle layer of squid servers that hide
the real origin servers.
The real origin servers are various content management systems and
flat file Apache web servers.

The middle layer is using squid 2.5.11 along with jesred.
The many to many relationship of host name to origin server is just
two lines in the squid.conf
httpd_accel_host virtual
httpd_accel_uses_host_header on

A non-public DNS takes care of host names coming and going. When we
change the rewrite rules we just run squid -k reconfigure.


Is there a configuration for squid 2.6.16 which avoids having to
maintain a list of source and destination host names in squid.conf?


Thank you,

Warwick


Re: [squid-users] TCP_MISS/000 in Access.log

2007-10-16 Thread Amos Jeffries
 I have a number of users who are trying to access
 http://www.todaysmilitary.com. However, they are not able to even pull up
 the website in their browsers.  I tail'ed the squid access log and noticed
 TCP_MISS/000 entries.  I know that TCP_MISS usually means that the client
 aborted the GET request before squid could return the data, or that the

No: TCP_MISS just means the URL had no stored object in the cache.
The 000 is the bit that usually means aborted.

 website might be unavailable.  But in this case neither is true.  ECN is
 off and the site is accessible from a non-proxy computer.  squidclient is
 not able to access the URL either.  I am not sure where to go from here.
 Any help would be appreciated.

Check this out:

 http://www.webservertalk.com/archive254-2004-4-205827.html

- try with half_closed_clients off.
- try an upgrade
- try checking cache.log to see if there are any reasons given


Amos




Re: [squid-users] Many to Many Reverse Proxy Configuration for squid 2.6.16

2007-10-16 Thread Amos Jeffries
 We have been using squid in the following configuration since the end year
 2000.

 Outward facing public squid servers serve content for multiple hosts.
 Behind the public servers is a middle layer of squid servers that hide
 the real origin servers.
 The real origin servers are various content management systems and
 flat file Apache web servers.

 The middle layer is using squid 2.5.11 along with jesred.
 The many to many relationship of host name to origin server is just
 two lines in the squid.conf
 httpd_accel_host virtual
 httpd_accel_uses_host_header on

 A non public dns takes care of host names coming and going. When we
 change the rewrite rules we just run squid -k reconfigure.


 Is there a configuration for squid 2.6.16 which avoids having to
 maintain a list of source and destination host names in squid.conf?


Yes, it's possible. I'm doing it in real time.

Amos




Re: [squid-users] anonymous proxying sites

2007-10-16 Thread Adrian Chadd
On Tue, Oct 16, 2007, Chuck Kollars wrote:

 The list on http://proxy.org is the most complete one
 I know of. If you can figure out a way to
 automatically suck up their entire list _every_day_,
 remove duplicates, and add all those to your banned
 list, you can stop _much_ (but not anywhere near
 _all_) of the illicit activity. 

If people would like to see these sorts of features included
in Squid then please let us know. If it can be done for free then it
will be; but in reality things like this generally cost money
to implement. It's why companies like Ironport do so well.
You couldn't do what Ironport does with their mail filtering for
free; and Ironport is making/has made a web appliance which
will probably do this.

The trouble for Squid at the moment, of course, is non-port-80 traffic.



Adrian



RE: [squid-users] anonymous proxying sites

2007-10-16 Thread SSCR Internet Admin
For my part, I used the squidGuard feature of using expressions; I just
added the word proxy. It helps a bit...

-Original Message-
From: Adrian Chadd [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, October 17, 2007 8:26 AM
To: Chuck Kollars
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] anonymous proxying sites

On Tue, Oct 16, 2007, Chuck Kollars wrote:

 The list on http://proxy.org is the most complete one
 I know of. If you can figure out a way to
 automatically suck up their entire list _every_day_,
 remove duplicates, and add all those to your banned
 list, you can stop _much_ (but not anywhere near
 _all_) of the illicit activity. 

If people would like to see these sorts of features included
in Squid then please let us know. If it can be done for free then it
will be; but in reality things like this generally cost money
to implement. It's why companies like Ironport do so well.
You couldn't do what Ironport does with their mail filtering for
free; and Ironport is making/has made a web appliance which
will probably do this.

The trouble for Squid at the moment, of course, is non-port-80 traffic.



Adrian





Re: [squid-users] Squid on DualxQuad Core 8GB Rams - Optimization - Performance - Large Scale - IP Spoofing

2007-10-16 Thread Michel Santos
Adrian Chadd wrote in the last message:
 On Tue, Oct 16, 2007, Paul Cocker wrote:
 For the ignorant among us, can you clarify the meaning of "devices"?

 Bluecoat. Higher end Cisco ACE appliances/blades. In the accelerator
 space,
 stuff like what became the Juniper DX can SLB and cache about double what
 squid can in memory.


Oh really? How much would that be? Do you have a number, or is it just talk?


 Just so you know, the Cisco Cache Engine stuff from about 8 years ago
 still beats Squid for the most part. I remember seeing numbers of
 ~ 2400 req/sec, to/from disk where appropriate, versus Squid's current
 maximum throughput of about 1000. And this was done on Cisco's -then-
 hardware - I think that test was what, dual PIII 800's or something?
 They were certainly pulling about 4x the squid throughput for the same
 CPU in earlier polygraphs.



I am not so sure that this 2400 req/sec wasn't really per minute, and also
wasn't just incoming requests rather than hits served from cache ...

I'll pay you a beer or even two if you show me a PIII-class device which can
satisfy 2400 req/sec from disk.



 I keep saying - all this stuff is documented and well-understood.
 How to make fast network applications - well understood. How to have
 network apps scale well under multiple CPUs - well understood, even better
 by the Windows people. Cache filesystems - definitely well understood.



Well, it is not only well understood but also well known that a Ferrari runs
faster than the famous John Doe-mobile - but the price issue is also very
well known, and even if that is well documented it makes no sense at all to
compare the two.



Squid does a pretty good job, not only in getting high hit rates but
especially considering the price.

Unfortunately squid is not a multi-threaded application, which by the way
does not prevent you from running several instances as a workaround.

Unfortunately again, diskd is kind of orphaned, but it certainly is
_the_kind_of_choice_ for SMP machines by design, and still more so when
running several diskd processes per squid process.


Again unfortunately, people are told that squid is not SMP capable and that
there is no advantage in using SMP machines for it, so they configure
their machines to death on single dies with a meg of cache or two and get
nothing out of it. So where does it end??? Easy answer: squid ends up as a
proxy for NATting corporate networks or poor ISPs which do not have
address space - *BUT NOT* as a caching machine anymore.

But it is fortunately true that caching performance is in the first place a
matter of fast hardware.

So that you can see it, and not only read the usual blah-blah, I attach a
well-known MRTG graph of the hit rate of a dual Opteron sitting in front of a
4MB/s ISP POP.

And I get considerably more hits than you quoted at the beginning on larger
POPs - so I do not know where you get your 1000 req/sec limit for squid
from ... must be from your P-III goody ;)


But then, in the end, the actual squid marketing is pretty bad. Nobody talks
about caching; everyone talks about proxying, authenticating and ACLing. Even
the makers are not defending caching at all, and apparently are not friends
of running squid as a multi-instance application, because the documentation
about that is very poor and sad.


Probably that is an answer to actual demands, and so they go with the crowd;
bandwidth is very cheap almost everywhere, so why should people spend their
brains and bucks on caching techniques? Unfortunately my bandwidth is
expensive, and I am not interested in proxying or any other feature, so
perhaps my situation and position are different from elsewhere.

Michel

...





Tecnologia Internet Matik http://info.matik.com.br
Wireless systems for the broadband ISP
Hosting and personalized email - and of course, in Brazil.
attachment: squid0-hit-day.png