Re: [squid-users] Block file upload

2008-04-05 Thread Amos Jeffries

[EMAIL PROTECTED] wrote:

Is it possible to stop people from uploading files using Squid, i.e. is there
some way to do an outbound MIME type ACL?

I have added these two lines to my squid.conf :

acl fileupload req_mime_type -i ^multipart/form-data$
http_access deny fileupload


That is the correct ACL.
Which version of Squid is this: 2.6, 2.7, or 3.0?

Are you certain the uploads are going out with that MIME type, and not
some other type depending on the kind of file uploaded?
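
One way to double-check (a sketch; remember to turn it off again, since it
makes the log very verbose) is to enable header logging in squid.conf and
watch access.log while doing a test upload:

log_mime_hdrs on

The logged request headers will show the Content-Type the browser actually
sends, typically "multipart/form-data; boundary=...". If the boundary
parameter is part of what req_mime_type matches against, the trailing $ in
the regex would prevent a match, so testing the ACL without the $ anchor is
worthwhile.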


Amos



Here is my complete conf :

#===
cache_mem 32 MB
cache_mgr [EMAIL PROTECTED]
cache_dir ufs /var/spool/squid 2000 16 256
cache_access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log /var/log/squid/store.log
visible_hostname gateway
cache_effective_user squid
cache_effective_group squid

http_port 3128 transparent
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off

refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320

acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563
acl CONNECT method CONNECT

acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http

#  MIME Filter for File Upload ==
acl fileupload req_mime_type -i ^multipart/form-data$

http_access deny to_localhost
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost

# === Block File Upload 
http_access deny fileupload all
http_reply_access deny fileupload all

coredump_dir /var/spool/squid

#===

But it does not work! Has anyone used this ACL before who can share a
sample from their conf file?
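
For comparison, here is a minimal sketch of an ordering that evaluates the
upload deny before any allow rule (the lan ACL and its 192.168.0.0/24 range
are placeholders; http_access is checked top to bottom and the first
matching rule wins, so a deny placed after a matching allow never fires):

acl lan src 192.168.0.0/24
acl fileupload req_mime_type -i ^multipart/form-data
http_access deny fileupload
http_access allow localhost
http_access allow lan
http_access deny all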



--
Please use Squid 2.6.STABLE19 or 3.0.STABLE4


RE: [squid-users] client ip's

2008-04-05 Thread Jorge Bastos
People,

I updated to the latest STABLE4 on Debian, but this still happens.
What more can I do?

Jorge   

> -Original Message-
> From: Jorge Bastos [mailto:[EMAIL PROTECTED]
> Sent: quinta-feira, 3 de Abril de 2008 9:56
> To: 'Amos Jeffries'
> Cc: 'Henrik Nordstrom'; squid-users@squid-cache.org
> Subject: RE: [squid-users] client ip's
> 
> Hmm, the last ones on Debian.
> They were 3.0 PRE-X, but I don't remember the number.
> 
> 
> 
> 
> > -Original Message-
> > From: Amos Jeffries [mailto:[EMAIL PROTECTED]
> > Sent: quinta-feira, 3 de Abril de 2008 6:08
> > To: Jorge Bastos
> > Cc: 'Henrik Nordstrom'; squid-users@squid-cache.org
> > Subject: Re: [squid-users] client ip's
> >
> > Jorge Bastos wrote:
> > > The rule I use to redirect traffic from 80 to 8080 is:
> > > I must remember, this was working before 3.0 stable1 or stable2 (not
> > > using stable2), I just saw this was happening now.
> >
> > What version did you upgrade from?
> >
> > >
> > > iptables -t nat -A PREROUTING -s 192.168.1.0/24 -p tcp --dport 80 -j DNAT
> > > --to-destination 192.168.1.1:8080
> > >
> >
> > If squid is running on this same box I would recommend the REDIRECT
> > target instead of DNAT. It's less work for the kernel.
> >
> > The other possible issue is that you have your redirection rule at the
> > start of the NAT tables. The matching rule to allow squid traffic out is
> > near the end.
> >
> > Even if you keep DNAT, they should be in this order:
> >
> > # allow squid traffic out okay.
> > iptables -t nat -A PREROUTING -s 192.168.1.1 -p tcp --dport 80 -j ACCEPT
> > # redirect all other web traffic into squid.
> > iptables -t nat -A PREROUTING -s 192.168.1.0/24 -p tcp --dport 80 -j REDIRECT --to-ports 8080
> >
> > >
> > > cisne:~# iptables-save -t nat
> > > # Generated by iptables-save v1.4.0 on Wed Apr  2 17:12:25 2008
> > > *nat
> > > :PREROUTING ACCEPT [35:1650]
> > > :POSTROUTING ACCEPT [10307:1367320]
> > > :OUTPUT ACCEPT [66427:4357431]
> > > -A PREROUTING -d 193.164.158.105/32 -j DROP
> > > -A PREROUTING -i eth1 -p tcp -m tcp --dport 5111 -j DNAT --to-destination 192.168.1.11:5900
> > > -A PREROUTING -i eth1 -p tcp -m tcp --dport 5901 -j DNAT --to-destination 192.168.1.2:5900
> > > -A PREROUTING -i eth1 -p tcp -m tcp --dport 5969 -j DNAT --to-destination 192.168.1.3:5900
> > > -A PREROUTING -i eth1 -p tcp -m tcp --dport 3389 -j DNAT --to-destination 192.168.1.204:3389
> > > -A PREROUTING -s 192.168.1.0/24 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.1.1:8080
> > > -A PREROUTING -p gre -j ACCEPT
> > > -A PREROUTING -p icmp -j ACCEPT
> > > -A PREROUTING -p ah -j ACCEPT
> > > -A PREROUTING -p udp -m udp --dport 53 -j ACCEPT
> > > -A PREROUTING -p udp -m udp --dport 500 -j ACCEPT
> > > -A PREROUTING -p udp -m udp --dport 1723 -j ACCEPT
> > > -A PREROUTING -p udp -j ACCEPT
> > > -A PREROUTING -p tcp -m tcp --dport 20 -j ACCEPT
> > > -A PREROUTING -p tcp -m tcp --dport 21 -j ACCEPT
> > > -A PREROUTING -p tcp -m tcp --dport 22 -j ACCEPT
> > > -A PREROUTING -p tcp -m tcp --dport 23 -j ACCEPT
> > > -A PREROUTING -p tcp -m tcp --dport 25 -j ACCEPT
> > > -A PREROUTING -p tcp -m tcp --dport 43 -j ACCEPT
> > > -A PREROUTING -p tcp -m tcp --dport 79 -j ACCEPT
> > > -A PREROUTING -p tcp -m tcp --dport 123 -j ACCEPT
> > > -A PREROUTING -p tcp -m tcp --dport 143 -j ACCEPT
> > > -A PREROUTING -p tcp -m tcp --dport 443 -j ACCEPT
> > > -A PREROUTING -d 80.172.172.34/32 -p tcp -m tcp --dport 444 -j ACCEPT
> > > -A PREROUTING -p tcp -m tcp --dport 1723 -j ACCEPT
> > > -A PREROUTING -p tcp -m tcp --dport 1863 -j ACCEPT
> > > -A PREROUTING -p tcp -m tcp --dport 3306 -j ACCEPT
> > > -A PREROUTING -p tcp -m tcp --dport 3389 -j ACCEPT
> > > -A PREROUTING -d 80.172.172.34/32 -p tcp -m tcp --dport 5000 -j ACCEPT
> > > -A PREROUTING -p tcp -m tcp --dport 5190 -j ACCEPT
> > > -A PREROUTING -p tcp -m tcp --dport 5900 -j ACCEPT
> > > -A PREROUTING -p tcp -m tcp --dport 5901 -j ACCEPT
> > > -A PREROUTING -p tcp -m tcp --dport 6667 -j ACCEPT
> > > -A PREROUTING -s 192.168.1.0/24 -d 192.168.1.206/32 -p tcp -m tcp --dport  -j ACCEPT
> > > -A PREROUTING -d 192.168.1.1/32 -p tcp -m tcp --dport 8080 -j ACCEPT
> > > -A PREROUTING -i eth1 -p tcp -m tcp --dport 30106 -j DNAT --to-destination 192.168.1.224:30106
> > > -A PREROUTING -s 192.168.1.0/24 -p tcp -m tcp --dport 62500:63500 --tcp-flags FIN,SYN,RST,ACK SYN -j ACCEPT
> > > -A PREROUTING -j DROP
> > > -A POSTROUTING -o eth1 -j MASQUERADE
> > > COMMIT
> > > # Completed on Wed Apr  2 17:12:26 2008
> > >
> > > -Original Message-
> > > From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]
> > > Sent: quarta-feira, 2 de Abril de 2008 11:42
> > > To: Jorge Bastos
> > > Cc: squid-users@squid-cache.org
> > > Subject: RE: [squid-users] client ip's
> > >
> > > What do your iptables NAT rules look like?
> > >
> > > iptables-save -t nat
> > >
> > > on

RE: [squid-users] Unable to access a website through Suse/Squid.

2008-04-05 Thread Terry Dobbs
The internet line is DSL, and does use a username/password (PPPoE).
However, on the actual DSL router (provided by the ISP) I don't see any
MTU options.

I will have to look into iptables. I can add static routes via the
interface card which are permanent, however doing it this way doesn't
give me any options for mss, mtu, etc. All I can enter this way is
Source, Destination, Gateway.

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Friday, April 04, 2008 6:19 PM
To: Terry Dobbs
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] Unable to access a website through
Suse/Squid.

fre 2008-04-04 klockan 13:56 -0400 skrev Terry Dobbs:
> Thanks so much, the advmss worked like a charm. How do I make it so this
> route stays there? When I restart networking it seems to vanish.

Some things first: you should figure out whether the MTU limitation is
local or remote. As it's mostly you who is having issues, I would suspect
it's local. In that case you should set a lower MSS on the default route
to make TCP/IP work better.

How are you connected to the Internet? ADSL with PPPoE, or some other
tunneling method which has a lower MTU than the default 1500?

How to make the route persistent is quite distribution-dependent, and I
am not very familiar with SuSE. But on the good side you can use iptables
to achieve the same thing, or maybe rules in your router.
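
One fairly common alternative for PPPoE links (a sketch; verify the chain
against your own ruleset, and keep it in whatever script loads your
firewall rules so it survives a restart) is to clamp the TCP MSS to the
path MTU on forwarded traffic:

iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu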

Regards
Henrik



RE: [squid-users] client ip's

2008-04-05 Thread Henrik Nordstrom
lör 2008-04-05 klockan 14:24 +0100 skrev Jorge Bastos:

> I updated to the latest STABLE4 on Debian, but this still happens.
> What more can I do?

Good question.

One thing you can try is to downgrade to Squid-2.6. If that shows the
same symptoms the problem is not within Squid but most likely in your
firewall ruleset or something else relevant to how the connections end
up at your Squid.

Regards
Henrik



RE: [squid-users] Unable to access a website through Suse/Squid.

2008-04-05 Thread Henrik Nordstrom
lör 2008-04-05 klockan 10:11 -0400 skrev Terry Dobbs:
> The internet line is DSL, and does use a username/password (PPPoE).
> However, on the actual DSL router (provided by the ISP) I don't see any
> MTU options.

PPPoE means a lower MTU than the Internet default of 1500, so any sites
not capable of performing Path MTU discovery properly will fail to
communicate with you. Path MTU problems are still quite common,
especially with people running homegrown firewalls where they add a
simple "drop all ICMP traffic, people should not ping us" rule,
forgetting that TCP/IP also makes significant use of ICMP.
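
If a homegrown firewall on your side is doing exactly that, one hedged
example of re-admitting the ICMP message that Path MTU discovery depends
on (adapt the chain and policy to your own ruleset) is:

iptables -A INPUT -p icmp --icmp-type fragmentation-needed -j ACCEPT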

> I will have to look into iptables. I can add static routes via the
> interface card which are permanent, however doing it this way doesn't
> give me any options for mss, mtu, etc. All I can enter this way is
> Source, Destination, Gateway.

You can try the following iptables rule:

iptables -t mangle -A OUTPUT -o outinterface -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1440

Regards
Henrik



Re: [squid-users] transparent tproxy bridging

2008-04-05 Thread Henrik Nordstrom
tor 2008-04-03 klockan 18:53 +0300 skrev Abdock:

> Anybody doing transparent tproxy and bridging on squid 2.6 or 3.1 ?
> 
> Can you please share how to ?  on centos

1. Configure the box as a bridge.

2. Use iptables to intercept the traffic as per the Squid FAQ section on
transparent interception on Linux or any other howto.

RedHat has the bridge netfilter integration enabled by default, which
means that iptables rules also apply to bridged traffic, just as they do
to routed traffic.
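
A minimal sketch of those two steps on CentOS follows (the interface
names, bridge address and Squid port 3128 are assumptions; this shows
plain interception with REDIRECT, while true tproxy spoofing of client
addresses additionally needs a tproxy-capable kernel and the TPROXY
target on top of it):

# 1. configure the box as a bridge
brctl addbr br0
brctl addif br0 eth0
brctl addif br0 eth1
ifconfig eth0 0.0.0.0 up
ifconfig eth1 0.0.0.0 up
ifconfig br0 192.168.1.254 netmask 255.255.255.0 up
# 2. intercept port 80 traffic crossing the bridge into the local Squid
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 3128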

Regards
Henrik



Re: [squid-users] No userid in access.log

2008-04-05 Thread Henrik Nordstrom
fre 2008-04-04 klockan 10:56 +0800 skrev CC Ngu:
> When I use Login=PASS in cache_peer option (i.e. pass the authentication
> to the parent), there is no userid information in the access.log, is it
> possible to show the userid in the local squid's access.log?

You need to teach Squid how to sniff the userid from the traffic in that
case. It's actually not too hard, but some coding is needed. I would
suggest starting with an external acl type for the purpose.


external_acl_type authsniff %{Proxy-Authorization} /path/to/your/helper

and in your custom helper parse the Proxy-Authorization header and return
a suitable user=xxx attribute to Squid when a username is seen.

This will work for the standard HTTP authentication schemes (i.e. Basic
and Digest), but will fail for the Microsoft hacked schemes (i.e.
NTLM/Negotiate/Kerberos) for the same reasons these are hard to proxy...
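
Wiring it into squid.conf might look roughly like this (the ACL names are
invented; the helper must speak the external ACL protocol, answering OK,
optionally followed by user=name, for each request line it reads on
stdin). Having the helper always answer OK, with user= only when it can
decode one, lets the ACL be AND-ed onto an existing allow rule without
changing who gets access:

external_acl_type authsniff %{Proxy-Authorization} /path/to/your/helper
acl sniffed_user external authsniff
# evaluate the ACL on normal traffic so the returned user= is attached
# to the request and shows up in access.log
http_access allow localnet sniffed_user

where localnet stands for whatever ACL currently allows your clients.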

Regards
Henrik




Re: [squid-users] ICAP: fake user and new icap header X-Authenticated-Groups

2008-04-05 Thread Henrik Nordstrom
fre 2008-04-04 klockan 14:27 +0200 skrev Arno _:

> Any actual way of sending fake information, or should I create a new
> icap-fake-client-username and icap-fake-client-group in the icap config
> part of squid.conf?

Sounds like an interesting addition.

> Is anyone else interested, or will it be just for me?

Quite likely others will find it useful.

Code discussions are best held on squid-dev, where you are welcome to
subscribe.

Regards
Henrik



Re: [squid-users] Transparent Proxy and iTunes/WinAmp

2008-04-05 Thread Henrik Nordstrom

lör 2008-04-05 klockan 00:49 -0400 skrev Adam Goldberg:
> When I try to connect via my browser, this is what I see:
> * Invalid Response

What is said in cache.log?

Regards
Henrik



RE: [squid-users] client ip's

2008-04-05 Thread Jorge Bastos
This already worked with some of the 3.0 versions.
Gonna try to play with my iptables rules and let you guys know.




> -Original Message-
> From: Henrik Nordstrom [mailto:[EMAIL PROTECTED]
> Sent: sábado, 5 de Abril de 2008 19:38
> To: Jorge Bastos
> Cc: 'Amos Jeffries'; squid-users@squid-cache.org
> Subject: RE: [squid-users] client ip's
> 
> lör 2008-04-05 klockan 14:24 +0100 skrev Jorge Bastos:
> 
> > I updated to the latest STABLE4 on Debian, but this still happens.
> > What more can I do?
> 
> Good question.
> 
> One thing you can try is to downgrade to Squid-2.6. If that shows the
> same symptoms the problem is not within Squid but most likely in your
> firewall ruleset or something else relevant to how the connections end
> up at your Squid.
> 
> Regards
> Henrik




[squid-users] problem with transparent and invalid URLs

2008-04-05 Thread Leonardo Rodrigues Magalhães


   Hello Guys,

   I'm having problems with the following scenario:

   Linux (Fedora 8) with kernel 2.6.24.3
   squid 3.0-stable4 correctly compiled with --enable-linux-netfilter
   http_port 8080 transparent in squid.conf

   DNAT rule pointing tcp/80 traffic to squid port 8080

   The transparent proxy works fine except when accessing the machine 
that runs Squid itself, which also runs a web server.


   If I manually configure the proxy in Firefox/IE to point at Squid, it 
works. But if I let the connection be intercepted, then I only get 
'Invalid URL' errors.


   debug shows:

2008/04/05 18:04:54.338| parseHttpRequest: Request Header is
Host: 192.168.0.1
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; pt-BR; Alexa; 
rv:1.8.1.13) Gecko/20080311 Firefox/2.0.0.13
Accept: 
text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5

Accept-Language: pt-br,pt;q=0.8,en-us;q=0.5,en;q=0.3
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive

2008/04/05 18:04:54.338| parseHttpRequest: Complete request received
2008/04/05 18:04:54.338| clientParseRequest: FD 50: parsed a request
2008/04/05 18:04:54.338| commSetTimeout: FD 50 timeout 86400
2008/04/05 18:04:54.338| cbdataUnlock: 0x8608068=1
2008/04/05 18:04:54.338| cbdataLock: 0x89a6abc=2
2008/04/05 18:04:54.338| Invalid URL: /admin/cacti/graph_view.php


   Please note that the request has the correct Host: header, but after 
parsing, the request is considered to be only 
'/admin/cacti/graph_view.php', which is incorrect; it should be 
http://192.168.0.1/admin/cacti/graph_view.php.



   Other requests, NOT destined for the machine running Squid itself, 
work just fine:



2008/04/05 18:07:49.915| parseHttpRequest: Complete request received
2008/04/05 18:07:49.915| clientParseRequest: FD 76: parsed a request
2008/04/05 18:07:49.915| commSetTimeout: FD 76 timeout 86400
2008/04/05 18:07:49.915| cbdataUnlock: 0x8608824=1
2008/04/05 18:07:49.915| cbdataLock: 0x89a6e00=2 
2008/04/05 18:07:49.915| init-ing hdr: 0x89aea2c owner: 2

2008/04/05 18:07:49.915| parsing hdr: (0x89aea2c)
Host: www.terra.com.br
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; pt-BR; Alexa; 
rv:1.8.1.13) Gecko/20080311 Firefox/2.0.0.13

Accept: */*
Accept-Language: pt-br,pt;q=0.8,en-us;q=0.5,en;q=0.3
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Proxy-Connection: keep-alive
Referer: http://www.terra.com.br/capa/
Cookie: TERRA=c90f754602563117478607000169c8b00342; cAtmE=1; 
cAtmS=1; cAtmR=


2008/04/05 18:07:49.915| parsing HttpHeaderEntry: near 'Host: 
www.terra.com.br'

2008/04/05 18:07:49.915| parsed HttpHeaderEntry: 'Host: www.terra.com.br'
2008/04/05 18:07:49.915| created HttpHeaderEntry 0x89c7328: 'Host : 
www.terra.com.br




--


Atenciosamente / Sincerely,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
[EMAIL PROTECTED]
My SPAMTRAP, do not email it







Re: [squid-users] problem with transparent and invalid URLs

2008-04-05 Thread Henrik Nordstrom
lör 2008-04-05 klockan 18:40 -0300 skrev Leonardo Rodrigues Magalhães:

> If I manually configure the proxy in Firefox/IE to point at Squid, it 
> works. But if I let the connection be intercepted, then I only get 
> 'Invalid URL' errors.

That's because your Squid doesn't realize the connection was
intercepted, because the destination IP did not change from the
requested IP..

The suggested setup is to not intercept connections destined for the
server itself. This neatly avoids this and a couple of other related
issues...

I.e. in iptables nat you have something like

iptables -t nat -A PREROUTING -d ip.of.this.server -j ACCEPT
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports port_of_squid

Regards
Henrik



Re: [squid-users] cache "big" images and use it on the LAN

2008-04-05 Thread Henrik Nordstrom

tor 2008-04-03 klockan 08:11 +0300 skrev Rakotomandimby Mihamina:
 
> > An example URL of a object where you see this would help..
> 
> http://svn.infogerance.us for example.

That's a 404 not found today.

> I already provided that URL and its cacheability sounded OK.
> 
> Is there a possibility to "force" or overwrite the cacheability/expiration?

There is plenty, but you need to know why something doesn't get cached
in order to address it properly. And quite often it's actually due to
clients saying they don't accept the cached copy.
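
For the record, a hedged example of the kind of override Henrik alludes
to, using refresh_pattern options available in Squid 2.6/2.7 (the pattern
and times are only illustrative; override-expire, ignore-no-cache and
ignore-private deliberately violate HTTP and can serve stale or private
content, and reload-into-ims is the knob for clients that force reloads):

refresh_pattern -i \.(gif|jpe?g|png)$ 1440 90% 43200 override-expire ignore-no-cache ignore-private reload-into-ims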

Regards
Henrik



Re: [squid-users] problem with transparent and invalid URLs

2008-04-05 Thread Leonardo Rodrigues Magalhães



Henrik Nordstrom escreveu:

lör 2008-04-05 klockan 18:40 -0300 skrev Leonardo Rodrigues Magalhães:

  
If I manually configure the proxy in Firefox/IE to point at Squid, it 
works. But if I let the connection be intercepted, then I only get 
'Invalid URL' errors.



That's because your Squid doesn't realize the connection was
intercepted, because the destination IP did not change from the
requested IP..

The suggested setup is to not intercept connections destined for the
server itself. This neatly avoids this and a couple of other related
issues...

I.e. in iptables nat you have something like

iptables -t nat -A PREROUTING -d ip.of.this.server -j ACCEPT
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports port_of_squid

  


  Hi Henrik,

   As a workaround, I have already done that, thanks for your tip.

   The interesting part is that I'm actually migrating from 2.5 
directly to 3.0, and that exact scenario works just fine on 2.5.STABLE14! 
Squid 2.5.STABLE14 handles this scenario with no problems at all, 
even intercepting a request to its own IP address.



--


Atenciosamente / Sincerely,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
[EMAIL PROTECTED]
My SPAMTRAP, do not email it






Re: [squid-users] problem with transparent and invalid URLs

2008-04-05 Thread Henrik Nordstrom
lör 2008-04-05 klockan 21:53 -0300 skrev Leonardo Rodrigues Magalhães:
> The interesting part is that I'm actually migrating from 2.5 
> directly to 3.0, and that exact scenario works just fine on 2.5.STABLE14! 
> Squid 2.5.STABLE14 handles this scenario with no problems at all, 
> even intercepting a request to its own IP address.

Right. Fixing this up for Squid-3.


Regards
Henrik



[squid-users] Request processing question

2008-04-05 Thread David Lawson
I've got a couple questions about how Squid chooses to fulfill a  
request.  Basically, I've got a cache with a number of sibling peers  
defined.  Some of the time it makes an ICP query to those peers and  
then does everything it should do, takes the first hit, makes the HTTP  
request for the object via that peer, etc.  Some, perhaps most, of the  
time, it doesn't even make an ICP query for the object, it just goes  
direct to the origin server.  Can anyone tell me why that is and how  
to stop it?  I'd like Squid to, at the very least, make the query for  
every request. Can anyone point me in the right direction? This is
Squid 2.5.STABLE12, by the way.


I've also got a broader, more general question about how a request flows
through the Squid process: when ACLs are processed, is that before or
after any rewriter is applied to the URL, and so on. But that's really a
secondary thing; right now I'm just concerned with the ICP question.
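
The usual suspects when a Squid 2.5 cache skips ICP and goes straight to
the origin are the hierarchy controls in squid.conf. The directives below
are the standard ones; the values and the peer line are only illustrative,
so compare them against your own configuration:

# requests matching hierarchy_stoplist are treated as non-hierarchical
hierarchy_stoplist cgi-bin ?
# whether non-hierarchical requests (stoplist matches, non-GET methods,
# uncachable requests) are sent directly to the origin server
nonhierarchical_direct on
# whether Squid prefers going direct over using peers when both are allowed
prefer_direct off
# a peer defined with the no-query option never receives ICP queries
cache_peer sibling.example.com sibling 3128 3130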


--Dave
Systems Administrator
Zope Corp.
540-361-1722
[EMAIL PROTECTED]