Re: [squid-users] Fix for Windows media player and NTLM auth pop ups

2008-03-01 Thread Adrian Chadd
Could you please wrap this up and dump it into the Squid bugzilla?

Thanks,



Adrian

On Thu, Feb 28, 2008, Plant, Dean wrote:
 I have seen this problem asked about in the archives but was not sure if
 a fix was ever given. If it has, I apologise for the noise.
 
 I had been having problems with WMP not correctly authenticating to our
 proxy and came across a blog on the isaserver.org website.
 
 When WMP is acting as a web proxy client (CERN) and the web proxy
 server requires Windows Integrated authentication, WMP will not
 auto-authenticate to the web proxy server if the web proxy server is
 specified as either an FQDN or an IP address. If the web proxy server is
 specified as a NetBIOS (unqualified) name, WMP will auto-authenticate
 using the interactive account credentials. If the web proxy server
 requires Basic or Digest authentication, an authentication prompt is
 expected, regardless of how the web proxy server is specified. This
 behaviour is the same if the web proxy server is obtained via an
 automatic configuration (WPAD) script.
 
 http://blogs.isaserver.org/pouseele/2007/11/09/windows-media-player-authentication-prompts/
 
 I changed our wpad file from IPs to NetBIOS names and the pop-ups
 have now disappeared. :-) The only problem now is that I have been testing
 the squid_kerb_auth helper (with good results so far), and as you have to
 specify the proxy as an FQDN, WMP is broken again :-(
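 For anyone else hitting this, the fix lives in the wpad.dat returned by
 the WPAD server; a minimal sketch (proxyhost is a placeholder NetBIOS
 name and 3128 an assumed port):

   function FindProxyForURL(url, host) {
     // NetBIOS (unqualified) name, not FQDN/IP, so WMP auto-authenticates
     return "PROXY proxyhost:3128";
   }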
 
 HTH
 
 Dean

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] my squid hang up

2008-03-01 Thread Amos Jeffries

mkdracoor wrote:
Hello, my problem is that I can't search for anything in Google. The page 
opens fine, but when I search for anything the proxy gives a timeout 
error. Here is my conf file. I don't know if I am missing something; 
please help me

thanks


The error page from squid (it is from squid, right?) should at least 
include a message saying what squid was doing, or which timeout fired.





# Configuracion Squid by mkdracoor
# 
#
# This is the default Squid configuration file. You may wish
# to look at the Squid home page (http://www.squid-cache.org/)
# for the FAQ and other documentation.
#
# The default Squid config file shows what the defaults for
# various options happen to be.  If you don't need to change the
# default, you shouldn't uncomment the line.  Doing so may cause
# run-time problems.  In some cases "none" refers to no default
# setting at all, while in other cases it refers to a valid
# option - the comments for that keyword indicate if this is the
# case.
#


# NETWORK OPTIONS
# 
- 



http_port 3128

# cache_peer
cache_peer  192.168.22.75 parent 3128 0 default no-query login=PASS
cache_peer_domain 192.168.22.75


WTF? So there are no domains this peer serves for?
Might as well remove it entirely then and save squid much processing time.

Maybe that's the timeout?
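For reference, cache_peer_domain expects the peer host followed by one or
more domains; a sketch (example.com is a placeholder):

  cache_peer_domain 192.168.22.75 .example.com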

# 


acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache

cache_mem 100 MB
cache_swap_low 90
cache_swap_high 95
cache_dir ufs /var/cache/squid 500 16 256
refresh_pattern ^ftp:  1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern .  0 20% 4320

# 


log_fqdn off
log_mime_hdrs off
emulate_httpd_log off
half_closed_clients off
cache_access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log /var/log/squid/store.log
log_ip_on_direct off
client_netmask 255.255.255.0
#__ 


ftp_user [EMAIL PROTECTED]
ftp_list_width 32
ftp_passive on
ftp_sanitycheck off
hosts_file /etc/hosts


# Autentificación.
# --
authenticate_ttl 30 minutes
authenticate_ip_ttl 0 seconds
auth_param basic children 5
auth_param basic realm Internet Proxy-Caching (JCCE MASO III)
auth_param basic credentialsttl 5 minutes
auth_param basic casesensitive on
authenticate_cache_garbage_interval 1 hour
auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/passwd

# Timeouts.
# --
forward_timeout 4 minutes
connect_timeout 1 minute
peer_connect_timeout 30 seconds
read_timeout 5 minutes
request_timeout 1 minute
persistent_request_timeout 1 minute
client_lifetime 1 day
half_closed_clients on
pconn_timeout 120 seconds
ident_timeout 10 seconds
shutdown_lifetime 30 seconds

# ACLs.
# --
acl PURGE method PURGE
acl CONNECT method CONNECT
acl manager proto cache_object
acl passwd proxy_auth REQUIRED

acl all src 0.0.0.0/0.0.0.0
acl localhost src 127.0.0.1/255.255.255.255
#acl intranet src 192.168.3.34/45


It's the /45 breaking that. There is no IPv4 CIDR /45.
Maybe you intended to write: 192.168.3.34-192.168.3.45
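If that range is what was meant, the src ACL type accepts an address
range directly; something along these lines:

  acl intranet src 192.168.3.34-192.168.3.45/255.255.255.255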



acl jc_ips src /etc/squid/jc_ips



The following ACLs are never used:


acl SSL_ports port 443 563
acl Safe_ports port 80 443 # HTTP, HTTPS
acl Safe_ports port 21  # FTP
acl Safe_ports port 563  # HTTPS, SNEWS
acl Safe_proxy port 3128 # PROXY
acl Safe_admin port 70  # GOPHER
acl Safe_admin port 210  # WAIS
acl Safe_admin port 280  # HTTP-MGMT
acl Safe_admin port 488  # GSS-HTTP
acl Safe_admin port 591  # FILEMAKER
acl Safe_admin port 777  # MULTILING HTTP
acl Safe_admin port 1025-65535 # Unregistered Ports
# ACLs Personalizadas.
acl porno0 dstdomain /etc/squid/filtros/porno0
acl peligroso0 dstdomain /etc/squid/filtros/peligroso0
acl peligroso1 url_regex /etc/squid/filtros/peligroso1
acl noporno0 dstdomain /etc/squid/filtros/noporno0
acl noporno1 url_regex /etc/squid/filtros/noporno1
acl descargas0 urlpath_regex /etc/squid/filtros/descargas0
acl descargas1 url_regex /etc/squid/filtros/descargas1
acl sitesall dstdomain /etc/squid/filtros/sitesall

# Reglas Default.
http_access allow manager localhost
http_access deny manager
http_access allow PURGE localhost
http_access deny PURGE
http_access allow jc_ips passwd
http_access deny all



EVERYTHING is denied at this point!

The following lines will NEVER match:


#Permitir y Denegar Filtros
http_access deny all porno0
http_access deny all peligroso0
http_access deny all peligroso1
http_access allow all noporno0 noporno1
http_access allow all descargas0 descargas1
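If those filters are meant to apply, they have to sit above the final
deny, and the "all" prefix is redundant; a possible ordering (sketch
only):

  http_access deny porno0
  http_access deny peligroso0
  http_access deny peligroso1
  http_access allow jc_ips passwd
  http_access deny all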




# Parametros Administrativos.
# --
mail_program mail
cache_mgr [EMAIL PROTECTED]
cache_effective_user proxy
visible_hostname proxy.jcmaso3

# Misceláneas
# --
ie_refresh off
retry_on_error on
redirector_bypass off
cachemgr_passwd disable 

Re: [squid-users] Cache url's with ? question marks

2008-03-01 Thread Amos Jeffries

Adrian Chadd wrote:

G'day,

Just remove the QUERY ACL and the cache ACL line using QUERY in it.
Then turn on header logging (log_mime_hdrs on) and see if the replies
to the dynamically generated content is actually giving caching info.




 Adrian

http://wiki.squid-cache.org/ConfigExamples/DynamicContent
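From memory, the approach on that page boils down to replacing the QUERY
ACL with a refresh_pattern, roughly (check the page itself for the
current recommendation):

  # let dynamic pages be cached when the reply allows it
  refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
  refresh_pattern . 0 20% 4320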

Amos



On Fri, Feb 29, 2008, Saul Waizer wrote:


Hello List,

I am having problems trying to cache images/content that comes from a
URL containing a question mark ('?').

Background:
I am running squid Version 2.6.STABLE17 on FreeBSD 6.2 as a reverse
proxy to accelerate content hosted in America served in Europe.

The content comes from an application that uses TOMCAT so a URL
requesting dynamic content would look similar to this:

http://domain.com/storage/storage?fileName=/.domain.com-1/usr/14348/image/thumbnail/th_8837728e67eb9cce6fa074df7619cd0d193_1_.jpg

Such a request always results in a MISS, with a log entry similar
to this:

TCP_MISS/200 8728 GET http://domain.com/storage/storage? -
FIRST_UP_PARENT/server_1 image/jpg

I've added this to my config: acl QUERY urlpath_regex cgi-bin as you can
see below, but it makes no difference. I also tried adding this:
acl QUERY urlpath_regex cgi-bin \?  and for some reason ALL requests
then result in a MISS.

Any help is greatly appreciated.

My squid config looks like this: (obviously the real IPs were changed)

# STANDARD ACL'S ###
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
# REVERSE CONFIG FOR SITE #
http_port 80 accel vhost
cache_peer 1.1.1.1 parent 80 0 no-query originserver name=server_1
acl sites_server_1 dstdomain domain.com
#  REVERSE ACL'S FOR OUR DOMAINS ##
acl  ourdomain0  dstdomain   www.domain.com
acl  ourdomain1  dstdomain   domain.com
http_access allow ourdomain0
http_access allow ourdomain1
http_access deny all
icp_access allow all
 HEADER CONTROL ###
visible_hostname cacheA.domain.com
cache_effective_user nobody
forwarded_for on
follow_x_forwarded_for allow all
header_access All allow all
### SNMP CONTROL  ###
snmp_port 161
acl snmppublic snmp_community public1
snmp_access allow all
## CACHE CONTROL 
access_log /usr/local/squid/var/logs/access.log squid
acl QUERY urlpath_regex cgi-bin
cache_mem 1280 MB
cache_swap_low 95
cache_swap_high 98
maximum_object_size 6144 KB
minimum_object_size 1 KB
maximum_object_size_in_memory 4096 KB
cache_dir ufs /storage/ram_dir1 128 16 256
cache_dir ufs /storage/cache_dir1 5120 16 256
cache_dir ufs /storage/cache_dir2 5120 16 256
cache_dir ufs /storage/cache_dir3 5120 16 256

Also, here is the result of a custom script I made to parse the
access.log; it sorts and displays the top 22 responses so I can
compare them with cacti. I am trying to increase the hit ratio, but so
far it is extremely low.

1  571121 69.3643% TCP_MISS/200
2  98432 11.9549% TCP_HIT/200
3  51590 6.26576% TCP_MEM_HIT/200
4  47009 5.70938% TCP_MISS/304
5  17757 2.15664% TCP_IMS_HIT/304
6  11982 1.45525% TCP_REFRESH_HIT/200
7  11801 1.43327% TCP_MISS/404
8  6810 0.827095% TCP_MISS/500
9  2508 0.304604% TCP_MISS/000
   10  1323 0.160682% TCP_MISS/301
   11  1151 0.139792% TCP_MISS/403
   12  1051 0.127647% TCP_REFRESH_HIT/304
   13  430 0.0522248% TCP_REFRESH_MISS/200
   14  127 0.0154245% TCP_CLIENT_REFRESH_MISS/200
   15  83 0.0100806% TCP_MISS/401
   16  81 0.00983769% TCP_CLIENT_REFRESH_MISS/304
   17  35 0.00425085% TCP_MISS/503
   18  20 0.00242906% TCP_DENIED/400
   19  19 0.00230761% TCP_HIT/000
   20  19 0.00230761% TCP_DENIED/403
   21  14 0.00170034% TCP_SWAPFAIL_MISS/200
   22  1 0.000121453% TCP_SWAPFAIL_MISS/30

Thanks!









--
Please use Squid 2.6STABLE17+ or 3.0STABLE1+
There are serious security advisories out on all earlier releases.


Re: [squid-users] Creating an Out of Service Page using Custom Errors

2008-03-01 Thread Amos Jeffries

Mark A. Schnitter wrote:

Hello,

I'm attempting to create an Out of Service page hosted on the same box
that is running Squid. I'm running a reverse proxy configuration with
several backend boxes. When one of the backend boxes is down for scheduled
maintenance or is having a problem, I would like to have a custom html
page displayed to indicate the problem instead of a Squid proxy error
page.

I've searched through the mail archives, Google, etc. and have found lots
of good info around how to display a custom page, but so far the only way
to trap or trigger the page seems to be confined to creating an ACL that
looks for specific information to display the page. I've researched the
error_map and deny_info tags and haven't been able to find a way to
trigger the custom page when I get a squid error.

For example, if my Windows box is up, but IIS is down, Squid returns an
error code 111. The error_map tag only seems to accept HTML response codes
so I'm out of luck with that approach. If I try to use deny_info, I can't
find an ACL tag that allows me to identify Squid errors.

I would like to identify the following two conditions: Target box is down
and Target Box is up, but web server is down.

If there was a way I could trap the errors Squid was producing without
changing the default error messages, that would be the ideal solution.


You may need to use squid 3.0 with its status ACL and deny_info to get 
this working. If that still cannot do it then you are likely to end up 
needing a code change.
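A very rough sketch of what that might look like (the http_status ACL
name and whether deny_info is honoured for reply-time denials need
verifying against your squid 3 version):

  acl backend_err http_status 500-504
  http_reply_access deny backend_err
  deny_info ERR_OOS backend_err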




Hypothetical Example:

acl oos squid-error 111
deny_info ERR_OOS oos
- Where squid-error is the trapping mechanism
- Where ERR_OOS is the custom error page

Any ideas or different approaches?

Thanks,
Mark



Amos
--
Please use Squid 2.6STABLE17+ or 3.0STABLE1+
There are serious security advisories out on all earlier releases.


Re: [squid-users] how big value should auth_param basic children be?

2008-03-01 Thread Amos Jeffries

Yong Bong Fong wrote:

Hi,
  Just wondering how to define the optimum value for auth_param basic 
children. I have around 200+ users utilizing my proxy-ldap 
authentication. Currently i have set it to 20, i wonder if that is 
beyond redundant and what is actually the appropriate value to 
accommodate the users?

thanks

auth_param basic children 50


Check your cache.log.
If squid is having trouble with lack of auth helpers you will see a 
series of WARNING lines with a recommended number of helpers specific to 
your server's needs.


Amos
--
Please use Squid 2.6STABLE17+ or 3.0STABLE1+
There are serious security advisories out on all earlier releases.


Re: [squid-users] transparency Squid very slow internet

2008-03-01 Thread Amos Jeffries

Guillaume Chartrand wrote:

Hi I run squid 2.6.STABLE12 on RHEL3 AS for web-caching and filtering my 
internet. I use also Squidguard to block some sites.
I configure squid to run with WCCP v2 with my cisco router. So all my web-cache 
traffic is redirected transparently to squid.

I don't know why but when I activate the squid it's really decrease my internet 
speed. It's long to have page loaded, even when it's in my network. I look with 
the command top and the squid process run only about 2-3 % of CPU and 15% of 
Memory. I also run iftop and I have about 15 Mb/s Total on my ethernet 
interface. I don't know where to look in the config to increase the speed. I 
use about 50% of disk space so it's not so bad

Thanks for the help



It's usually a regex ACL at fault when speed drops noticeably.

Check that:
 * ACLs are only regex when absolutely necessary (dstdomain, srcdomain, 
dst, src are all better in most uses).

  ie acl searchengines dstdomain google.com yahoo.com

 * limit regex ACLs to only be tested when needed (placing a src 
netblock ACL ahead of one on the http_access line will speed up all requests 
outside that netblock).

  ie   http_access allow dodgy_users pornregexes
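Spelled out a little more (names, file path and netblock are made up):

  acl dodgy_users src 10.0.1.0/24
  acl pornregexes url_regex -i "/etc/squid/porn.regex"
  # the cheap src test runs first, so the regex is only
  # evaluated for requests from that netblock
  http_access allow dodgy_users pornregexes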


Amos
--
Please use Squid 2.6STABLE17+ or 3.0STABLE1+
There are serious security advisories out on all earlier releases.


Re: [squid-users] Port Changing in Fedora 8

2008-03-01 Thread Amos Jeffries

Steve B wrote:

Sorry, forgot to reply to squidusers again.

1) I want to use another port because where I am, no other port is
 open. The port is 81.
 2) I put inside of my .conf file: http_port 81.
 3) Not from what the grep command had said. Someone else suggested to
 use it so I used it. Supposedly nothing.
 4) Not from what I had seen.


Ports under 1024 have three common problems that usually cause startup to fail:

* Something else already using it.

* Starting squid as user other than root.

* SELinux security policy preventing even root opening the port.

Amos





 On Fri, Feb 29, 2008 at 5:03 AM, Peter Albrecht
 [EMAIL PROTECTED] wrote:
  Hi Steve,
 
 
 
   On Thursday 28 February 2008 20:18, Steve B wrote:
I am trying to change the port that Squid uses from 3128(default) to
something other than it obviously. My main problem is that when I
change the port to Anything, even 3127(made something up), squid will
NOT start. For example when I type the command 'service squid start'
with the port at 3128, it will start, but if it is anything other than
that, it will go: 'Starting Squid. [FAILED]'
   
Any help?
 
   Some questions:
 
   1) _Why_ do you want to use another port? You have to tell the clients
 anyway.
   2) _How_ did you configure the other port? What is in your http_port
 directive?
   3) Is there any other service running on the port you want to use? Try
 nmap ip-of-your-server to see if any service is running on that port.
   4) Is there any information in the log files?
 
   Regards,
 
   Peter
 
   --
   Peter Albrecht, Novell Training Services
 



 --
 -Steve






--
Please use Squid 2.6STABLE17+ or 3.0STABLE1+
There are serious security advisories out on all earlier releases.


Re: [squid-users] Reverse proxy setup with squid 2.6+

2008-03-01 Thread Amos Jeffries

Russ Gnann wrote:
We are currently looking to upgrade our squid servers from 2.5 to 2.6 or 
higher.  In our current configuration, we send requests for the origin 
servers to a single IP address that points to a load balancer which is 
associated with a pool of web servers. In 2.5, this is easy to do with the 
httpd_accel_* directives, but in 2.6 I know that those directives have been 
replaced by the http_port directive with accel, vhost, vport, etc. options.  
I have supplied the squid.conf we are attempting to use below with a build 
of 2.6.  With this configuration, it appears that any connection attempt 
that doesn't get a cache hit resolves the virtual host and makes an HTTP 
connection to that resolved public IP instead of sending the request to the 
internal 10.x.x.11 address.


Is there a way under squid 2.6 and higher to force any request that doesn't 
get a cache hit to a single backend IP address?  The vhost option is necessary 
with http_port since the Host: header must contain the virtual host name; our 
web servers use that data to determine which site to serve.



You require a cache_peer directive and a cache_peer_access with ACLs.
Those will direct cache-misses to the actual source you configure 
without doing the DNS lookups.
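As a sketch (domain.com standing in for the real site):

  cache_peer 10.x.x.11 parent 80 0 no-query originserver name=backend
  acl our_sites dstdomain .domain.com
  cache_peer_access backend allow our_sites
  never_direct allow our_sites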


Amos



squid build: 
# /opt/squid-2.6.16/sbin/squid -v

Squid Cache: Version 2.6.STABLE16
configure options:  '--prefix=/opt/squid-2.6.16' '--enable-async-io' 
'--enable-snmp' '--enable-removal-policies=heap' '--enable-referer-log' 
'--enable-useragent-log'

- squid.conf -
acl snmppublic snmp_community local-squid-ro
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl local_network src 172.16.0.0/16 10.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
acl web_ports port 80
http_access allow web_ports
http_access allow manager localhost
http_access allow manager local_network
http_access deny manager
acl purge method PURGE
http_access allow purge localhost
http_access allow purge local_network
http_access deny purge
http_access allow all
icp_access allow all
http_port 80 accel defaultsite=10.x.x.11 vhost
cache_peer 10.x.x.11 parent 80 0 no-query originserver
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
memory_replacement_policy heap LFUDA
cache_replacement_policy heap LFUDA
logformat CustomLog %a %ui %un [%{%d/%b/%Y:%H:%M:%S %z}tl] %rm %ru HTTP/%rv %Hs %st %{Referer}h %{User-Agent}h %{Cookie}h %Ss:%Sh
access_log /opt/squid-2.6.16/var/logs/custom.log CustomLog
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern .   0   20% 4320
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
cache_effective_user www
cache_effective_group www
visible_hostname squid.domain.com



Regards,

Russell



--
Please use Squid 2.6STABLE17+ or 3.0STABLE1+
There are serious security advisories out on all earlier releases.


Re: [squid-users] Redirection on error.

2008-03-01 Thread Amos Jeffries

Dave Coventry wrote:

Amos,

Thank you for the reply.

I have done a bit more research and I see that the directive is
probably not what I require.

From what I read, the Error page will still behave in the same way and
append any links onto the originally requested URL.


I'm not sure what you mean by this?
The error response and page as a whole _replaces_ the original URL and 
page requested _as a whole_.




How can you ensure that the links are accessed locally?

Would it work if I used "http://192.168.60.254/redir/images/logo.gif"?

and "http://192.168.60.254/cgi-bin/login.pl"?


You could alter squid's errors/ directory and add your files, altering the 
ERR_* files appropriately to use them.
That is the old way of doing it and remains fragile: server 
upgrades are likely to replace your edits without warning.


Amos
--
Please use Squid 2.6STABLE17+ or 3.0STABLE1+
There are serious security advisories out on all earlier releases.


Re: [squid-users] Reverse proxy and URL filtering...

2008-03-01 Thread Amos Jeffries

Gary Tai wrote:

This is set up internally for proof of concept currently, so it
doesn't have public access.



So does the same experiment in your browser bring up squid?
The point and the advice remain unchanged.

Amos




On Thu, Feb 28, 2008 at 9:04 AM, Amos Jeffries [EMAIL PROTECTED] wrote:

Gary Tai wrote:
  I need to setup a reverse proxy on the same Windows server that allows
  only certain defined URLs (www.somedomain.com/Test/this_url_only.asp).
 
 
  Squid-Listen-On:8880 - send to localhost: (IIS)
 
  I've got the reverse proxy working using the following in my squid.conf file:
 
  http_port 192.168.10.81:8880 accel defaultsite=vmsquid01

 Typing "http://vmsquid01/" in my browser does not bring up your website.
 That should be the FQDN for the site you are accelerating.

 You may also need vhost.



 
  cache_peer 127.0.0.1 parent  0 no-query originserver
 
 
  I can't seem to get Squid to only allow defined URLs.
 
  Is this what I should be using?
 
  acl allowed_URL urlpath_regex ^Test/this_url_only.asp
 
  cache_peer_access 127.0.0.1 allow allowed_URL

 Is the regex case-sensitive? You may need to add -i to the urlpath_regex.

 So far so good.

 Amos
 --
 Please use Squid 2.6STABLE17+ or 3.0STABLE1+
 There are serious security advisories out on all earlier releases.




--
Please use Squid 2.6STABLE17+ or 3.0STABLE1+
There are serious security advisories out on all earlier releases.


Re: [squid-users] Re: enabling web based Authentication.

2008-03-01 Thread Amos Jeffries

Dave Coventry wrote:

I have just been googling and I read that it is impossible for Squid
to provide for Transparent Proxying and for Authentication.

Would it be possible to replace the
/usr/local/squid/share/errors/English/ERR_ACCESS_DENIED page with a
custom one providing for usernames and passwords.

A Perl script might be able to generate a file accessible to the acl
AuthorisedUser src /var/log/squid/iplistfile directive.

Is this feasible?


In 2.x it's sometimes needed. In 3.x it's fully obsolete.



Has anyone done something similar?


Yes. see below.



Or is there an easier solution?


Yes.
Write up your login page as a normal HTML page somewhere.
Use:
   deny_info http://page-uri name-of-proxy_auth-acl
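A slightly fuller sketch (the URL and ACL name are placeholders):

   acl AuthorisedUser proxy_auth REQUIRED
   deny_info http://your-server/login.html AuthorisedUser
   http_access deny !AuthorisedUser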


Amos
--
Please use Squid 2.6STABLE17+ or 3.0STABLE1+
There are serious security advisories out on all earlier releases.


Re: [squid-users] External acl question

2008-03-01 Thread Amos Jeffries

Prasad J Pandit wrote:


  Hello Rodrigo, hello all!

I'm trying to implement the per user access restriction using Squid. 
I've put the acls for each user in a seperate file like user-acl.txt. 
For example, my `guest-acl.txt' looks like:


===
acl guest_ip dst some-ip/32
acl guest_mail dstdom_regex mail.google* www.
acl guest_dom dstdomain .google.com

http_access allow guest_ip
http_access allow guest_mail
http_access allow guest_dom
===

So the  `guest' user will only be allowed to access some-ip and her 
gmail account.


Then you will need to extend those http_access lines to include more 
than one ACL.

ie  http_access allow guest_ip guest_dom

Instead of all the above. What you have currently will let _anyone_ 
access _any_ of the ACL matches. some-ip or *.google.com or 
mail.google.hijacked-serve.com, or www.any-server-anywhere.com, etc.
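In other words, conditions that must hold together go on one line
(sketch; a real setup would also need something identifying the guest,
e.g. a src or proxy_auth ACL -- the address below is hypothetical):

  acl guest_src src 10.0.0.5/32
  # ACLs on one http_access line are ANDed; separate lines are ORed
  http_access allow guest_src guest_dom
  http_access allow guest_src guest_mail
  http_access deny all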




Now, I have quite a few such files. What I'd like is to just 
include these files in the squid.conf file, like:


include guest-acl.txt
include root-acl.txt
 ...
include gobman-acl.txt

And depending upon which one is commented/uncommented, squid would 
include the acls from the respective files (Snort does this really well).


I'm trying to do this with `acl external' and `external_acl_type', but 
don't see any light so far.


Could you please tell me if this can be done, and if so how? One more 
thing: I cannot use squid for authentication; I have to use something 
else for that.




There is a patchset to both squid-2 and squid-3 for the include directive.

It will be included natively in 2.7 and 3.0.STABLE2+ (due out within the 
week; daily snapshots of 3.0 are just undergoing final tests and checks 
before release).



Amos
--
Please use Squid 2.6STABLE17+ or 3.0STABLE1+
There are serious security advisories out on all earlier releases.


Re: [squid-users] Re: Why squid -z

2008-03-01 Thread Amos Jeffries

RW wrote:

On Tue, 26 Feb 2008 12:25:06 +0200
Angela Williams [EMAIL PROTECTED] wrote:


On Tuesday 26 February 2008, Ric wrote:

I'm wondering why we require squid -z before starting up Squid for
the first time.  Is there some reason why Squid shouldn't do this
automatically when necessary?

Just a simple scenario?
I use a separate cache file system for all my many squid boxes.
Now for some reason one of the boxes gets bounced and my squid cache
filesystem fails to mount, but squid comes up happily and says Oh look,
I don't have any cache directory structure, so let me make one! The root
filesystem is limited in space and then this dirty great big
directory structure is created and then gets used by squid. In the
twinkling of an eye the root filesystem is full!


I don't think this could actually happen unless the admin does
something perverse.

If squid is run under its own user, it would own the mounted
filesystem, but the mountpoint should still belong to root, operator or
whatever. The squid daemon wouldn't be able to write the cache
directories under the mountpoint unless the admin had explicitly given
it write permission or changed the ownership of the mountpoint to
the squid user (even though squid doesn't do the mounting). 


OTOH when you run squid as root (which you probably shouldn't do
anyway)


To do most of what squid is expected to do these days:
  net-load routing, fastest-path detection, transparency, acceleration 
(reverse-proxy), pmtu alteration, other kernel-level socket operations.


It _requires_ starting as root and dropping its own privileges down to 
effective-user when no longer needed.



the cache directory needs to be owned by
cache_effective_user for squid to use it. 



It does anyway, root-started or non-root.
Are you willing to require all squid users to have another layer of 
directory structure chown'd to effective-user just for your feature?


Adrian has already made the offer to commit the code if you write it.

Amos
--
Please use Squid 2.6STABLE17+ or 3.0STABLE1+
There are serious security advisories out on all earlier releases.


Re: [squid-users] The requested URL could not be retrieved: invalid url

2008-03-01 Thread Amos Jeffries

Matus UHLAR - fantomas wrote:

On Fri, Feb 8, 2008 at 7:36 PM, Dave Coventry [EMAIL PROTECTED] wrote:

On Feb 8, 2008 7:37 PM, Adrian Chadd wrote:
  Under linux, add --enable-linux-netfilter to the configure line.

 Okay, I'll try that.


On 28.02.08 21:41, Dave Coventry wrote:

I've managed to get squid working (without authentication as yet), but
I have a really strange error.

Whenever I access my apache server, squid removes the domain part of
the URL and delivers an error.

For example if I access my apache server
http://myimaginarysite.dydns.org I get the following error:

~ snip ~~~
ERROR
The requested URL could not be retrieved

While trying to retrieve the URL: /

The following error was encountered:

* Invalid URL


I guess you are trying to use squid as an intercepting proxy but didn't tell it
so. Look at the transparent option of the http_port directive.



I think this error message is normal for transparent proxies. They do 
not natively receive the domain in the METHOD-URL-PROTOCOL tuple. The 
squid code in transparent mode should be pulling the Host: info from the 
headers, but may not report it in the page even if it is using it.


Is the URL you are asking for actually real and reachable to squid?

You could try adding 'vhost' to the options to force squid check Host: 
header and see if something funky is causing it not to by default.


Amos
--
Please use Squid 2.6STABLE17+ or 3.0STABLE1+
There are serious security advisories out on all earlier releases.


Re: [squid-users] Port Changing in Fedora 8

2008-03-01 Thread nima sadeghian
For opening a port on Red Hat, edit /etc/sysconfig/iptables; there you
can open the port, then restart the iptables service to apply it.

On Sat, Mar 1, 2008 at 1:03 PM, Amos Jeffries [EMAIL PROTECTED] wrote:
 Steve B wrote:
  Sorry, forgot to reply to squidusers again.
 
  1) I want to use another port because where I am, no other port is
   open. The port is 81.
   2) I put inside of my .conf file: http_port 81.
   3) Not from what the grep command had said. Someone else suggested to
   use it so I used it. Supposedly nothing.
   4) Not from what I had seen.

 Ports under 1024 have three common problems that usually cause startup to fail:

 * Something else already using it.

 * Starting squid as user other than root.

 * SELinux security policy preventing even root opening the port.

 Amos

 
 
 
   On Fri, Feb 29, 2008 at 5:03 AM, Peter Albrecht
   [EMAIL PROTECTED] wrote:
Hi Steve,
   
   
   
 On Thursday 28 February 2008 20:18, Steve B wrote:
  I am trying to change the port that Squid uses from 3128(default) to
  something other than it obviously. My main problem is that when I
  change the port to Anything, even 3127(made something up), squid will
  NOT start. For example when I type the command 'service squid start'
   with the port at 3128, it will start, but if it is anything other than
  that, it will go: 'Starting Squid. [FAILED]'
 
  Any help?
   
 Some questions:
   
 1) _Why_ do you want to use another port? You have to tell the clients
   anyway.
 2) _How_ did you configure the other port? What is in your http_port
   directive?
 3) Is there any other service running on the port you want to use? Try
   nmap ip-of-your-server to see if any service is running on that 
  port.
 4) Is there any information in the log files?
   
 Regards,
   
 Peter
   
 --
 Peter Albrecht, Novell Training Services
   
 
 
 
   --
   -Steve
 
 
 


 --
 Please use Squid 2.6STABLE17+ or 3.0STABLE1+
 There are serious security advisories out on all earlier releases.




-- 
Best Regards
Nima Sadeghian


Re: [squid-users] Re: Why squid -z

2008-03-01 Thread Ric


On Mar 1, 2008, at 2:14 AM, Amos Jeffries wrote:


RW wrote:

On Tue, 26 Feb 2008 12:25:06 +0200
Angela Williams [EMAIL PROTECTED] wrote:

On Tuesday 26 February 2008, Ric wrote:
I'm wondering why we require squid -z before starting up Squid for
the first time.  Is there some reason why Squid shouldn't do this
automatically when necessary?

Just a simple scenario?
I use a separate cache file system for all my many squid boxes.
Now for some reason one of the boxes gets bounced and my squid cache
filesystem fails to mount, but squid comes up happily and says Oh look,
I don't have any cache directory structure, so let me make one! The root
filesystem is limited in space and then this dirty great big
directory structure is created and then gets used by squid. In the
twinkling of an eye the root filesystem is full!

I don't think this could actually happen unless the admin does
something perverse.
If squid is run under it's own user, it would own the mounted
filesystem, but the mountpoint should still belong to root,  
operator or

whatever. The squid daemon wouldn't be able to write the cache
directories under the mountpoint unless the admin had explicitly  
given

it write permission or changed the ownership of the mountpoint to
the squid user (even though squid doesn't do the mounting). OTOH  
when you run squid as root (which you probably shouldn't do

anyway)


To do most of what squid is expected to do these days (net-load routing,
fastest-path detection, transparency, acceleration (reverse-proxy), PMTU
alteration, and other kernel-level socket operations) it _requires_
starting as root and dropping its own privileges down to effective-user
when no longer needed.

 the cache directory needs to be owned by
 cache_effective_user for squid to use it.

It does anyway, root-started or non-root.
Are you willing to require all squid users to have another layer of
directory structure chown'd to effective-user just for your feature?

Adrian has already made the offer to commit the code if you write it.

Amos



To be fair to RW, I don't think he was asking for this feature.  I was.

RW was just offering an opinion on the technical merits of Angela's  
argument.  In any case, this argument is moot since a config flag that  
defaults to off seems acceptable to all.


Ric






[squid-users] squid meetup dinner/drinks

2008-03-01 Thread Robert Collins
Hi,
The squid meetup has been going well :). We're going to head to
waggamamma's http://www.wagamama.com/locations_map.php?locationid=127
around 5pm. We'll head off to a local pub after that around 6:30 or so.

-Rob





Re: [squid-users] Redirection on error.

2008-03-01 Thread Dave Coventry
Thanks for your help.

On Sat, Mar 1, 2008 at 11:42 AM, Amos Jeffries  wrote:
  I'm not sure what you mean by this?
  The error response and page as a whole _replaces_ the
  original URL and  page requested _as a whole_.

Well, if I compose an HTML page to replace ERR_ACCESS_DENIED, and the
page has an IMG tag which refers to "images/logo.jpg", then Apache
assumes that the location of the logo.jpg file is on the server to
which I was attempting to connect before my access was denied.

So if I was attempting to view http://www.cricinfo.com, Apache assumes
that the location of the file logo.jpg is at
http://www.cricinfo.com/images/logo.jpg and returns a 404.

If the IMG tag is changed to "http://localhost/images/logo.jpg" the
result is the same.

If, however, the IMG tag is changed to
"http://192.168.60.254/images/logo.jpg" the result is slightly
different: the /var/log/apache2/access.log file reveals that Apache
believes a dummy file has been requested and returns 200.

127.0.0.1 - - [01/Mar/2008:11:52:32 +0200] "GET / HTTP/1.0" 200 738
"-" "Apache/2.2.4 (Ubuntu) PHP/5.2.3-1ubuntu6 (internal dummy
connection)"

It may be that Apache is at fault here, and I will research this.

But my gut feel is that Squid is spoofing the location of the
ERR_ACCESS_DENIED file as being on the server of the requested URL.

This is not a big deal as far as "images/logo.jpg" is concerned,
but it drives a coach and horses through my idea to call a perl cgi
script from the ERR_ACCESS_DENIED page.
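
For what Dave is trying to do, squid's deny_info directive can point the denied request at an external URL (squid answers with a redirect) instead of serving a local error template, which sidesteps the relative-URL problem entirely. A hedged sketch; the ACL name, blocklist file, server address, and script path are illustrative, not taken from this thread:

```
# squid.conf fragment (sketch): instead of a local ERR_ACCESS_DENIED
# template whose relative links resolve against the blocked site,
# redirect denied requests to a CGI script on a server we control.
acl blocked_sites dstdomain "/etc/squid/blocked.txt"
http_access deny blocked_sites
deny_info http://192.168.60.254/cgi-bin/denied.pl blocked_sites
```

With this, the browser fetches the error page (and its images) from the web server directly, so Apache resolves "images/logo.jpg" against its own document root.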


[squid-users] Re: squid meetup dinner/drinks

2008-03-01 Thread Kinkie
On Sat, Mar 1, 2008 at 4:26 PM, Robert Collins [EMAIL PROTECTED] wrote:
 Hi,
 The squid meetup has been going well :).

This is great news! It makes me all the more sorry for not being there with you.
I'm in Garmisch-Partenkirchen (DE), but with you in spirit.

Have fun, and happy squidding!

-- 
/kinkie


[squid-users] Re: Why squid -z

2008-03-01 Thread RW
On Sat, 01 Mar 2008 23:14:30 +1300
Amos Jeffries [EMAIL PROTECTED] wrote:

 RW wrote:
  On Tue, 26 Feb 2008 12:25:06 +0200
  Angela Williams [EMAIL PROTECTED] wrote:
  Root filesystem is limited in space and then this dirty great
  big directory structure is created and then gets used by squid. In
  the twinkling of an eye the root filesystem is full!
  
  I don't think this could actually happen unless the admin does
  something perverse.
  
  If squid is run under its own user, it would own the mounted
  filesystem, but the mountpoint should still belong to root
  ...
  OTOH when you run squid as root (which you probably shouldn't do
  anyway)
 
 To do most of what squid is expected to do these days:
net-load routing, fastest-path detection, transparency,
 acceleration (reverse-proxy), pmtu alteration, other kernel-level
 socket operations.

I was under the impression (probably wrong) that most things that
involve root access wouldn't commonly involve caching to disk - I
didn't know that transparent caching required root access. That was
really just an aside though.

 
 Are you willing to require all squid users to have another layer of 
 directory structure chown'd to effective-user just for your feature?

No (and it's not my feature), what I'm talking about is this:

# mkdir /cache
# mount /dev/md21 /cache
#
# chown squid:squid /cache
# ls -ld /cache
drwxr-xr-x  3 squid  squid  512 Mar  1 17:07 /cache
#
# umount /cache
# ls -ld /cache
drwxr-xr-x  2 root  wheel  512 Mar  1 17:05 /cache

i.e., when the filesystem is not mounted, /cache doesn't belong to
squid.
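
That ownership flip can itself be used as a pre-start guard: refuse to start squid when the cache directory is still owned by root, i.e. when the cache filesystem failed to mount. A hedged sketch (assumes GNU stat, a cache_dir at /cache, and a user named squid; all three are assumptions from RW's transcript, not squid defaults):

```shell
#!/bin/sh
# Init-script guard: treat the cache filesystem as mounted only when
# the cache directory's owner has flipped from root to the squid user.
owner_of() {
    stat -c %U "$1"          # GNU stat: print the owning user's name
}

cache_dir_ready() {          # usage: cache_dir_ready DIR USER
    [ "$(owner_of "$1")" = "$2" ]
}

if [ -d /cache ]; then
    if cache_dir_ready /cache squid; then
        echo "cache filesystem mounted; safe to start squid"
    else
        echo "/cache not owned by squid; refusing to create cache dirs on the root fs" >&2
    fi
fi
```

Run from the init script before invoking squid, this fails closed in exactly the scenario Angela described.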


My point was that Angela's objection to auto-initialization is
not well founded. And since hers was the only specific objection to
on-by-default, I thought it worth mentioning.

I don't really care much about this myself, but I do see merit in
having squid do something useful out-of-the-box, e.g. work as a basic
cache with access from localhost and private addresses - and that
requires automatic initialization of a default cache directory. OTOH
that could perhaps become a packaging issue once the option is added.
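
The out-of-the-box behaviour described above would amount to roughly the following configuration. A hedged sketch of a minimal squid.conf for a localhost/private-network cache (directive syntax as in squid 2.6/3.0; the cache_dir path and sizes are illustrative, not a proposed default):

```
# Minimal "works out of the box" cache: local clients only.
http_port 3128
acl all src 0.0.0.0/0.0.0.0
acl localhost src 127.0.0.1/32
acl localnet src 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16
http_access allow localhost
http_access allow localnet
http_access deny all
# The directory "squid -z" initialises: 100 MB, 16 L1 dirs, 256 L2 dirs.
cache_dir ufs /var/spool/squid 100 16 256
```

Auto-initialisation would only have to create the cache_dir tree on first start; everything else here already works without manual steps.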





RE: [squid-users] Re: squid meetup dinner/drinks

2008-03-01 Thread Jorge Bastos
Hi guys,
I know the Victoria places on the map; I've been there once!
Too bad I'm not there; that visit was on vacation!




-Original Message-
From: Kinkie [mailto:[EMAIL PROTECTED] 
Sent: sábado, 1 de Março de 2008 17:35
To: Robert Collins
Cc: Squid Users; Squid Developers
Subject: [squid-users] Re: squid meetup dinner/drinks

On Sat, Mar 1, 2008 at 4:26 PM, Robert Collins [EMAIL PROTECTED] wrote:
 Hi,
 The squid meetup has been going well :).

This is great news! It makes me all the more sorry for not being there with you.
I'm in Garmisch-Partenkirchen (DE), but with you in spirit.

Have fun, and happy squidding!

-- 
/kinkie



[squid-users] FATAL: comm_select_init: epoll_create...

2008-03-01 Thread ale1971

Hello, anyone know what this error means?

My squid version:

PC-DEB:~# squid3 -v
Squid Cache: Version 3.0.PRE5

Error in cache.log:

CPU Usage: 0.010 seconds = 0.000 user + 0.010 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 516
2008/03/02 04:15:14| storeDirWriteCleanLogs: Starting...
2008/03/02 04:15:14|   Finished.  Wrote 0 entries.
2008/03/02 04:15:14|   Took 0.0 seconds (   0.0 entries/sec).
FATAL: comm_select_init: epoll_create(): (38) Function not implemented

Squid Cache (Version 3.0.PRE5): Terminated abnormally.
CPU Usage: 0.020 seconds = 0.000 user + 0.020 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 516
2008/03/02 04:15:17| storeDirWriteCleanLogs: Starting...
2008/03/02 04:15:17|   Finished.  Wrote 0 entries.
2008/03/02 04:15:17|   Took 0.0 seconds (   0.0 entries/sec).
FATAL: comm_select_init: epoll_create(): (38) Function not implemented

Squid Cache (Version 3.0.PRE5): Terminated abnormally.
CPU Usage: 0.020 seconds = 0.000 user + 0.020 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 516
2008/03/02 04:15:20| storeDirWriteCleanLogs: Starting...
2008/03/02 04:15:20|   Finished.  Wrote 0 entries.
2008/03/02 04:15:20|   Took 0.0 seconds (   0.0 entries/sec).
FATAL: comm_select_init: epoll_create(): (38) Function not implemented



My squid.conf

#
# Admin settings
#
cache_mgr [EMAIL PROTECTED]

#
# Cache Params
#
# Disk cache: 1024 MB, 16 top directories max, 256 second-level directories max
cache_dir ufs /var/spool/squid3 1024 16 256
cache_access_log /var/log/squid3/access.log
cache_log /var/log/squid3/cache.log
cache_store_log /var/log/squid3/store.log
mime_table /usr/share/squid3/mime.conf

# want to use volatile memory for squid?
cache_mem 128 MB
maximum_object_size 8192 KB
#Smallest expiry interval that Squid will honor in headers
minimum_expiry_time 120 seconds

#
# Backend Servers Settings
#
#URL of the site you are caching
http_port 80 accel defaultsite=www.site1.it vhost

#
# ACLs for manager app
#
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
http_access allow manager localhost
#set your password for cachemgr here
#cachemgr_passwd myn1cepass all

#
# ACLs
#
acl all src 0.0.0.0/0.0.0.0
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563
acl Safe_ports port 80  # http
http_access deny manager
http_access deny !Safe_ports
http_access allow localhost
http_reply_access allow all
icp_access allow all

acl ftpblock url_regex -i \.mp3$ \.asx$ \.avi$ \.mpeg$ \.mpg$ \.qt$ \.ram$
\.rm$
# prevent files with those extensions from being downloaded


cache_peer 192.168.0.18 parent 80 0 no-query originserver
acl LaBUsers dstdomain www.site1.it
cache_peer_access 192.168.0.18 allow LaBUsers
cache_peer_access 192.168.0.18 deny all
http_access allow LaBUsers

cache_peer 192.168.0.17 parent 80 0 no-query originserver
acl PolUsers dstdomain www.site2.it
cache_peer_access 192.168.0.17 allow PolUsers
cache_peer_access 192.168.0.17 deny all
http_access allow PolUsers


http_access deny all

#
# Headers
#

#
#logs
#
#emulate_httpd_log on
#log in apache format
logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}h" "%{User-Agent}h" %Ss:%Sh
access_log /var/log/squid3/access-combi.log combined




Thank you

-- 
View this message in context: 
http://www.nabble.com/FATAL%3A-comm_select_init%3A-epoll_create...-tp15782733p15782733.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] FATAL: comm_select_init: epoll_create...

2008-03-01 Thread J. Peng
On Sun, Mar 2, 2008 at 10:55 AM, ale1971 [EMAIL PROTECTED] wrote:

  FATAL: comm_select_init: epoll_create(): (38) Function not implemented


What's your OS and kernel?
Seems you compiled squid with --enable-epoll but your OS doesn't support epoll.
Linux with kernel 2.6 has epoll enabled.


Re: [squid-users] External acl question

2008-03-01 Thread Prasad J Pandit


   Hello Amos,

On Sat, 1 Mar 2008, Amos Jeffries wrote:

There is a patchset to both squid-2 and squid-3 for the include directive.

It will be included native in 2.7 and 3.0.STABLE2+ (due out within the week, 
daily snapshots of 3.0 are just undergoing final tests and checks before 
release).


  That's excellent! I hope that'll let me include any file on my disk.
I'm looking forward to getting my hands on the 3.0.STABLE2+ release.
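
Once the include directive lands in those releases, splitting configuration out of squid.conf should look roughly like this. A hedged sketch; the file paths and the authorized_users ACL name are illustrative, not from this thread:

```
# squid.conf (sketch): pull ACL definitions in from separate files
include /etc/squid/acls/users.conf
include /etc/squid/acls/sites.conf
http_access allow authorized_users
http_access deny all
```

Each included file is parsed as if its lines appeared at the point of the include, so ACLs must still be defined before the http_access rules that use them.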

Thank you for the information!
--
Regards
  - Prasad