Re: [squid-users] Squid-3.1 failed to select source

2013-04-28 Thread Doug
Hello,

# squid3 -k parse
2013/04/29 10:10:15| Processing Configuration File:
/etc/squid3/squid.conf (depth 0)

This is the info it gives.

2013/4/29 Amos Jeffries :
> On 28/04/2013 8:55 p.m., Doug wrote:
>>
>> Hello,
>>
>> I have the reverse proxy config as:
>>
>>   cache_peer  175.6.1.216  parent  80 0  no-query  originserver
>> name=caiyuan
>> acl resdomain dstdomain  www.52caiyuan.com www.52huayuan.cn
>> 52caiyuan.com 52huayuan.cn huayuan.52caiyuan.com
>> cache_peer_access caiyuan allow resdomain
>
> What does "squid -k parse" throw out at you?
>
> I would expect some warnings about something to do with splay trees.
> Which means ...
>
>
>> When accessing the cache, the domains www.52caiyuan.com and
>> 52caiyuan.com work fine.
>> But huayuan.52caiyuan.com fails; cache.log says:
>>
>>   2013/04/28 16:36:13| Failed to select source for
>> 'http://huayuan.52caiyuan.com/'
>> 2013/04/28 16:36:13|   always_direct = 0
>> 2013/04/28 16:36:13|   never_direct = 1
>> 2013/04/28 16:36:13|   timedout = 0
>
>
> The latest version should work much better. There is a package of 3.3.3 now
> available in the Debian sid repository that you should try out.
>
> Amos
>


[squid-users] Squid-3.1 failed to select source

2013-04-28 Thread Doug
Hello,

I have the reverse proxy config as:

 cache_peer  175.6.1.216  parent  80 0  no-query  originserver name=caiyuan
acl resdomain dstdomain  www.52caiyuan.com www.52huayuan.cn
52caiyuan.com 52huayuan.cn huayuan.52caiyuan.com
cache_peer_access caiyuan allow resdomain

When accessing the cache, the domains www.52caiyuan.com and
52caiyuan.com work fine.
But huayuan.52caiyuan.com fails; cache.log says:

 2013/04/28 16:36:13| Failed to select source for
'http://huayuan.52caiyuan.com/'
2013/04/28 16:36:13|   always_direct = 0
2013/04/28 16:36:13|   never_direct = 1
2013/04/28 16:36:13|   timedout = 0

 For the same originserver, why do some domains work but not others?

The squid and OS version:

 Squid Cache: Version 3.1.6
Debian GNU/Linux 6.0

(apt-get install squid3)

Can you help? thanks.
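A likely culprit is overlap in the dstdomain list: listing 52caiyuan.com
alongside its subdomain huayuan.52caiyuan.com can produce splay-tree duplicate
warnings at parse time, with overlapping entries dropped. A sketch of a
deduplicated ACL (assuming every subdomain should route to the same peer; in
squid.conf a leading dot matches the domain and all of its subdomains):

acl resdomain dstdomain .52caiyuan.com .52huayuan.cn
cache_peer_access caiyuan allow resdomain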


[squid-users] How to configure squid so it serves stale web pages when Internet Down

2011-11-22 Thread Doug Karl
We are trying to configure Squid for installation in school labs in
Belize, Central America, where the Internet routinely goes down for
several minutes and sometimes an hour at a time. We are very happy to
serve stale pages to the children for their classroom session. So we
need to either: (1) configure Squid to handle such situations, so that
cached pages are simply served stale when the Internet is down (i.e. it
has no Internet access to verify freshness), or (2) have Squid respond
to a script that detects the Internet is down, telling it to serve
stale pages. As configured, our Squid installation will not serve stale
pages: it tries to contact the original web site, and the cached pages
are not served at all.


NOTE: We have tried Squid's offline mode, and as several others have
reported, it does not work the way you would think. So are there config
parameters that can make caching work in the presence of a bad Internet
connection?
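One setting worth testing is Squid's max_stale control, which bounds how stale
an object may be served when revalidation against the origin fails; a sketch
(treat the directive's availability on your Squid version as an assumption to
verify):

# If revalidation fails (e.g. the link is down), allow serving cached
# objects up to one week past their expiry.
max_stale 1 week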


Thank you,
Doug Karl & Mary Willette


Re: [squid-users] Transparent Squid Stalls For Up To Two Minutes

2009-05-18 Thread Doug Eubanks
I appreciate your response. I don't believe it's a filesystem issue; I've been
troubleshooting that for several weeks. Originally I was using 16 256 (the
default) as the directory layout. I've tried ext4 and reiser (my favorite
filesystem), and now it's on btrfs. I also have the filesystem mounted with
noatime. When I was using reiser, I had disabled tail packing as well. As you
can see, I'm using aufs, but I've also tried diskd.

The iptables NAT/DNAT happens at my router. See this DD-WRT wiki article for
how it's done (http://www.dd-wrt.com/wiki/index.php/Transparent_Proxy); I
actually wrote the section on how multiple hosts can bypass the proxy. Either
way, it's not a router issue. If I set my browser to use the proxy directly,
the delays still happen 99% of the time.

Originally, I was using dans with antivirus, but the delays got horrible, so I
went back to a standard squid setup to try to isolate the problem. At this
point, I simply want to get squid working, because a lot of the sites we visit
continuously may benefit from caching (news sites with lots of graphics, etc.).
Once this problem is resolved, I'll go back to using dans with antivirus.

10.0.0.254 (the squid host) is excluded from the IP tables rules on DD-WRT, 
along with my Xbox 360, my BluRay player, my HD-DVD player and my DirecTV 
receiver.

The three DNS servers specified in the squid.conf all resolve names properly 
and are open to the squid host.

Thanks
Doug Eubanks
ad...@dougware.net
919-201-8750

From: Amos Jeffries [mailto:squ...@treenet.co.nz]
To: ad...@dougware.net
Cc: squid-users@squid-cache.org
Sent: Mon, 18 May 2009 14:55:39 +
Subject: Re: [squid-users] Transparent Squid Stalls For Up To Two Minutes

Doug Eubanks wrote:
> I'm having an intermittent squid issue. It has plagued me on CentOS 5.x,
> Fedora 6, and now Fedora 11 (all using the RPM build that came with the OS).
> 
> My DD-WRT router forwards all of my outgoing port 80 requests to my 
> transparent proxy using IP tables. For some reason, squid will hang when 
> opening a URL for up to two minutes. It doesn't always happen and sometimes 
> restarting squid will correct the problem (for a while). The system is a
> pretty hefty 3 GHz P4 with 2 GB of RAM and a SATA II drive. That should be
> plenty for a small home network of about 10 clients.
> 
> When I test DNS lookups from the host, requests are returned in less than
> a second. I'm pretty sure that's not the problem.
> 
> Here is my squid.conf, any input would be greatly appreciated!
> 
> acl manager proto cache_object
> acl localhost src 127.0.0.1/32
> acl to_localhost dst 127.0.0.0/8
> acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
> acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
> acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
> acl SSL_ports port 443
> acl Safe_ports port 80  # http
> acl Safe_ports port 21  # ftp
> acl Safe_ports port 443 # https
> acl Safe_ports port 70  # gopher
> acl Safe_ports port 210 # wais
> acl Safe_ports port 1025-65535  # unregistered ports
> acl Safe_ports port 280 # http-mgmt
> acl Safe_ports port 488 # gss-http
> acl Safe_ports port 591 # filemaker
> acl Safe_ports port 777 # multiling http
> acl CONNECT method CONNECT
> http_access allow manager localhost
> http_access deny manager
> http_access allow localnet
> http_access deny !Safe_ports
> http_access deny CONNECT !SSL_ports
> http_access allow localnet
> http_access allow localhost
> http_access deny all
> htcp_access allow localnet
> htcp_access deny all
> http_port 3128 transparent

Is the NAT / REDIRECT/DNAT happening on the Squid box?
It needs to.

> hierarchy_stoplist cgi-bin ?
> cache_mem 32 MB
> maximum_object_size_in_memory 128 KB
> cache_replacement_policy heap LRU
> cache_dir aufs /var/spool/squid 4096 8 16

4 GB of objects, each under 512 KB (average size set at 64 KB later), using
only an 8x16 directory array. You may have an FS overload problem.

Also, Squid 'pulses' cache garbage collection one directory at a time. 
Very large amounts of files in any one directory can slow things down a 
lot at random times.

It's generally better to increase the L1/L2 numbers from default as the 
cache gets bigger.
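Following that advice, a sketch for this 4 GB cache (16/256 are the stock
defaults, shown as an illustrative assumption; with roughly 65,000 objects of
~64 KB average size that works out to about 16 files per second-level
directory, instead of ~500 with the 8x16 layout above):

cache_dir aufs /var/spool/squid 4096 16 256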

> max_open_disk_fds 0
> minimum_object_size 0 KB
> maximum_object_size 512 KB
> access_log /var/log/squid/access.log squid
> refresh_pattern ^ftp:           1440    20%     10080
> refresh_pattern ^gopher:        1440    0%      1440
> refresh_pattern (cgi-bin|\?)    0       0%      0
> refresh_pattern .               0       20%     4320
> visible_hostname doug-linu

[squid-users] Transparent Squid Stalls For Up To Two Minutes

2009-05-18 Thread Doug Eubanks
I'm having an intermittent squid issue. It has plagued me on CentOS 5.x,
Fedora 6, and now Fedora 11 (all using the RPM build that came with the OS).

My DD-WRT router forwards all of my outgoing port 80 requests to my transparent 
proxy using IP tables. For some reason, squid will hang when opening a URL for 
up to two minutes. It doesn't always happen and sometimes restarting squid will 
correct the problem (for a while). The system is a pretty hefty 3 GHz P4 with
2 GB of RAM and a SATA II drive. That should be plenty for a small home
network of about 10 clients.

When I test DNS lookups from the host, requests are returned in less than a
second. I'm pretty sure that's not the problem.

Here is my squid.conf, any input would be greatly appreciated!

acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access allow localnet
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access allow localhost
http_access deny all
htcp_access allow localnet
htcp_access deny all
http_port 3128 transparent
hierarchy_stoplist cgi-bin ?
cache_mem 32 MB
maximum_object_size_in_memory 128 KB
cache_replacement_policy heap LRU
cache_dir aufs /var/spool/squid 4096 8 16
max_open_disk_fds 0
minimum_object_size 0 KB
maximum_object_size 512 KB
access_log /var/log/squid/access.log squid
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern (cgi-bin|\?)    0       0%      0
refresh_pattern .               0       20%     4320
visible_hostname doug-linux.dougware.net
unique_hostname doug-linux.dougware.net
coredump_dir /var/spool/squid
cache_mgr ad...@dougware.net
dns_nameservers 10.0.0.254 10.0.0.253 69.197.163.239
store_avg_object_size 64 KB
memory_replacement_policy heap LRU
tcp_outgoing_address 10.0.0.254
udp_outgoing_address 10.0.0.254

Thanks
Doug Eubanks
ad...@dougware.net
919-201-8750


RE: [squid-users] How Can I Change Time Zone and/or Time Format

2007-04-03 Thread Korell, Doug
My complete line is (without quotes): 
logformat squid %{%m/%d/%Y %H:%M:%S}tl %un %>A %>a %Ss %ru

I didn't need all the information the default had and I wanted it tab
delimited. Then the access_log line references squid for the format.
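Put together, the pair of lines looks like this sketch (the format name
'tsvlog' is an illustrative assumption; the post above reuses the built-in
name 'squid', which simply overrides it, and access_log taking a format name
is Squid 2.6+ syntax):

logformat tsvlog %{%m/%d/%Y %H:%M:%S}tl %un %>A %>a %Ss %ru
access_log /var/log/squid/access.log tsvlog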

-Original Message-
From: Vadim Pushkin [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, April 03, 2007 2:03 PM
To: Korell, Doug; squid-users@squid-cache.org
Subject: RE: [squid-users] How Can I Change Time Zone and/or Time Format

Hello Doug;

So, your line in squid.conf looks like:

logformat "%{%m/%d/%Y %H:%M:%S}tl"

or

logformat Squid  "%{%m/%d/%Y %H:%M:%S}tl"

Thank you in advance,

.vp

>From: "Korell, Doug" <[EMAIL PROTECTED]>

>Under logformat in squid.conf, I use "%{%m/%d/%Y %H:%M:%S}tl" which 
>will format into localtime. No need then to convert from GMT.
>
>It will look like this: 04/03/2007 07:46:31
>
>
>
>-Original Message-
>From: Vadim Pushkin [mailto:[EMAIL PROTECTED]
>Sent: Tuesday, April 03, 2007 1:48 PM
>To: squid-users@squid-cache.org
>Subject: [squid-users] How Can I Change Time Zone and/or Time Format
>
>Hello;
>
>I find it difficult to correlate data from my access.log, which starts 
>off with something like this:
>
>1175633093.114
>
>How can I change "log_access?" into something in EST or a time format 
>that others, myself included, can read?
>
>Thank you,
>
>.vp

Confidentiality Notice: This e-mail message, including any attachments, is
for the sole use of the intended recipient(s) and may contain confidential
and privileged information. Any unauthorized review, use, disclosure or
distribution is prohibited. If you are not the intended recipient, please
contact the sender by reply e-mail and destroy all copies of the original
message.


RE: [squid-users] How Can I Change Time Zone and/or Time Format

2007-04-03 Thread Korell, Doug
Under logformat in squid.conf, I use "%{%m/%d/%Y %H:%M:%S}tl" which will
format into localtime. No need then to convert from GMT.

It will look like this: 04/03/2007 07:46:31



-Original Message-
From: Vadim Pushkin [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, April 03, 2007 1:48 PM
To: squid-users@squid-cache.org
Subject: [squid-users] How Can I Change Time Zone and/or Time Format

Hello;

I find it difficult to correlate data from my access.log, which starts
off with something like this:

1175633093.114

How can I change "log_access?" into something in EST or a time format
that others, myself included, can read?

Thank you,

.vp



RE: [squid-users] Logging only authentications

2007-03-29 Thread Korell, Doug
This gets me close, but I still need to somehow log the IP. I tried to figure
out a pattern in the access.log that would let me grab only 407 status codes
and then the next log entry for the IP address if successful (most have been
200), but as this thing gets more traffic I'm not sure how well that would
work, since the entries will be interleaved. I'm sure some creative
programming can overcome this.

I was trying to find detailed information on helpers and wrappers and I
can't find a thing. Is there a tutorial for this that explains, for
example, what you did below?
 

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Monday, March 26, 2007 2:10 AM
To: Korell, Doug
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Logging only authentications

On Wed 2007-03-21 at 16:31 -0700, Korell, Doug wrote:
> I am using Squid for one purpose only, to force PC's with generic
> Windows logins to authenticate using AD credentials when accessing the
> Internet. I have Squid configured and it's working fine, except the
> access.log of course logs all website hits (which we also have
> Websense doing). At first I didn't think this would be a big deal but
> in testing, if I hit just the mainpage for a site like cnn.com, it
> logs 150 entries.

Hmm.. thinking. HTTP is stateless, so there is not really a "login", only
"this request was authorized". But I suppose it should be possible to
rate-limit the access log somehow.

At first I thought maybe this could be done with the session helper, which
can be used in many other such situations. However, the access_log acls are
"fast" and do not support external lookups such as helpers.. so I guess
something needs to be coded to support this.
 
> So, is there some way I can log only LDAP authentications and if they 
> were successful or unsuccessful?


You can do this in the auth helper interface, but unfortunately it will
only tell you the login name and timestamp, not which station the request
came from or any other details.

Most easily done as a wrapper around the actual auth helper.

#!/usr/bin/perl
$|=1;
use IPC::Open2;
my ($in, $out, $logfile);
my $logfilename = shift @ARGV;
open($logfile, ">>", $logfilename) || die;
select($logfile); $| = 1;   # autoflush the log file
select(STDOUT);             # restore STDOUT so replies reach Squid
open2($out, $in, @ARGV) || die;
while (<STDIN>) {
  my ($login, $password) = split;
  print $in $_;                       # forward the request to the real helper
  chomp(my $ans = <$out>);            # its reply, e.g. "OK" or "ERR"
  print $logfile time() . " $login $ans\n";
  print "$ans\n";                     # pass the reply back to Squid
}


Used in front of the auth helper in squid.conf together with a log file
name. 

auth_param basic /usr/local/squid/libexec/logauth.pl
/usr/local/squid/var/logs/auth.log
/usr/local/squid/libexec/squid_ldap_auth -b ...

Regards
Henrik



[squid-users] Logging only authentications

2007-03-21 Thread Korell, Doug
I am using Squid for one purpose only, to force PC's with generic
Windows logins to authenticate using AD credentials when accessing the
Internet. I have Squid configured and it's working fine, except the
access.log of course logs all website hits (which we also have Websense
doing). At first I didn't think this would be a big deal but in testing,
if I hit just the mainpage for a site like cnn.com, it logs 150 entries.
 
So, is there some way I can log only LDAP authentications and whether they
were successful or unsuccessful? Websense is close to doing what I need, but
when you turn on manual authentication it turns it on for everyone, and you
can't exempt anyone, which I need to do for some devices that aren't in
Active Directory.
 
Thanks.



[squid-users] Shortening URLs passing through a squid hierarchy

2006-07-21 Thread Irvine, Doug - Resources - ICT Services
Hi,

I am responsible for a large number of squid caches serving most of the
schools in Oxfordshire. We have a central 'farm' of squid caches which are
the upstream parents of each school's local squid cache.

At one particular school I have a headmaster who hosts the school's blog
site on a remote web server running a product by Userland. When the
school tries to access the site through our cache hierarchy, the browser
shows:

Can't coerce the string "34 10" into a number because it contains
non-numeric characters.

If the school uses one of the central caches as its proxy, it works. It
appears to me that this is a problem with the remote site being unable
to handle long strings in the request, and I have found a bug report for
this. I have tried to get assistance from the software company, without luck.

Due to the way we are going to re-configure our central servers I need
to find a workaround to this problem.

Is there a way to get Squid to shorten the length of the address going
to the remote site?

Regards

Doug Irvine 
School's Support Team Leader
Oxfordshire County Council 
3rd Floor Clarendon House 
Shoe Lane 
Oxford 
OX1 2DP 


01865 815888 
Mobile 07776163426 

The information in this e-mail, together with any attachments, is confidential. 
If you have received this message in error you must not print off, copy, use or 
disclose the contents. The information may be covered by legal and/or 
professional privilege. Please delete from your system and inform the sender of 
the error. As an e-mail can be an informal method of communication, the views 
expressed may be personal to the sender and should not be taken as necessarily 
representing the views of the Oxfordshire County Council. As e-mails are 
transmitted over a public network the Oxfordshire County Council cannot accept 
any responsibility for the accuracy or completeness of this message. It is your 
responsibility to carry out all necessary virus checks. You should be aware 
that all emails received and sent by this Council are subject to the Freedom of 
Information Act 2000 and therefore may be disclosed to other parties under that 
Act. www.oxfordshire.gov.uk




Re: [squid-users] Stupid? Question

2006-04-13 Thread Doug Dixon


On 14 Apr 2006, at 11:18, Cayz James ((DOS)) wrote:


All,

My co-worker and I have gotten a squid server running as a transparent
proxy, no cache.

How can we stop the access.log file from filling up with TCP_MISS
(TCP_MISS/304?) entries - we *know* they missed the cache, we turned
caching off!!!

Thanks for any advice.

James

--
Telecommunications / Network Technologist I
Delaware Department of State

Email: [EMAIL PROTECTED]
Office: 302-744-5029
DE SLC: D575B
USPS:  Room 204
 Delaware Public Archives
 121 Duke of York St
 Dover, DE  19901-3638



Hi

The answer depends on why you want to do this... e.g.:

a) You don't care about what's in the access log, you just want to  
turn off logging so that you don't get an ever-growing file
b) You do care about the access log, but the format with the extra  
squid-specific information doesn't work with your log analyser
c) You do care about the access log, but you want to prune the  
"meaningless" information for aesthetic reasons


Or... something else?

If I were you I'd probably keep the log going - it's useful  
information - and if you really need it in another format, write a  
script to manipulate it (I'd use awk).
You could also try adding this to your squid.conf and see if you like  
that format better:


emulate_httpd_log on


Cheers
Doug
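For option (a) above, later Squid releases can switch the log off outright; a
sketch (Squid 2.6+ syntax, so it may not apply to older installs such as the
one in this thread):

# Stop writing an access log at all.
access_log none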


[squid-users] TCP_REFRESH_HIT/MISS and origin servers

2006-02-13 Thread Doug Dixon

Hi

In a reverse proxy situation, I'm naturally trying to prevent as many  
requests as possible from hitting the origin servers.


However I've run into the following situation:

1. Origin server sets 'Cache-Control: max-age=4320' (3 days) on a  
fairly long-lived object
2. Squid issues TCP_HIT or TCP_MEM_HIT for 3 days - I am happy,  
origin server doesn't get touched
3. After this period the object becomes stale, so Squid serves up
TCP_REFRESH_HIT/200 (for normal client requests) or TCP_REFRESH_MISS/304
(for client IMS requests)
4. Every one of these hits the origin servers, which - assuming the  
object hasn't changed and isn't going to change for another 3 days -  
I don't want


There appears to be no out-of-the-box way of getting Squid to carry  
on caching an unchanged object beyond the max-age/expiry values  
contained in the initial fetch from the origin server.


Unless I've missed something, it seems the only ways to get Squid to  
"carry on" caching such an object are either (a) to modify it on the  
origin server (causing a TCP_REFRESH_MISS/200) or (b) to purge it  
from cache altogether. This process would have to be repeated  
whenever the object became stale again.


I'm not sure if my analysis is correct, but is there some reason why  
a TCP_REFRESH_HIT or TCP_REFRESH_MISS (i.e. a 304 Not Modified from  
an origin server for a stale object) would not cause Squid to update  
its store entry with any new max-age/expires values supplied with the  
origin server's 304 response?


In other words, could a 304 Not Modified issued by an origin server  
be used to extend the cache lifetime of the object to which it  
refers? This seems to be the best way of telling Squid that it should  
treat the object as fresh again.
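Absent such behaviour, one workaround sketch is to stretch freshness locally
with refresh_pattern (the URL pattern and numbers here are illustrative
assumptions; override-expire deliberately violates HTTP):

# Treat matching objects as fresh for at least 3 days (4320 minutes),
# even after the origin's max-age/Expires has passed.
refresh_pattern -i ^http://www\.example\.com/ 4320 100% 10080 override-expire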


Thanks for your wisdom

Doug



Re: [squid-users] Slowness, idnsSendQuery and "No buffer space available"

2005-06-19 Thread Doug Darrah
On Mon, Jun 20, 2005 at 02:51:31AM +0200, Henrik Nordstrom wrote:
> On Fri, 17 Jun 2005, Chris Robertson wrote:
> 
> >>2005/06/09 21:08:49| idnsSendQuery: FD 5: sendto: (55) No buffer space
> >>available
> 
> This indicates you have run out of networking related buffers in your 
> kernel.
> 
> Unfortunately I don't know the exact details on how to rectify this as I am
> not a BSD guy..
 
Henrik,

Thanks for the reply. That gives me a solid direction in which to continue 
researching the solution.

Is there a general recommended setting for networking buffers (on caches
that process upwards of 150 requests per second)? I didn't find any specific 
recommendation in Squid: The Definitive Guide.

Doug



[squid-users] Slowness, idnsSendQuery and "No buffer space available"

2005-06-17 Thread Doug Darrah
Hi,

Lately, on some BSDi-based (4.1 BSDI BSD/OS 4.1 i386) squids I help
support, we've been seeing a dramatic slowdown in squid performance. From
one of the afflicted systems' cachemgr 60-minute report:

client_http.hits = 18.376455/sec
client_http.errors = 0.013243/sec
client_http.kbytes_in = 81.341042/sec
client_http.kbytes_out = 1117.131508/sec
client_http.all_median_svc_time = 1.311657 seconds
client_http.miss_median_svc_time = 1.542425 seconds
client_http.nm_median_svc_time = 0.399283 seconds
client_http.nh_median_svc_time = 1.242674 seconds
client_http.hit_median_svc_time = 0.469653 seconds

Normally, the median SVC time is much lower, like a full second lower.
Coinciding with this, we've been seeing this in the cache.log:

2005/06/09 21:08:49| idnsSendQuery: FD 5: sendto: (55) No buffer space
available
2005/06/09 21:08:49| comm_udp_sendto: FD 5, 192.168.1.2, port 53:
(55) No buffer space available

These errors aren't seen during non-business hours; only under "normal"
load. I've Googled and searched the list, and haven't found anything
exactly like this. Further, I'm relatively new to supporting squid, so I
don't know exactly what this means. Any help is appreciated.

Doug



[squid-users] Monitoring users in real time

2004-03-31 Thread Doug Kite
In searching through archives, I see this mentioned from time to time, but never
a great solution. Is there a tool for monitoring users in real time?

I set up a proxy for a school and the admin can do all user setup through
Webmin, daily reports come via html from Calamaris or Sarg. Since the admin does
not have command line access to the box, 'tail -f access.log | grep user' is not
really an option.

Squid is set up for basic auth, so what I would need to see is a username and
the sites they have accessed in the last few minutes.

I have seen squidtaild, but it does not appear to be supported any more(?)

Any ideas or tools to suggest? There must be something out there to do this...

Thanks,
Doug



Re: [squid-users] PAM and Squid problem

2004-03-11 Thread Doug Kite
>>> Henrik Nordstrom wrote on 03/10/04 04:41PM >>>

>> I have been trying to get squid to work with PAM.
>
>Don't use PAM unless there is no other options.

Why do you say this? 

I am setting up a squid box and went that route because of the following:
1) wanted the admin to be able to use a web interface (Webmin) to add users and
change passwords easily
2) wanted to use groups (i.e. unix group helper) for acl rules
3) had no other information store to tie it to (i.e. no AD or LDAP)

Is there a better choice considering those criteria?

Thanks,
Doug



Re: [squid-users] authentication with groups

2004-03-04 Thread Doug Kite
>>> Henrik Nordstrom wrote on 03/04/04 10:12AM >>>

>See the auth_param directive, and/or the Squid FAQ chapter on 
>authentication.
>
>You need to tell Squid how it is supposed to verify the login details.

Ok, I was thinking the external_acl_type was all you needed. So, I have added:

auth_param basic program /usr/lib/squid/pam_auth

(I added the /etc/pam.d/squid file and tested pam_auth from the command line
and it worked.)

So then do I need two acls, one to do basic auth as such:
acl foo proxy_auth REQUIRED

...and another acl to do the group check (as shown before):
acl full_access external unix_group web

If yes, then do I have to combine them on the http_access line like so?
http_access allow foo full_access

Does the auth_param directive replace the authenticate_program directive? I
could not find auth_param in the squid.conf--it only mentions
authenticate_program.
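A sketch of the combined configuration being asked about (acl names as used in
this thread; note that with a %LOGIN external acl, Squid triggers
authentication itself, so the separate proxy_auth acl is often optional):

auth_param basic program /usr/lib/squid/pam_auth
auth_param basic children 5
auth_param basic realm Squid proxy
external_acl_type unix_group %LOGIN /usr/lib/squid/squid_unix_group -p
acl full_access external unix_group web
http_access allow full_access
http_access deny all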

Thanks for your patience and help,
Doug



Re: [squid-users] authentication with groups

2004-03-04 Thread Doug Kite
>>> Henrik Nordstrom wrote on 03/04/04 01:48AM >>>

>> external_acl_type unix_group %LOGIN /usr/lib/squid/squid_unix_group -p
>> acl full_access external unix_group web
>> http_access allow full_access
>> 
>> When I try to browse from a client, it does not prompt me for a username or
>> password, and goes directly to a page that says:
>
>Have you configured authentication?

This must be the problem. At the risk of sounding dense, I thought what I had
done above *WAS* configuring authentication. What else needs to be done? To
answer the question, no the above config is all I have done.

>Where did you insert the above rules in relation to your existing 
>http_access rules? (order IS important in http_access)

This is a new setup that I am testing so there are only two http_access lines,
in this order:
http_access allow full_access
http_access deny all

>Any errors in cache.log?
 
No. I can see where it is starting the helper processes.

Thanks very much,
Doug



Re: [squid-users] authentication with groups

2004-03-03 Thread Doug Kite
I cannot get the unix group helper to work. I added the following lines to
squid.conf:

external_acl_type unix_group %LOGIN /usr/lib/squid/squid_unix_group -p
acl full_access external unix_group web
http_access allow full_access

When I try to browse from a client, it does not prompt me for a username or
password, and goes directly to a page that says:
error cache access denied
you are not currently allowed to request ... until you have authenticated
yourself.

The access.log just logs a "denied" message.

There are 5 squid_unix_group processes running. No errors in syslog.

The user exists and is in the group.

What else am I missing? I am running the squid package from Debian (sarge),
version 2.5.STABLE4.

Thanks for any help,
Doug



>>> Henrik Nordstrom <[EMAIL PROTECTED]> 03/03/04 11:23AM >>>
On Wed, 3 Mar 2004, Doug Kite wrote:

> I have read some about LDAP auth with groups, but if I have no LDAP server
> at present, is there an easier way? Can you use unix /etc/group ?

Yes, there is a UNIX group helper as well (unix_group)

> Or would setting up an ldap server on the same box as squid be better?

Using a directory service for user accounts is generally recommended, but
it obviously depends on your environment.

Regards
Henrik



[squid-users] authentication with groups

2004-03-03 Thread Doug Kite
I wish to use authentication and groups. I have no existing directory that I
need to tie into, i.e. no LDAP or domain. 

I have read some about LDAP auth with groups, but if I have no LDAP server at
present, is there an easier way? Can you use unix /etc/group ?

Or would setting up an ldap server on the same box as squid be better?

This is on Squid 2.5.STABLE4

Thanks,
Doug