Re: [squid-users] Proxy client certificate authentication rewritten to username/password authentication

2018-10-09 Thread Arnt Richard Rørvik
Hi, Alex!

Thank you for your very kind and intelligent reply!

The origin server does not request any certificates from the client. We can
instruct the clients to use the proxy directly, if terminating traffic directly
in Squid would ease the implementation.

Furthermore, commenting on your additions (quoted with >):

> * IIRC, X509 certificates do not contain user names and passwords, at least 
> not in HTTP authentication sense. Where should Squid get the user name and 
> password to add to the URL?

The general idea is to have a table in Squid (or make such a table accessible
from elsewhere) with a number of usernames and passwords that would match
certain placeholders in the startup URL issued by the clients. The startup URL
would easily and uniquely match a certain pattern, such as

https://www.service4us.com/login/dologin.php?username=usernameplaceholder1&password=passwordplaceholder1
for iPad 1

Here, username placeholder 1 and password placeholder 1 would of course be
replaced with the proper username and password looked up in the aforementioned
table before the request is handed over to the origin server.
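
A very rough sketch of how I imagine this could be wired up with a
url_rewrite_program helper; the file names, placeholder names and lookup-table
format below are invented purely for illustration, and the exact request/reply
line format for the helper differs between Squid versions:

# squid.conf (sketch)
url_rewrite_program /usr/local/bin/credfill.sh
url_rewrite_children 5

# /usr/local/bin/credfill.sh (sketch): Squid hands the helper one request per
# line (URL first); the helper prints the possibly rewritten URL back.
#!/bin/sh
while read url rest; do
  case "$url" in
    *usernameplaceholder1*)
      # look up the real credentials for placeholder 1 in a simple
      # colon-separated table: placeholder1:realuser:realpass
      user=$(awk -F: '$1 == "placeholder1" {print $2}' /etc/squid/credentials)
      pass=$(awk -F: '$1 == "placeholder1" {print $3}' /etc/squid/credentials)
      echo "$url" | sed "s/usernameplaceholder1/$user/;s/passwordplaceholder1/$pass/"
      ;;
    *)
      echo "$url"   # no change
      ;;
  esac
done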

> If you detail whether your TLS clients know about Squid existence (i.e. 
> connect to/through Squid),

That would be possible.

> whether your clients are regular browsers (or custom software you control), 

This would probably be the standard managed browser on iPads, that is, Safari 
with policies. They could in theory be anything, but manageability would 
normally dictate a managed browser.

> and whether your origin servers request client certificates
Nope! :-)

> , then it may be possible to determine whether what you want is or can be 
> supported. To detail your setup, consider describing what happens at TCP, 
> TLS, and HTTP layers between a typical client, Squid, and origin server in 
> your environment.

We will indeed inspect more thoroughly how the traffic behaves at the
different layers, and ask the origin server vendor for assistance.

Once again - thank you for your very kind and insightful help!


Arnt Richard Rørvik
Senior engineer
Dept. of IT
Section of strategy and governance
Norwegian University of Science and Technology (NTNU), https://www.ntnu.edu/
7491 Trondheim
Norway

SfB-address: sip:arnt.r.ror...@ntnu.no
Phone: +47 73 55 91 67 / Mobile: +47 957 01 081
https://www.ntnu.no/ansatte/arnt.r.rorvik
https://no.linkedin.com/in/arrorvik 
https://www.youracclaim.com/badges/eaae291f-c686-4e84-b48d-96fa01a37401




-----Original Message-----
From: Alex Rousskov
Sent: 9 October 2018 22:26
To: squid-users@lists.squid-cache.org; Arnt Richard Rørvik

Subject: Re: [squid-users] Proxy client certificate authentication rewritten to
username/password authentication

On 10/09/2018 01:39 PM, Arnt Richard Rørvik wrote:

> Can the Squid web proxy be used to request and verify the machine 
> certificate of workstations trying to initiate a session towards a 
> given web server (outside Squid),

Yes if by "machine certificate" you mean an X509 certificate that TLS servers 
can request from TLS clients. Squid supports TLS client certificate 
request/validation.
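
For illustration only, requesting and verifying such a certificate on a Squid
listening port looks roughly like this; the paths and the CN value are
placeholders, and option names vary a little between Squid versions:

https_port 3130 cert=/etc/squid/proxy.pem key=/etc/squid/proxy.key clientca=/etc/squid/issuing-ca.pem

# optionally gate access on certificate attributes
acl good_cert user_cert CN workstation01.example.org
http_access allow good_cert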

Please note that if Squid requests that certificate, then the TLS connection 
has to be between the client and Squid, not between the client and the origin 
server. It is impossible for Squid to request a client certificate on a TLS 
connection that Squid does not participate in (beyond shoveling TCP payload 
bytes).

If the origin server itself requests a TLS client certificate, then, in theory, 
Squid can inspect the certificate returned by the client.
However, I doubt such requested-by-origin TLS client certificate inspection 
works out of the box, and it usually would not make sense in common deployments 
(because Squid would not have access to the validating CA used to sign the 
client certificate), but it is technically possible to extract that client 
certificate from a client-origin connection IIRC -- it is not encrypted -- and 
validate it against known-to-Squid CAs.


> and rewrite this session initiation on the way out (towards the
> given web server),

If you want Squid to request the certificate, then the TLS connection has to be 
between the client and Squid. If needed, Squid will open a TLS connection to 
the origin server. The two TLS connections are unrelated from TLS point of view.


> adding a username and password in the URL.

Yes, if client sends plain text requests to Squid, or if Squid bumps encrypted 
client requests. However:

* TLS client certificate validation is currently not fully compatible with TLS 
client connection bumping (i.e. SslBump) IIRC.

* When dealing with secure origin servers, popular browsers will not send plain
text requests to Squid (i.e., "GET https://example.com").
Instead, they will want to establish dumb CONNECT tunnels through Squid.
Those tunnels do not expose HTTP request URLs (unless Squid bumps them; see the
bumping sketch after this list).

* IIRC, X509 certificates do not contain user names and passwords, at least not
in HTTP authentication sense. Where should Squid get the user name and password
to add to the URL?
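
For reference, bumping those CONNECT tunnels so that Squid can see (and a
helper can rewrite) the request URLs is configured roughly along these lines;
a minimal sketch with placeholder paths, not a recommendation:

http_port 3128 ssl-bump generate-host-certificates=on cert=/etc/squid/bump-ca.pem key=/etc/squid/bump-ca.key

# a certificate generator helper (ssl_crtd or security_file_certgen,
# depending on the Squid version) is normally configured as well
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all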

[squid-users] Proxy client certificate authentication rewritten to username/password authentication

2018-10-09 Thread Arnt Richard Rørvik
Hi, Squid users, strategists, designers and developers!

Can the Squid web proxy be used to request and verify the machine certificate 
of workstations trying to initiate a session towards a given web server 
(outside Squid), and rewrite this session initiation on the way out (towards
the given web server), adding a username and password in the URL?

If not natively/directly supported, can it be achieved using any extensions or 
scripting options available with Squid?

Looking forward to hearing from you!
Best regards,
Arnt Richard Rørvik, NTNU, Norway.


Arnt Richard Rørvik
Senior engineer
Dept. of IT
Section of strategy and governance
Norwegian University of Science and Technology (NTNU), https://www.ntnu.edu/
7491 Trondheim
Norway

SfB-address: sip:arnt.r.ror...@ntnu.no
Phone: +47 73 55 91 67 / Mobile: +47 957 01 081
https://www.ntnu.no/ansatte/arnt.r.rorvik
https://no.linkedin.com/in/arrorvik
https://www.youracclaim.com/badges/eaae291f-c686-4e84-b48d-96fa01a37401




Re: [squid-users] Squid 3.4.8 Reverse with multiple SSL Sites and multiple Certs/Domains

2017-12-20 Thread richard-tx
SNI on squid isn't likely to happen tomorrow but if you think about it, one
cert for all of your sites isn't a bad idea.  Makes life easier.  Next year,
wildcard certs will be another way to resolve the issue.  

I am not sure how many sites can be in one cert.  I did 4.  

All the best

Rich





--
Sent from: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-Users-f1019091.html


Re: [squid-users] Squid 3.4.8 Reverse with multiple SSL Sites and multiple Certs/Domains

2017-12-20 Thread richard-tx
I came up with a solution.  What I did was to get one cert that covers
multiple https websites.  Letsencrypt.com permits you to have multiple
hostnames, and the certbot software allows you to put multiple FQDNs in a
single request or to extend an existing cert.  The certs from
letsencrypt.com are not tied to an IP address, so if your external-facing IP
address changes, that presents no issues.
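
For what it's worth, the certbot invocation ends up looking something like
this (the domains and webroot path are just examples):

certbot certonly --webroot -w /var/www/html \
    -d www.example.com -d shop.example.com -d mail.example.com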

On the plus side, since all communication between squid and the server is
over http, that relieves the already busy webserver of the job of
encrypting/decrypting and places it on the reverse proxy.  Starting next
year, letsencrypt will start issuing wildcard certs.
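
A rough sketch of the squid.conf side of such a TLS-offloading reverse proxy;
the hostnames, paths and backend address are placeholders, not my actual
config:

https_port 443 accel vhost defaultsite=www.example.com cert=/etc/letsencrypt/live/www.example.com/fullchain.pem key=/etc/letsencrypt/live/www.example.com/privkey.pem

cache_peer 192.0.2.10 parent 80 0 no-query originserver name=backend
acl our_sites dstdomain www.example.com shop.example.com
http_access allow our_sites
cache_peer_access backend allow our_sites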





--
Sent from: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-Users-f1019091.html


[squid-users] CONNECT + custom data

2017-11-16 Thread Richard Peeters
Hi All,

I have a requirement to forward proxy an opaque stream of data. One of
the servers (acting as a client -A- to SQUID ) will use the CONNECT
method to connect to SQUID (on server B) and squid will then proxy
this data for A.

My question is that I want to pass metadata from A to B, which B will strip
out before proxying the data outbound, and I cannot find a way to do that.

If this were an HTTP stream, headers could have been added by A and stripped
by B, but in my case I don't think even content adaptation will help.

Can someone please advise on what feature of SQUID I should be looking at to
achieve this, or whether it is possible at all.
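
For concreteness, the sort of thing I had in mind; the header name is
invented, and I am not sure Squid actually supports using it this way:

# client A attaches metadata to the CONNECT it sends to B (Squid):
#   CONNECT destination.example.net:443 HTTP/1.1
#   X-Tunnel-Meta: job=1234;tenant=acme
#
# on B, the metadata can at least be matched and logged:
acl has_meta req_header X-Tunnel-Meta .
logformat tunnelmeta %ts.%03tu %>a %rm %ru meta="%{X-Tunnel-Meta}>h"
access_log /var/log/squid/tunnel.log tunnelmeta has_meta
#
# when Squid goes direct it only opens a raw TCP connection to the
# destination, so the CONNECT headers themselves are not sent onward.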

I have been reading documentation for less than 24 hours, please
pardon my ignorance.

Thanks,
Rich


[squid-users] Squid as a proxy (Forwarding loop detected)

2015-12-28 Thread Richard
Hi,

This question has been asked many times before, but unfortunately the ones
I checked did not seem to have a solution for me.

I am trying to set up squid as a transparent proxy, but I keep getting the
error "Forwarding loop detected".

I have the following setup:
Client [172.24.30.11] <-> Router [172.24.30.253 && 172.24.10.253 ] <->
Squid Server [172.24.10.13]

The configuration for squid is as following:
http_port 8080
http_port 3129 intercept
http_access allow all

The iptables rule on my router is as follow:
iptables -t nat -I PREROUTING -s 172.24.30.11 -p tcp --dport 80 -j DNAT
--to 172.24.10.13:3129

Now when the client tries to download something I get the following logs:
 access.log 
1451303118.327  0 172.24.10.13 TCP_MISS/403 3751 GET
http://74.125.136.94/ - HIER_NONE/- text/html
1451303118.327  0 172.24.30.11 TCP_MISS/403 3915 GET
http://74.125.136.94/ - HIER_DIRECT/172.24.10.13 text/html

 cache.log 
2015/12/28 12:45:14 kid1| Starting Squid Cache version 3.3.8 for
x86_64-redhat-linux-gnu...
2015/12/28 12:45:14 kid1| Process ID 776
2015/12/28 12:45:14 kid1| Process Roles: worker
2015/12/28 12:45:14 kid1| With 16384 file descriptors available
2015/12/28 12:45:14 kid1| Initializing IP Cache...
2015/12/28 12:45:14 kid1| DNS Socket created at [::], FD 7
2015/12/28 12:45:14 kid1| DNS Socket created at 0.0.0.0, FD 8
2015/12/28 12:45:14 kid1| Adding domain int-mgt.bitcube.nl from
/etc/resolv.conf
2015/12/28 12:45:14 kid1| Adding domain int-prd.bitcube.nl from
/etc/resolv.conf
2015/12/28 12:45:14 kid1| Adding domain dmz-prd.bitcube.nl from
/etc/resolv.conf
2015/12/28 12:45:14 kid1| Adding nameserver 172.24.10.253 from
/etc/resolv.conf
2015/12/28 12:45:14 kid1| Logfile: opening log
daemon:/var/log/squid/access.log
2015/12/28 12:45:14 kid1| Logfile Daemon: opening log
/var/log/squid/access.log
2015/12/28 12:45:14 kid1| Local cache digest enabled; rebuild/rewrite every
3600/3600 sec
2015/12/28 12:45:14 kid1| Store logging disabled
2015/12/28 12:45:14 kid1| Swap maxSize 0 + 262144 KB, estimated 20164
objects
2015/12/28 12:45:14 kid1| Target number of buckets: 1008
2015/12/28 12:45:14 kid1| Using 8192 Store buckets
2015/12/28 12:45:14 kid1| Max Mem  size: 262144 KB
2015/12/28 12:45:14 kid1| Max Swap size: 0 KB
2015/12/28 12:45:14 kid1| Using Least Load store dir selection
2015/12/28 12:45:14 kid1| Current Directory is /
2015/12/28 12:45:14 kid1| Loaded Icons.
2015/12/28 12:45:14 kid1| HTCP Disabled.
2015/12/28 12:45:14 kid1| Squid plugin modules loaded: 0
2015/12/28 12:45:14 kid1| Adaptation support is off.
2015/12/28 12:45:14 kid1| Accepting HTTP Socket connections at
local=[::]:8080 remote=[::] FD 11 flags=9
2015/12/28 12:45:14 kid1| Accepting NAT intercepted HTTP Socket connections
at local=0.0.0.0:3129 remote=[::] FD 12 flags=41
2015/12/28 12:45:15 kid1| storeLateRelease: released 0 objects
2015/12/28 12:45:18 kid1| WARNING: Forwarding loop detected for:
GET / HTTP/1.1
User-Agent: curl/7.29.0
Host: 74.125.136.94
Accept: */*
Via: 1.1 srv-proxy01. (squid/3.3.8)
X-Forwarded-For: 172.24.30.11
Cache-Control: max-age=259200
Connection: keep-alive

If I configure the client to use a proxy (on port 8080) it all works fine.

I have a feeling I'm forgetting something simple :(
Hopefully someone can point me in the right direction?
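
(For comparison, the variant I have not tried yet, doing the NAT step on the
Squid server itself and only policy-routing port 80 there from the router,
would look roughly like this; interface and network values assumed:)

# on the router: policy-route port 80 towards 172.24.10.13 without NAT,
# then on the Squid server itself:
iptables -t nat -A PREROUTING -s 172.24.30.0/24 -p tcp --dport 80 -j REDIRECT --to-ports 3129
# PREROUTING never sees traffic the proxy originates itself, so Squid's own
# outgoing requests cannot be redirected back into it.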

Thanks !

Richard


[squid-users] Fw: new message

2015-10-27 Thread Richard Hubbell
Hey!

 

New message, please read <http://micheleruiz.com/among.php?jha>

 

Richard Hubbell



[squid-users] empty swap directory

2013-10-22 Thread richard lucassen
I just upgraded from 2.7 to 3.1.19. It runs fine, but
the /var/spool/squid3/ directory is empty. A squid3 -z says it has
created the swap directories, but in fact the dir remains empty:

# squid3 -z
2013/10/22 19:20:50| Creating Swap Directories

# ls -al /var/spool/squid3/
total 8
drwxr-xr-x  2 proxy proxy 4096 2013-10-20 12:18 .
drwxr-xr-x 10 root  root  4096 2013-10-22 17:19 ..

Did I miss something somewhere? Maybe the answer is somewhere on the
net, but the search parameters cause a lot of noise.

R.

-- 
___
It is better to remain silent and be thought a fool, than to speak
aloud and remove all doubt.

+--+
| Richard Lucassen, Utrecht|
+--+


[squid-users] Re: empty swap directory

2013-10-22 Thread richard lucassen
On Tue, 22 Oct 2013 11:43:05 -0600
Alex Rousskov rouss...@measurement-factory.com wrote:

  I just upgraded from 2.7 to 3.1.19. It runs fine, but
  the /var/spool/squid3/ directory is empty. A squid3 -z tells it
  has created the swap directories, but in fact the dir remains empty:
  
  # squid3 -z
  2013/10/22 19:20:50| Creating Swap Directories
  
  # ls -al /var/spool/squid3/
  total 8
  drwxr-xr-x  2 proxy proxy 4096 2013-10-20 12:18 .
  drwxr-xr-x 10 root  root  4096 2013-10-22 17:19 ..
  
  Did I miss something somewhere? Maybe the answer is somewhere on the
  net, but the search parameters cause a lot of noise.
 
 Perhaps you do not have any cache_dir directives in your squid.conf?
 Modern Squids do not cache on disk by default...

Ok, that explains quite a lot :)
 
 If my guess is wrong, please post your cache_dir lines and the entire
 squid-z output if possible.

Indeed there are no cache_dir lines in the conf file. I copied the old
squid.conf and with some tweaks squid3 works like a charm; the only
thing I was missing was a populated cache directory. If no such
directory is needed, I won't create one, as I don't think I need it
here (just a few workstations).
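
(For the record, if I ever do want a disk cache, a single line along these
lines should be enough; the size and L1/L2 numbers are just an example:)

cache_dir ufs /var/spool/squid3 2048 16 256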

One thing I noticed is that squid3 is much faster than 2.7, although I
read somewhere on the net that 2.7 is slightly faster than squid3.

One other question, I'm certainly not a squid guru: I'd like to use
squid/squidGuard on an internal ipv4 network and provide ipv6 sites to
ipv4-only clients, let's call it an ipv6-to-ipv4 converter. Is that
possible? Same search problem: the search parameters cause a lot of
noise.

Thnx for your time and answer!

R.

-- 
___
It is better to remain silent and be thought a fool, than to speak
aloud and remove all doubt.

+--+
| Richard Lucassen, Utrecht|
+--+


[squid-users] Re: empty swap directory

2013-10-22 Thread richard lucassen
On Tue, 22 Oct 2013 21:35:39 +0200
richard lucassen mailingli...@lucassen.org wrote:

 One other question, I'm certainly not a squid guru: I'd like to use
 squid/squidGuard on an internal ipv4 network and provide ipv6 sites to
 ipv4-only clients, let's call it an ipv6-to-ipv4 converter. Is that
 possible? Same search problem: the search parameters cause a lot of
 noise.

I think it's possible indeed, I think it works :)

-- 
___
It is better to remain silent and be thought a fool, than to speak
aloud and remove all doubt.

+--+
| Richard Lucassen, Utrecht|
+--+


[squid-users] YAALQ

2013-03-31 Thread richard lucassen
Hello list, Yet Another Access List Question.

As the doc says:

Access list rules are checked in the order they are written. List
searching terminates as soon as one of the rules is a match.

Well, that's quite clear I'd say. But why isn't this working properly:


acl richard2_src 92.68.12.178

[..]
acl no_filter_dst dstdomain /etc/squid/nofilter.domains.txt

acl allow_mime_types rep_mime_type -i ^text/.* ^image/.*
^text/plain ^text/html ^application/.*ms.*word.*
^application/.*ms.*excel.* ^application/.*pdf.* ^application/.*xml.*
^application/.*java.*

[..]

http_access allow no_filter_dst
http_reply_access deny !allow_mime_types richard2_src
[..]


$ cat /etc/squid/nofilter.domains.txt
.xaq.nl

The MIME type filter is working properly. But if I visit
http://www.xaq.nl/ there is an swf file which would normally be blocked by
the allow_mime_types rule. Since the domain is allowed by the rule above
allow_mime_types (the no_filter_dst one), I'd expect squid to accept
the swf on that particular page. But it is denied:

1364728671.633  7 92.68.12.178 TCP_DENIED/403 1532 GET
http://www.xaq.nl/clock.swf - DIRECT/192.87.112.211 text/html

Why is that?

R.

-- 
___
It is better to remain silent and be thought a fool, than to speak
aloud and remove all doubt.

+--+
| Richard Lucassen, Utrecht|
| Public key and email address:|
| http://contact.xaq.nl/   |
+--+


Re: [squid-users] YAALQ

2013-03-31 Thread richard lucassen
On Sun, 31 Mar 2013 12:38:10 +0100
Alex Crow wrote:

Yep, that did the trick :)
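
For the archive, the relevant lines now read roughly like this (per Alex's
suggestion quoted below):

http_access allow no_filter_dst
http_reply_access allow no_filter_dst
http_reply_access deny !allow_mime_types richard2_src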

Thnx!

R.

 You have allowed the http request to the site, but you have denied
 the reply. http_access and http_reply_access are different rule types.
 
 If you add an http_reply_access allow no_filter_dst above the last
 rule I think it will work.
 
 Thanks
 
 Alex
 
 On 31/03/13 12:21, richard lucassen wrote:
  Hello list, Yet Another Access List Question.
 
  As the doc says:
 
  Access list rules are checked in the order they are written. List
  searching terminates as soon as one of the rules is a match.
 
  Well, that's quite clear I'd say. But why isn't this working
  properly:
 
  
  acl richard2_src 92.68.12.178
 
  [..]
  acl no_filter_dst dstdomain /etc/squid/nofilter.domains.txt
 
  acl allow_mime_types rep_mime_type -i ^text/.* ^image/.*
  ^text/plain ^text/html ^application/.*ms.*word.*
  ^application/.*ms.*excel.* ^application/.*pdf.* ^application/.*xml.*
  ^application/.*java.*
 
  [..]
 
  http_access allow no_filter_dst
  http_reply_access deny !allow_mime_types richard2_src
  [..]
  
 
  $ cat /etc/squid/nofilter.domains.txt
  .xaq.nl
 
  The MIME type filter is working properly. But if I visit
  http://www.xaq.nl/ there is an swf file which should be blocked by
  the allow_mime_types. But as the domain is allowed in the rule
  above allow_mime_types, the no_filter_dst, I'd expect that
  squid accepts the swf on that particular page. But it is denied:
 
  1364728671.633  7 92.68.12.178 TCP_DENIED/403 1532 GET
  http://www.xaq.nl/clock.swf - DIRECT/192.87.112.211 text/html
 
  Why is that?
 
  R.
 
 


-- 
___
It is better to remain silent and be thought a fool, than to speak
aloud and remove all doubt.

+--+
| Richard Lucassen, Utrecht|
| Public key and email address:|
| http://contact.xaq.nl/   |
+--+


Re: [squid-users] Squid DNS Issues

2011-06-29 Thread Richard Zulu
Amos,

I made all the changes as advised.

However, it did not make much difference: the queue kept getting large,
together with slow DNS responses.

So I have moved most of the users away from the failing squid service.

Nameservers:
IP ADDRESS        # QUERIES   # REPLIES
----------------  ----------  ----------
xxx.xxx.x.x       185449      157877

That is how the dns is performing so far.

With the load reduction it works somewhat better, although after a long time
its queue still fills up.
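
(For reference, the directives in question look roughly like this; the values
here are only illustrative, not my exact settings:)

dns_timeout 30 seconds
ipcache_size 8192
log_fqdn off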



On Tue, Jun 28, 2011 at 3:47 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 28/06/11 23:25, Richard Zulu wrote:

 Amos,
 Yes, you are right!
 My internal DNS Stats are as follows:
 Nameservers:
 IP ADDRESS                                     # QUERIES # REPLIES
 -- - -
 xxx.xxx.xxx.xx                                     51219     46320

 You realise there is quite a big gap between the queries and replies.

 Other than the NAT errors, queue length errors, and large url warnings
 in the config file, I cannot seem to pinpoint why my server develops a
 long queue and cannot get most of its queries resolved by the DNS.
 DNS is working well for other squid servers. Shifting users from the
 failing squid server to another functioning squid server causes the
 functioning squid server to experience the same issues.

 Sure sign that something they are doing is leading to DNS overload.

 Things to do:
  * reduce dns_timeout, current recommended is now 30 seconds. That will not
 resolve the DNS breakage, but will hopefully reduce waiting queries a lot.

  * check your config for things which cause extra DNS lookups:
     srcdomain or dst ACLs. log_fqdn on. small ipcache size.

  * try turning via on if you have it disabled. See what happens. off can
 hide bad looping problems.

 * maybe look at the most popular sites and see how fast the DNS response
 for AAAA and A lookups are.


 What is interesting though, is that no sooner have I started my squid,
 than I get queue congestion warning and numerous NAT warnings.


 Okay.  NAT warnings are a side effect of NAT being done on the other box. It
 is a security vulnerability and a speed slowdown on accepting new requests, but
 otherwise it is a separate issue. It will be a little bit of work to fix, so I
 think we put it aside for now.

 AIO queue congestion is normal on a proxy with many users after startup, so
 long as it goes away with increasingly rare messages everything is fine.

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.9 and 3.1.12.3



Re: [squid-users] Squid DNS Issues

2011-06-28 Thread Richard Zulu
Thank you Amos,

On Tue, Jun 28, 2011 at 2:17 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On Mon, 27 Jun 2011 08:05:59 +0300, Richard Zulu wrote:

 Hey,
 I have squid version 3.1.9 working as a web forward proxy serving
 close to 500 users with over 54000 requests every other day.
 However, of recent, it is failing to communicate with the DNS Server
 completely which leads to few requests being completed.
 This has led to a long queue as to the requests supposed to be
 completed which later causes squid to hang.
 Shifting the very users to another squid cache causes similar
 problems. What could be the issue here?
 Some of the errors generated in the cache.log are here below:
The NAT failure below and the queue congestion are causing my proxy
server to hang.

However, I have read the link. I DNAT all the port 80 traffic for
my users to my proxy server.
All the users surf using private IPs on their machines with one public
IP on the gateway, which is where I do the DNAT to squid.
How best can I separate normal traffic from NATed traffic to my squid
on my gateway, and what might be causing non-NATed traffic to show up
in my proxy? Is it a NAT vulnerability?


 getsockopt(SO_ORIGINAL_DST) failed on FD 128:

  NAT failure.

 Could be a couple of things. Some seriously bad, and some only trivial.

  * On Linux if you allow non-NAT clients to access a port marked intercept
 or transparent. The ports for direct client-proxy and NAT connections
 need to be separate and the NAT one firewalled away so it can't be accessed
 directly. See the squid wiki config examples for DNAT or REDIRECT for the
 iptables mangle rules that protect against these security vulnerabilities.
  http://wiki.squid-cache.org/ConfigExamples/Intercept/LinuxDnat
  http://wiki.squid-cache.org/ConfigExamples/Intercept/LinuxRedirect

  * On OpenBSD 4.7 or later (may or may not need some patches) it can be the
 same as Linux. OR if they have partial but broken SO_ORIGINAL_DST support it
 shows up but means only that the OS is broken.

  * On other non-Linux systems it is a Squid bug. Means nothing, but I want
 to get it fixed/silenced.


 squidaio_queue_request: WARNING - Queue congestion

 http://wiki.squid-cache.org/KnowledgeBase/QueueCongestion


 urlParse: URL too large (12404 bytes)

 Exactly what it says. URL is too big for Squid to handle. There should be a
 4xx status sent back to the client so it can retry or whatever.


 statusIfComplete: Request not yet fully sent POST


 http://person.com/ims.manage.phtml?__mp[name]=ims:manageaction=bugreportjs_id=47;

 Server or client disconnected halfway through a POST request.


  WARNING: unparseable HTTP header field {Web Server}

 http://wiki.squid-cache.org/KnowledgeBase/UnparseableHeader

 Amos



Re: [squid-users] Squid DNS Issues

2011-06-28 Thread Richard Zulu
Amos,
Yes, you are right!
My internal DNS Stats are as follows:
Nameservers:
IP ADDRESS # QUERIES # REPLIES
-- - -
xxx.xxx.xxx.xx 51219 46320

You realise there is quite a big gap between the queries and replies.

Other than the NAT errors, queue length errors, and large url warnings
in the config file, I cannot seem to pinpoint why my server develops a
long queue and cannot get most of its queries resolved by the DNS.
DNS is working well for other squid servers. Shifting users from the
failing squid server to another functioning squid server causes the
functioning squid server to experience the same issues.

What is interesting, though, is that no sooner have I started my squid
than I get queue congestion warnings and numerous NAT warnings.



On Tue, Jun 28, 2011 at 2:07 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 28/06/11 22:45, Richard Zulu wrote:

 Thank you Amos,

 On Tue, Jun 28, 2011 at 2:17 AM, Amos Jeffriessqu...@treenet.co.nz
  wrote:

 On Mon, 27 Jun 2011 08:05:59 +0300, Richard Zulu wrote:

 Hey,
 I have squid version 3.1.9 working as a web forward proxy serving
 close to 500 users with over 54000 requests every other day.
 However, of recent, it is failing to communicate with the DNS Server
 completely which leads to few requests being completed.
 This has led to a long queue as to the requests supposed to be
 completed which later causes squid to hang.
 Shifting the very users to another squid cache causes similar
 problems. What could be the issue here?
 Some of the errors generated in the cache.log are here below:

 The NAT Failure below and the queue congestion is causing my proxy
 server to hang.

 Hang? the queue congestion is an exponential queue size increase each time
 the warning appears.

 I don't think those two would lead to that (maybe, but I don't think so).
 Slower than normal access times on every request, sure, but not a hang.

 The absence of DNS responses would lead to a hang. So getting back to that.
 Do you have any clues about why Squid may not be able to communicate with
 it? DNS is critical like having the cables plugged in.


 However, I have read the link, I DNAT all the traffic to port 80 for
 my users to my proxyserver
 All the users surf using private IPs on their machines with one public
 IP on the gateway, which is where i do the DNAT to squid.
 How best can i separate normal traffic from NATTED traffic to my squid
 on my gateway and what might be causing NON-Natted traffic to show up
 in my proxy, is it a NAT Vulnerability?

 Ouch. The NAT port change has to be done on the Squid box to retain the
 destination IP properly.
  I recommend looking into policy routing the port 80 packets to the Squid
 box. Then doing the DNAT step on the Squid box.
  http://wiki.squid-cache.org/ConfigExamples/Intercept/IptablesPolicyRoute


 getsockopt(SO_ORIGINAL_DST) failed on FD 128:

  NAT failure.

 Could be a couple of things. Some seriously bad, and some only trivial.

  * On Linux if you allow non-NAT clients to access a port marked
 intercept
 or transparent. The ports for direct client-proxy and NAT connections
 need to be separate and the NAT one firewalled away so it can't be
 accessed
 directly. See the squid wiki config examples for DNAT or REDIRECT for the
 iptables mangle rules that protect against these security
 vulnerabilities.
  http://wiki.squid-cache.org/ConfigExamples/Intercept/LinuxDnat
  http://wiki.squid-cache.org/ConfigExamples/Intercept/LinuxRedirect

  * On OpenBSD 4.7 or later (may or may not need some patches) it can be
 the
 same as Linux. OR if they have partial but broken SO_ORIGINAL_DST support
 it
 shows up but means only that the OS is broken.

  * On other non-Linux systems it is a Squid bug. Means nothing, but I
 want
 to get it fixed/silenced.


 squidaio_queue_request: WARNING - Queue congestion

 http://wiki.squid-cache.org/KnowledgeBase/QueueCongestion


 urlParse: URL too large (12404 bytes)

 Exactly what it says. URL is too big for Squid to handle. There should be
 a
 4xx status sent back to the client so it can retry or whatever.


 statusIfComplete: Request not yet fully sent POST



 http://person.com/ims.manage.phtml?__mp[name]=ims:manageaction=bugreportjs_id=47;

 Server or client disconnected halfway through a POST request.


  WARNING: unparseable HTTP header field {Web Server}

 http://wiki.squid-cache.org/KnowledgeBase/UnparseableHeader

 Amos



 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.9 and 3.1.12.3



[squid-users] Squid DNS Issues

2011-06-26 Thread Richard Zulu
Hey,
I have squid version 3.1.9 working as a web forward proxy serving
close to 500 users with over 54000 requests every other day.
However, recently it has been failing to communicate with the DNS server
completely, which leads to few requests being completed.
This has led to a long queue of requests waiting to be completed, which
later causes squid to hang.
Shifting the very users to another squid cache causes similar
problems. What could be the issue here?
Some of the errors generated in the cache.log are here below:
getsockopt(SO_ORIGINAL_DST) failed on FD 128:
squidaio_queue_request: WARNING - Queue congestion
urlParse: URL too large (12404 bytes)
statusIfComplete: Request not yet fully sent POST
http://person.com/ims.manage.phtml?__mp[name]=ims:manage&action=bugreport&js_id=47
 WARNING: unparseable HTTP header field {Web Server}

 Any help is appreciated.

Richard Zulu

Twitter
www.twitter.com/richardzulu
Skype: zulu.richard
There is no place like 127.0.0.1


[squid-users] Runtime selection of Outgoing IP address

2010-10-05 Thread Richard Warren
Hi All,

At present, I'm using Squid 2.7 and am able to bind a given User to a
specific outgoing IP using just configs. See Figure 1 for illustration.

User1 --> 10.10.62.1
Figure 1. Current mapping through configs
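
(For reference, the config-only mapping in Figure 1 is done with something
like the following; the user name and address are examples:)

acl user1 proxy_auth User1
tcp_outgoing_address 10.10.62.1 user1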


Now, however, I need to extend Squid such that an outgoing IP is randomly
selected from a pool of IPs and bound to the User at runtime when the
User's request is processed. Each subsequent request the User makes will be
bound to a randomly selected outgoing IP.

Request_0 :  User1 --> getRandomIPfromPool();
Request_1 :  User1 --> getRandomIPfromPool();
Request_N :  User1 --> getRandomIPfromPool();

Figure 2. Runtime Outgoing IP binding based on algorithmic IP selection.


That is the problem I'm trying to solve, but aside from searching the archive
and browsing the FAQ I'm lost and would greatly appreciate a pointer in the
right direction (e.g., to reference material or example code).

I'm hoping there is an interface I can write to and configure Squid to use
in place of some default class. I know the logging component has an
interface that allows a programmer to write a custom daemon helper and am
hoping I might do something similar.

Any help, direction, or other input is greatly appreciated.

Thanks in advance!
-Richard Warren





[squid-users] Unusual behaviour when linking ACLs to delay pools

2010-08-19 Thread Richard Greaney
Hi all

I have a problem so strange it's almost laughable.

I'm trying to set up a site with delay pools, but I only want to
forward members of a particular Active Directory group to the delay
pool. I have an authenticator that I have used on countless sites,
which checks to see whether a given user belongs to an AD group,
nested or otherwise. When I put a user in this group and use my acl to
prevent that group from say, accessing a website, it blocks them as
expected. When I apply that same ACL against the delay pool, however,
it doesn't send members into the pool. However, if I alter the ACL to
check for membership of ANOTHER group, then they ARE sent into the
pool. Confused?

Here's my config:

-
external_acl_type ldap_group ttl=70 %LOGIN
/usr/local/squid/libexec/squid/squid_ldap_group.pl #custom
authenticator to check for membership of nested AD groups
auth_param basic program /usr/local/squid/libexec/squid/adauth.pl
#custom authenticator to verify a user/pass combination are correct

delay_initial_bucket_level 100
delay_pools 1
delay_class 1 2
delay_parameters 1 -1/-1 8000/2048

acl all src 0.0.0.0/0.0.0.0
acl validusers proxy_auth REQUIRED
acl badfiles urlpath_regex -i /etc/squid/badfiles.txt
acl throttled external ldap_group Internet-Throttled
acl inetallow external ldap_group Internet-Allow
acl inetdisallow external ldap_group Internet-Disallow

delay_access 1 allow throttled
delay_access 1 deny all

http_access deny throttled badfiles
--

So if I put a user in the group Internet-Throttled, they won't be
sent into the pool, but will be prohibited from downloading files in
the badfiles.txt list. Group membership testing is working for the
http_access deny, but not for delay_access.
But if I alter the definition of the 'throttled' acl so it refers to
members of the AD group Internet-Allow, then all members of that
group ARE sent to the delay pool.

I'm finding it hard to attribute blame anywhere. It seems to be that
it can't be the authenticator, the group, or the delay pool syntax as
they all work fine under certain circumstances.

Any advice is greatly welcomed.

Thanks
Richard


Re: [squid-users] Unusual behaviour when linking ACLs to delay pools

2010-08-19 Thread Richard Greaney
On Thu, Aug 19, 2010 at 10:10 PM, Amos Jeffries squ...@treenet.co.nz wrote:

 This category are tested so often on high-speed pathways they can only use
 the data immediately available in memory and will not do remote lookups for
 auth or external helper results.

 They will *sometimes* be able to use cached in-memory results from previous
 lookups. So the slow category ACL types are not prohibited in fast
 category access controls. But they are not guaranteed to work 100% of the
 time either.


 I suspect your http_access rules are different when testing for the two
 groups. In such a way that the throttled ACL never gets tested in
 http_access (causing its result to be cached for delay_Access).


 My favorite hack for pre-caching these types of lookup results for later use
 is to test the ACL by itself early in the config with !all tacked on the end
 of the line (which prevents the line as a whole matching and doing the
 allow/deny).

Thanks! And you'd be dead right. That's exactly what was happening.
The test against another group was succeeding as it had already been
used for Internet access by proxy_auth.
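
(In config terms, the hack amounts to an early line like this, which never
matches as a whole but forces the group lookup to run and be cached:)

http_access deny throttled !all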

I now have another problem, however, in that it appears you can't AND
multiple ACLs to determine whether or not they can access a delay
pool. Say for instance, I wanted to do:

delay_access 1 allow throttled badfiles
delay_access 1 deny all

This would throttle only when members of the 'throttled' acl attempt
to download files in the 'badfiles' acl. I can apply the pool to one
ACL or the other, but not both. I also tried getting cheeky and
stacking multiple conditions into the ACL definition. eg:

acl throttled urlpath_regex -i /etc/squid/badfiles.txt
acl throttled external ldap_group Internet-Throttled

But squid doesn't like mixing multiple conditions to make a single acl.

Is there a workaround for this?

Thanks
Richard


Re: [squid-users] Unusual behaviour when linking ACLs to delay pools

2010-08-19 Thread Richard Greaney
On Fri, Aug 20, 2010 at 11:04 AM, Richard Greaney rkgrea...@gmail.com wrote:
 On Thu, Aug 19, 2010 at 10:10 PM, Amos Jeffries squ...@treenet.co.nz wrote:

 This category are tested so often on high-speed pathways they can only use
 the data immediately available in memory and will not do remote lookups for
 auth or external helper results.

 They will *sometimes* be able to use cached in-memory results from previous
 lookups. So the slow category ACL types are not prohibited in fast
 category access controls. But they are not guaranteed to work 100% of the
 time either.


 I suspect your http_access rules are different when testing for the two
 groups. In such a way that the throttled ACL never gets tested in
 http_access (causing its result to be cached for delay_Access).


 My favorite hack for pre-caching these types of lookup results for later use
 is to test the ACL by itself early in the config with !all tacked on the end
 of the line (which prevents the line as a whole matching and doing the
 allow/deny).

 Thanks! And you'd be dead right. That's exactly what was happening.
 The test against another group was succeeding as it had already been
 used for Internet access by proxy_auth.

 I now have another problem, however, in that it appears you can't AND
 multiple ACLs to determine whether or not they can access a delay
 pool. Say for instance, I wanted to do:

 delay_access 1 allow throttled badfiles
 delay_access 1 deny all

 This would throttle only when members of the 'throttled' acl attempt
 to download files in the 'badfiles' acl. I can apply the pool to one
 ACL or the other, but not both. I also tried getting cheeky and
 stacking multiple conditions into the ACL definition. eg:

 acl throttled urlpath_regex -i /etc/squid/badfiles.txt
 acl throttled external ldap_group Internet-Throttled

 But squid doesn't like mixing multiple conditions to make a single acl.

 Is there a workaround for this?

 Thanks
 Richard

Ignore the last message. I was being an idiot. There's no need for any
workarounds. The following acl works fine:

delay_access 1 allow badfiles throttled
delay_access 1 deny all


[squid-users] Bug 2973 - Memory leak when handling pathless http requests

2010-07-02 Thread Richard Wall
I just filed a new bug and wondered if anyone here had seen a similar
problem or had any suggestions about how to track down the possible
memory leak.

 * http://bugs.squid-cache.org/show_bug.cgi?id=2973

There seems to be quite a bad memory leak in the way Squid handles HTTP
requests which do not contain a path. For example, one of our customers' Squid
servers, deployed in transparent mode, is receiving many thousands of such
requests, presumably some sort of DOS attack on the named web server.

{{{
GET  HTTP/1.1
Host: aferist.su
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; de; rv:1.9.2)
Gecko/20100115 Firefox/3.6b1 (de) (TL-FF) (.NET CLR 3.5.30729)
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: Keep-Alive
}}}

Squid logs these as TCP_DENIED/400
{{{
1278006100.745  0 1.2.3.4 TCP_DENIED/400 870 GET NONE:// - NONE/- text/html
}}}

When the attack starts, we observe a rapid increase in the Squid resident
memory size until eventually Squid crashes.

-RichardW.


Re: [squid-users] how to cache youtube

2010-04-09 Thread Richard Wall
On Fri, Apr 9, 2010 at 6:35 AM, Kinkie gkin...@gmail.com wrote:
 On Fri, Apr 9, 2010 at 12:32 AM, sameer khan khanza...@hotmail.com wrote:
 Please see http://wiki.squid-cache.org/ConfigExamples/DynamicContent/YouTube
 thanks kinkie,
 but does it work ? any one experience with below link ? will be much 
 appreciated
 for any help.
 To the best of my knowledge, it does. The only way to be sure is by trying.

Sameer, Kinkie,

There is one thing to beware of. I have recently encountered Squid
segfaults with the Youtube loop detection patch on latest Squid
2.7.STABLE8/9.
Previous versions of Squid worked reliably with this loop detection patch.

The problem has already been recorded on the Lusca  bug tracker. See:
 * http://code.google.com/p/lusca-cache/issues/detail?id=86

You will also find an updated loop detection patch contributed by
chudy.fernandez - I haven't yet tested it but will report back when I
do.

It's also worth noting that the alternative minimum_object_size 512
bytes workaround no longer seems to work. I have only done a brief
investigation, but it seems that many Youtube redirect responses are
now larger than 512 bytes.

-RichardW.


[squid-users] TProxy for Squid-2.7.STABLE8

2010-02-23 Thread Richard Wall
Hi Henrik, Amos, etc

I've been trying to compile Squid-2.7.STABLE8 (squid-2.HEAD-20100222)
but am having difficulty applying the Visolve TProxy-4 patch
 * http://www.visolve.com/squid/squid-tproxy.php

The patch no longer applies cleanly. I spent some time trying to
resolve the conflicts, and after successful compilation, Squid is
listening on its port, but also complains to cachelog as follows and
it's not spoofing the source IP:
{{{
Accepting proxy HTTP connections at 192.168.251.106, port 800, FD 27.
...
commBind: Cannot bind socket FD 31 to 192.168.251.106:800: (98)
Address already in use
}}}

I'm compiling on an Ubuntu 9.10 machine with Linux kernel
2.6.31-19-generic and Linux headers packages installed
{{{
aptitude search ~ilinux-headers
i   linux-headers-2.6.31-19
   - Header files related to Linux kernel version 2.6.31
i   linux-headers-2.6.31-19-generic
   - Linux kernel headers for version 2.6.31 on x86/x86_64
i A linux-headers-generic
   - Generic Linux kernel headers
}}}

I'm deploying this on a Slackware based box with custom Linux Kernel
2.6.31.6 (TProxy module enabled)
{{{
cachebox# dmesg  | grep -i tproxy
NF_TPROXY: Transparent proxy support initialized, version 4.1.0
NF_TPROXY: Copyright (c) 2006-2007 BalaBit IT Ltd.
}}}

I think the problem might be caused by this recent patch to the libcap
code, particularly - around tools.c:
 * http://www.squid-cache.org/Versions/v2/HEAD/changesets/12640.patch

It looked like the changes to tools.c that had previously been applied
by the Tproxy patch are now part of the 2.7 tree, but re-factored
slightly. Then again I may be totally off the mark :)

I've attached my latest version of the patch in which I rejected all
the Tproxy changes to tools.c.

Has anyone already prepared a more up to date version of the Tproxy
patch? If not, I'd like to help fix the patch, but perhaps someone can
quickly summarise what might be the problem and what needs doing.

-RichardW.
Index: configure
===
--- configure	(revision 9786)
+++ configure	(working copy)
@@ -9554,7 +9554,6 @@
 	grp.h \
 	libc.h \
 	linux/netfilter_ipv4.h \
-	linux/netfilter_ipv4/ip_tproxy.h \
 	malloc.h \
 	math.h \
 	memory.h \
@@ -29104,10 +29103,10 @@
 fi
 
 if test $LINUX_TPROXY; then
-{ $as_echo $as_me:$LINENO: checking if TPROXY header files are installed 5
-$as_echo_n checking if TPROXY header files are installed...  6; }
+{ echo $as_me:$LINENO: checking if sys/capability header files are installed 5
+echo $ECHO_N checking if sys/capability header files are installed... $ECHO_C 6; }
 # hold on to your hats...
-if test $ac_cv_header_linux_netfilter_ipv4_ip_tproxy_h = yes  test $LINUX_NETFILTER = yes; then
+if test $ac_cv_header_sys_capability_h = yes  test $LINUX_NETFILTER = yes; then
 	LINUX_TPROXY=yes
 
 cat confdefs.h \_ACEOF
@@ -29122,8 +29121,12 @@
 _ACEOF
 
 fi
-{ $as_echo $as_me:$LINENO: result: $LINUX_TPROXY 5
-$as_echo $LINUX_TPROXY 6; }
+{ echo $as_me:$LINENO: result: $LINUX_TPROXY 5
+echo ${ECHO_T}$LINUX_TPROXY 6; }
+
+if test $LINUX_TPROXY = no  ; then
+echo WARNING: Cannot find necessary system capability headers files
+echo  Linux TProxy-4 support WILL NOT be enabled
 if test $use_libcap != yes; then
{ $as_echo $as_me:$LINENO: WARNING: Missing needed capabilities (libcap or libcap2) for TPROXY 5
 $as_echo $as_me: WARNING: Missing needed capabilities (libcap or libcap2) for TPROXY 2;}
@@ -29131,11 +29134,6 @@
sleep 10
 fi
 fi
-if test $LINUX_TPROXY = no  test $LINUX_NETFILTER = yes; then
-echo WARNING: Cannot find TPROXY headers, you need to patch your kernel with the
-echo tproxy package from:
-echo  - lynx http://www.balabit.com/downloads/files/tproxy/;
-sleep 10
 fi
 
 if test -z $USE_GNUREGEX ; then
Index: configure.in
===
--- configure.in	(revision 9786)
+++ configure.in	(working copy)
@@ -1802,7 +1802,6 @@
 	grp.h \
 	libc.h \
 	linux/netfilter_ipv4.h \
-	linux/netfilter_ipv4/ip_tproxy.h \
 	malloc.h \
 	math.h \
 	memory.h \
@@ -2946,9 +2945,9 @@
 dnl Linux Netfilter/TPROXY support requires some specific header files and libcap
 dnl Shamelessly copied from shamelessly copied from above
 if test $LINUX_TPROXY; then
-AC_MSG_CHECKING(if TPROXY header files are installed)
+AC_MSG_CHECKING(if sys/capability header files are installed)
 # hold on to your hats...
-if test $ac_cv_header_linux_netfilter_ipv4_ip_tproxy_h = yes  test $LINUX_NETFILTER = yes; then
+if test $ac_cv_header_sys_capability_h = yes  test $LINUX_NETFILTER = yes; then
 	LINUX_TPROXY=yes
 	AC_DEFINE(LINUX_TPROXY, 1, [Enable real Transparent Proxy support for Netfilter TPROXY.])
 else
@@ -2961,13 +2960,12 @@
LINUX_TPROXY=no
sleep 10
 fi
+
+if test $LINUX_TPROXY = no  ; then
+echo WARNING: Cannot find necessary system capability 

Re: [squid-users] Configure a transparent proxy to pass through non-http port 80 traffic [Was: How to handle the error: Unsupported method 'BitTorrent']

2010-01-12 Thread Richard Wall
On Sat, Jan 9, 2010 at 1:10 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 I would not worry about that. P2P apps which use port 80 usually have other
 methods of connecting. Particularly their own dedicated protocol ports.
 Leave those open and they work better.

 The apps which do not use port 80 for HTTP properly (ie doing correct HTTP
 tunneling) are in violation of web usage standards.  Your contracts should
 not allow you to be penalized for providing a properly working web proxy to
 your clients.

Thanks Amos,

Sorry for not replying sooner. I agree and I think I was wrong about
the proportion of non-http traffic. The problem lay elsewhere.

 If you must look at it, then the workaround hack of identifying packets data
 content has to be done in the iptables routing levels. This is a tricky
 problem since there is no guarantee that the needed data is in the first
 packet of a connection. Once packets enter Squid its too late to bypass.

Yeah, we're using a Foundry ServerIron L7 switch which seems to have a
facility to reconstruct the http headers and use those in routing
policies. Sounds like magic to me, but if I manage to get that
working, I'll report back.

I'm also still interested in the wccp_return_method as a way of
bypassing non-http traffic, but in a previous thread it seemed that
Squid doesn't support this yet:
 * http://www.squid-cache.org/mail-archive/squid-users/200811/0130.html
 * http://www.mail-archive.com/squid-users@squid-cache.org/msg63741.html
 * 
http://old.nabble.com/WCCP-load-balancing-and-TPROXY-fully-transparent-interception-td20299256.html

Thanks for your help.

-RichardW.


[squid-users] Configure a transparent proxy to pass through non-http port 80 traffic [Was: How to handle the error: Unsupported method 'BitTorrent']

2010-01-05 Thread Richard Wall
On Wed, Dec 3, 2008 at 4:44 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 Khemara Lyn wrote:
 How can I handle this error in Cache log:
 parseHttpRequest: Unsupported method 'BitTorrent'
 Is it serious or does it affect Squid performance?

 It's only a waste of TCP connections, if you have available fd and socket
 capacity on the system you can safely ignore it.

Sorry to re-open this ancient thread, but I'm interested in another
aspect of this.

I am working for a small ISP customer who have an obligation not to
block *any* traffic.
We have set up Squid in transparent mode and we are using a Brocade
smart switch / router to redirect port 80 traffic to the Squid
machine.

It all works, but we notice an immediate and significant drop in
outbound traffic which we are fairly sure is caused by Squid blocking
non-http port 80 traffic.

Can anyone suggest a way to pass only http traffic to Squid, and let
other non-http traffic pass through?

Look forward to hearing your suggestions.

-RichardW.


Re: [squid-users] AVG Updates not being cached with squid 2.6?

2009-12-08 Thread Richard Chapman

Amos Jeffries wrote:

Richard Chapman wrote:
I have a more or less default configured squid 2.6 proxy on a centos 
5.4 server.
I have configured AVG 9 network edition (Virus scanner) to use the 
squid proxy (as opposed to the avg proxy) - and it appears to be 
doing so.
However - checking the usage logs - it appears that different client 
machines download identical update (.bin) files within a few hours of 
each other - but do not appear to get a cache hit..


Can anyone suggest why these update files are not being cached (or at 
least not getting cache hits) - and whether there is anything I can 
do to encourage them to be cached?


I have checked the Squid FAQ and searched the archive - and found a 
similar request from 2005. The suggestion there was that the AVG 
server might be using the


Pragma: no-cache HTTP header


To be sure take the URL that should be a HIT and enter it at redbot.org.
The whole problems should be easily visible there.



And that at that time there was no suggestion on how to override 
this. Can anyone confirm that this is the reason for the apparently 
unnecessary cache misses - and if so - is there anything new in squid 
to allow us to override?




Squid which do not ignore Pragma: no-cache treat it the same as 
Cache-Control: no-cache


Amos

Thanks Amos
I tried redbot as you suggested - and this is a url which I think SHOULD 
have been a hit - though it is hard to be sure. The stats show that NONE 
of the avg updates come from cache - and I assume they should all have 
similar headers... Hopefully someone more knowledgeable than I can make 
more sense of this;


http://redbot.org/?uri=http://aa.avg.com/softw/90/update/u7iavi2551u2550qp.bin


It looks to me that it should be cacheable - but the only suspicious 
thing is the statement I get when I hover over the "This response is
stale" text. I think it says that it has a Freshness lifetime of 0 - which
sounds like it will always be considered stale. I'm not sure why they 
would do this as each update has a unique file name - and could 
therefore be considered fresh indefinitely couldn't it?


Can anyone confirm my interpretation - and/or suggest a way to treat the 
updates more rationally?


Richard.



A cache considers a HTTP response stale when its age (here, 0) is equal 
to or exceeds its freshness lifetime (in this case, 0)




Re: [squid-users] AVG Updates not being cached with squid 2.6?

2009-12-08 Thread Richard Chapman

Amos Jeffries wrote:

Richard Chapman wrote:

Amos Jeffries wrote:

Richard Chapman wrote:
I have a more or less default configured squid 2.6 proxy on a 
centos 5.4 server.
I have configured AVG 9 network edition (Virus scanner) to use the 
squid proxy (as opposed to the avg proxy) - and it appears to be 
doing so.
However - checking the usage logs - it appears that different 
client machines download identical update (.bin) files within a few 
hours of each other - but do not appear to get a cache hit..


Can anyone suggest why these update files are not being cached (or 
at least not getting cache hits) - and whether there is anything I 
can do to encourage them to be cached?


I have checked the Squid FAQ and searched the archive - and found a 
similar request from 2005. The suggestion there was that the AVG 
server might be using the


Pragma: no-cache HTTP header


To be sure take the URL that should be a HIT and enter it at 
redbot.org.

The whole problems should be easily visible there.



And that at that time there was no suggestion on how to override 
this. Can anyone confirm that this is the reason for the apparently 
unnecessary cache misses - and if so - is there anything new in 
squid to allow us to override?




Squid which do not ignore Pragma: no-cache treat it the same as 
Cache-Control: no-cache


Amos

Thanks Amos
I tried redbot as you suggested - and this is a url which I think 
SHOULD have been a hit - though it is hard to be sure. The stats show 
that NONE of the avg updates come from cache - and I assume they 
should all have similar headers... Hopefully someone more 
knowledgeable than I can make more sense of this;


http://redbot.org/?uri=http://aa.avg.com/softw/90/update/u7iavi2551u2550qp.bin 




It looks to me that it should be cacheable - but the only suspicious 
thing is the statement I get when I hover over the This response is 
stale. I think it says that it has a Freshnes lifetime of 0 - 
which sounds like it will always be considered stale. I'm not sure 
why they would do this as each update has a unique file name - and 
could therefore be considered fresh indefinitely couldn't it?


Can anyone confirm my interpretation - and/or suggest a way to treat 
the updates more rationally?


Richard.



A cache considers a HTTP response stale when its age (here, 0) is 
equal to or exceeds its freshness lifetime (in this case, 0)




Hmm, something strange there.

AFAIK the object looks like with the L-M header + the Date should have 
both non-zero freshness (Date - LM) and an age (now - Date).


Amos


Thanks Amos
Can you shed ANY light on what might be going on here? I presume you are 
seeing the same odd freshness and age numbers as I am.

Can you enlighten me on what LM means or stands for?
Are you suggesting that the AVG website is doing something odd to 
concoct the strange age and freshness numbers?
Where could an inconsistency arise to give zero for both these numbers - 
as seen by redbot?


I am very new to this stuff - and need help interpreting the data...:-) 
This all started from the observation that all AVG updates seem to be 
cache misses - even when the same update file is downloaded several 
times in a few hours.



Richard.








[squid-users] AVG Updates not being cached with squid 2.6?

2009-12-07 Thread Richard Chapman
I have a more or less default configured squid 2.6 proxy on a centos 5.4 
server.
I have configured AVG 9 network edition (Virus scanner) to use the squid 
proxy (as opposed to the avg proxy) - and it appears to be doing so.
However - checking the usage logs - it appears that different client 
machines download identical update (.bin) files within a few hours of 
each other - but do not appear to get a cache hit..


Can anyone suggest why these update files are not being cached (or at 
least not getting cache hits) - and whether there is anything I can do 
to encourage them to be cached?


I have checked the Squid FAQ and searched the archive - and found a 
similar request from 2005. The suggestion there was that the AVG server 
might be using the


Pragma: no-cache HTTP header

And that at that time there was no suggestion on how to override this. 
Can anyone confirm that this is the reason for the apparently 
unnecessary cache misses - and if so - is there anything new in squid 
to allow us to override?
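
(The sort of directive I was hoping exists; purely my guess at the syntax,
based on refresh_pattern options I have seen mentioned, and some of them may
only be available in Squid 2.7 or later:)

refresh_pattern -i aa\.avg\.com/.*\.bin$ 10080 100% 43200 override-expire ignore-reload ignore-no-cache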



Thanks
Richard.



Re: [squid-users] Issue with multiple Squids / TPROXY / WCCP and DNS load balancing

2009-08-14 Thread Richard Wall
On Fri, Aug 14, 2009 at 5:15 PM, Matus UHLAR -
fantomasuh...@fantomas.sk wrote:
snip
 Squid ignores the original destination of connection in order to provide
 correct content. Many servers provide the same (their own) content
 independently on what Host: you ask for, so the cache would get filled with
 incorrect content. That's one of downfalls when using inctercepting proxy.

Hi Matus,

Thanks for taking the time to reply.

I don't understand the point you made above. In any case, I asked the
same question on #squid and Henrik Nordstrom pointed out that we can
work around this problem by using the source IP address for
redirection, rather than the destination address. We've been using
this successfully for the last couple of days.
The only downside is that our two Squid caches now contain duplicate objects.

We're going to see if we can modify the Squid source to use the
requested destination IP address rather than looking it up again. I'll
post here if we make any progress.

snip
 avoid using the proxy or explain why do you see different host than squid
 does...

It's caused by DNS host records with multiple IP addresses (commonly
used for load balancing, e.g. on the Akamai network).
When the client looks up the host, it gets one IP address, and when
Squid then does a DNS lookup shortly afterwards it receives a
different IP address.

rich...@largo:~$ dig assets.macys.com A +short
assets.macys.com.edgesuite.net.
a796.b.akamai.net.
80.157.169.145
80.157.169.195
rich...@largo:~$ dig assets.macys.com A +short
assets.macys.com.edgesuite.net.
a796.b.akamai.net.
80.157.169.195
80.157.169.145

This causes the Cisco router to redirect the response to the other
Squid server which just drops it.

-RichardW.


[squid-users] Issue with multiple Squids / TPROXY / WCCP and DNS load balancing

2009-08-12 Thread Richard Wall
Hello,

I'm seeing exactly the same problem as was described in this email (in 2004):
 * http://www.squid-cache.org/mail-archive/squid-dev/200407/0008.html

The http client does a DNS lookup, then Squid repeats the DNS lookup
and receives a different host IP - which means that Squid makes its
proxy request to a different IP than the original request. Our theory
is that the WCCP router redirects the response to the wrong Squid.

Anyone else seen the same problem? Anyone got a work around? In that
mailing list thread, Henrik had suggested that someone needs to submit
a patch to allow Squid to connect to the same IP address as the client
connection, but I can't tell if anyone has done this yet.

Look forward to hearing from anyone with answers.

Squid 2.7 STABLE6

-RichardW.


[squid-users] build web page within squid?

2009-03-20 Thread Richard Langly
I'm looking to build a web page from within the proxy server that will
be viewed at a later time, and also further pass on the request and
expect to get a page back which will also be viewed at a later time.
This all happens from the proxy server.

1) receive URL from user request and store/build a web page to be
viewed at a later time (from within this proxy).
2) pass on URL to web server and store the response html page in this
proxy to also be viewed at a later time.

Is this something I can do with squid? If so, can I get a brief
description on how to do this?


Re: [squid-users] WCCP+Squid not working. Could use some of your experience.

2009-01-24 Thread Richard Wall
2009/1/23 Anthony DeMatteis adematt...@commspeed.net:
 Greetings Group,
 I'm new to this group...
 We're an ISP trying to control some of our bandwidth issues.  I've never
 set up squid before.  I have a working squid server, working very well,
 including caching youtube vids.  However, this is via setting up the
 proxy settings in the browser and pointing to the caching server's ip
 address:3128 or using acl's on the router and redirecting traffic to the
 caching server.  I would like to set it up transparently using wccp.  I
 would rather go the wccp route to allow traffic to continue to flow in
 the event the caching server(s) die.  I understand wccpv2 provides this
 feature.

 My problem is getting the gre tunnel to work. I've been googling for two
 days.  I've used info from pages 143-149 of Squid: The Definitive Guide.
 No luck getting wccp tunnel working.  I've managed to get this:

Hello Tony,

The following commands are useful for debugging WCCP problems.

 * CISCO IOS
debug ip wccp events
debug ip wccp packets

These two commands will make the router log useful WCCP debug info.

 * squid.conf
debug_options 80,3

This will log detailed WCCP info to the squid cache.log.
See 
http://squid.cvs.sourceforge.net/viewvc/squid/squid/doc/debug-sections.txt?view=markup

 * Use tcpdump on the physical and gre interfaces to watch packets
arriving from the Cisco router.
 * Configure your firewall to log dropped packets, and search for any
dropped packets originating from the Cisco router.
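
As a rough illustration (the interface names and router IP below are placeholders,
not taken from your setup), the tcpdump side of that might be:

 # WCCP control traffic (UDP 2048) plus GRE-encapsulated packets (IP proto 47)
 # arriving from the router
 tcpdump -n -i eth0 '(udp port 2048 or ip proto 47) and host 192.168.0.1'
 # once the GRE tunnel interface is up, watch the decapsulated HTTP requests
 tcpdump -n -i gre1 'tcp port 80'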

Perhaps this recent blog will be helpful:
http://fakrul.wordpress.com/2008/12/11/transparent-squid-proxy-server-with-wccp-support/

You should be aware that if you are deploying a standard transparent
Squid proxy, all your web traffic will appear to come from the IP
address of the Squid box. For an ISP this can cause problems for users
if they are accessing sites (eg download sites) that limit concurrent
access based on client source IP.

To get round this, there is a patch for Squid called TPROXY which
allows it to spoof the source IP address of the original user. This is
well supported on Linux, but I'm not sure about FreeBSD (see
http://cacheboy.blogspot.com/2009/01/freebsd-tproxy-works.html)

Hope that helps.

-RichardW.
-- 
Richard Wall
ApplianSys Ltd
http://www.appliansys.com


Re: [squid-users] Problems forcing mandatory proxy use.

2009-01-14 Thread Richard Chapman

Amos Jeffries wrote:


Squid itself won't. But the box underneath it will have firewall and 
routing control you can use (assuming it's a non-windows box). 


Amos


Hi Amos

I can see this is true where the squid box is also the internet router - 
but is it also true if the squid box is not the internet router - and is 
not dual homed? Currently the squid box has only one network connection 
- and the router function is handled by the netgear box. How much do I 
need to change to eliminate NAT altogether - and go to a mandatory proxy 
solution?


If it can be done without making the squid box dual homed - presumably I 
need to tell the clients (via dhcp) the squid box is the default router 
for the network - so they direct all proxy and non proxy traffic through 
it. Presumably it would need some fancy routing to then forward internet 
traffic correctly.


If the linux box can't do the job without a second network interface - 
the other option would be to get a more functional internet router box. 
I am gradually forming the view that it is easier to maintain networks 
with lots of purpose built single function boxes - than single 
multifunction boxes which do everything - but then break everything when 
they need upgrades or otherwise fail... Of course this is only my 
opinion...:-)


Many thanks for your advice.

Richard.








Thanks again

Richard.



matthew jones wrote:
Is there any need to use NAT? You could simply forward all data to 
the squid by setting its IP address as the DMZ server in the WAN 
setup page, which would send all incoming DSL data to that IP address.


If it's a tight network you're after, you should think about having the 
squid dual homed: one interface connecting to the router/firewall and the 
other to your network, thus forcing all data to pass through the 
proxy. Also, the proxy may be proxying data on more ports than 80, 
such as https on port 4** etc.


I have a DG834G too but haven't tried the above as I use NAT and not 
a proxy at home.


matt.

Richard Chapman wrote:
I have squid operating well on a small NAT network. Currently - all 
clients select automatic proxy detection and that is all working 
correctly with proxy.pac script on the http server.
I wanted to ensure that the proxy is handling ALL http traffic ALL 
of the time - so I can be confident of the statistics generated by 
sarg (squid analysis and report generator).


I thought this should be easy. I have a netgear DG834G router acting 
as the internet DSL connection. I added 2 outgoing firewall rules 
in the Dg834G:

1) Allow all outgoing traffic from the squid server's local IP.
2) Block port 80 traffic from all (other) local ip addresses.

When I apply these 2 rules - the network experiences erratic 
internet access. Some sites work some of the time - but not 
everything works correctly. I have tried disabling the above rules 
- then enabling just rule 1 - and even then the network behaves 
erratically. Note that rule 1 is an allow rule. But as soon as I 
disable both rules - everything returns to normal.


This seems very weird to me. Can anyone suggest some subtlety I am 
overlooking?
I have checked the netgear knowledge base and there are no glaring 
bugs reported related to this behaviour. I have updated to the 
latest netgear firmware. I can only assume the DG834 is not 
behaving as expected. Can anyone see another explanation?


In case it is relevant - the linux box is performing squid, dns, 
dhcp, http and lots of other stuff but the dg834 is performing NAT 
(and only NAT).


Thanks

Richard.















[squid-users] Problems forcing mandatory proxy use.

2009-01-13 Thread Richard Chapman
I have squid operating well on a small NAT network. Currently - all 
clients select automatic proxy detection and that is all working 
correctly with proxy.pac script on the http server.
I wanted to ensure that the proxy is handling ALL http traffic ALL of 
the time - so I can be confident of the statistics generated by sarg 
(squid analysis and report generator).


I thought this should be easy. I have a netgear DG834G router acting as 
the internet DSL connection. I added 2 outgoing firewall rules in the 
Dg834G:

1) Allow all outgoing traffic from the squid server's local IP.
2) Block port 80 traffic from all (other) local ip addresses.

When I apply these 2 rules - the network experiences erratic internet 
access. Some sites work some of the time - but not everything works 
correctly. I have tried disabling the above rules - then enabling just 
rule 1 - and even then the network behaves erratically. Note that rule 1 
is an allow rule. But as soon as I disable both rules - everything 
returns to normal.


This seems very weird to me. Can anyone suggest some subtlety I am 
overlooking?
I have checked the netgear knowledge base and there are no glaring bugs 
reported related to this behaviour. I have updated to the latest netgear 
firmware. I can only assume the DG834 is not behaving as expected. Can 
anyone see another explanation?


In case it is relevant - the linux box is performing squid, dns, dhcp, 
http and lots of other stuff but the dg834 is performing NAT (and only NAT).


Thanks

Richard.






Re: [squid-users] Problems forcing mandatory proxy use.

2009-01-13 Thread Richard Chapman

Thanks Matthew

The network has evolved from NAT without squid to NAT+squid - so I 
hadn't thought about eliminating NAT altogether. Do you have much 
experience with squid-only networks? Will squid handle all the other 
stuff well, e.g. IM, bittorrent, etc.? Indeed - can these applications be 
persuaded to direct traffic through the proxy anyway? Are there any 
other considerations before turning off NAT?


Thanks again

Richard.



matthew jones wrote:
is there any need to use NAT. you could simply forward all data to the 
squid by setting it's IP address as the DMZ server in the WAN setup 
page. which would send all incomming DSL data to the IP address.


if it's a tight network your after you should think about have the 
squid dual homed, one connecting to the router/firewall and the other 
to your network, thus forcing all data to pass through the proxy. also 
the proxy may be proxying data on more ports than 80 such as https on 
port 4** ect.


i have a GD834g too but havent tried the above as i use NAT and not a 
proxy at home.


matt.

Richard Chapman wrote:
I have squid operating well on a small NAT network. Currently - all 
clients select automatic proxy detection and that is all working 
correctly with proxy.pac script on the http server.
I wanted to ensure that the proxy is handling ALL http traffic ALL of 
the time - so I can be confident of the statistics generated by sarg 
(squid analysis and report generator).


I thought this should be easy. I have a netgear DG834G router acting 
as the internet DSL connection. I added 2 outgoing firewall rules in 
the Dg834G:

1) Allow all outgoing traffic from the squid server's local IP.
2) Block port 80 traffic from all (other) local ip addresses.

When I apply these 2 rules - the network experiences erratic internet 
access. Some sites work some of the time - but not everything works 
correctly. I have tried disabling the above rules - then enabling 
just rule 1 - and even then the network behaves erratically. Note 
that rule 1 is an allow rule. But as soon as I disable both rules - 
everything returns to normal.


This seems very weird to me. Can anyone suggest some subtlety I am 
overlooking?
I have checked the netgear knowledge base and there are no glaring 
bugs reported related to this behaviour. I have updated to the latest 
netgear firmware. I can only assume the DG834 is not behaving as 
expected. Can anyone see another explanation?


In case it is relevant - the linux box is performing squid, dns, 
dhcp, http and lots of other stuff but the dg834 is performing NAT 
(and only NAT).


Thanks

Richard.










[squid-users] Automatic stale sarg report clean-up.

2008-12-30 Thread Richard Chapman

Hi - I'm hoping someone here can help me with a sarg question.

I installed sarg about 6 months ago - and have just noticed that my sarg 
reports seem to be building up indefinitely. Is there some sarg 
configuration I have missed to delete stale reports? All of my sarg 
folders (daily, weekly, monthly, reports) seem to have data there from 5 
months ago.
Presumably - I can manually delete old reports - or write a script (if I 
knew how). But I would have thought sarg would probably do this 
automatically when configured properly.
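
Presumably a daily cron job along these lines would also work - the report path 
and the 60-day retention below are just guesses about a typical setup:

 # remove sarg report directories that have not been modified in 60 days
 find /var/www/sarg-reports -mindepth 1 -maxdepth 1 -type d -mtime +60 -exec rm -rf {} +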


Thanks.
Richard.



[squid-users] Automatic stale sarg report clean-up

2008-12-30 Thread Richard Chapman
I apologise if this is a double posting - but I received a delayed 
message from my mail server - and haven't received a digest for several 
days.


Hi - I'm hoping someone here can help me with a sarg question.

I installed sarg about 6 months ago - and have just noticed that my sarg 
reports seem to be building up indefinitely. Is there some sarg 
configuration I have missed to delete stale reports? All of my sarg 
folders (daily, weekly, monthly, reports) seem to have data there from 5 
months ago.
Presumably - I can manually delete old reports - or write a script (if I 
knew how). But I would have thought sarg would probably do this 
automatically when configured properly.


Thanks.
Richard.


[squid-users] Re: cached MS updates !

2008-12-20 Thread Richard Neville
Henrik Nordstrom henrik at henriknordstrom.net writes:

 
 On mån, 2008-06-16 at 08:16 -0700, pokeman wrote:
  thanks henrik for you reply 
  any other way to save bandwidth windows updates almost use 30% of my entire
  bandwidth 
 
 Microsoft has a update server you can run locally. But you need to have
 some control over the clients to make them use this instead of windows
 update...
 
 Or you could look into sponsoring some Squid developer to add caching of
 partial objects with the goal of allowing http access to windows update
 to be cached. (the versions using https can not be done much about...)
 
 Regards
 Henrik
 

Hi, just thought I'd let you know: I am currently using an IPCop firewall,
and one of the plugins (the reason I went with IPCop) is an 
update accelerator plugin that stores Windows, Apple, Symantec, Avast and
Linux updates on the
firewall's drive.

I actually found this site because I was trying to get help, and the developer
of the plugin seems cranky at the best of times.

Basically the system works: updates that a PC doesn't have get loaded from the
firewall rather than the internet. As for the updates themselves, it seems that MS
use multiple servers to store each update. When I update an SP2 XP Pro
system, it sees SP3 and downloads an 850 meg file; that's fine, it must be
multilanguage versions that it's downloading.

The problem is that when I update another SP2 system, it starts downloading the

850 megs again, as it's got the same file name but is coming from a different
 server.

Would anyone here know how to rectify this?

I'm a 100% noob at linux but I have managed to get it up and running without too
much issue.

here's the plugin website for those interested.

http://update-accelerator.advproxy.net/

any help would be appreciated :)
planetx...@gmail.com




Re: [squid-users] WCCP load balancing and TPROXY fully transparent interception

2008-12-19 Thread Richard Wall
2008/12/19 Bin Liu binliu.l...@gmail.com:
 I'm interested to know if you have managed to get this working
 reliably for your ISP environment?
 Not yet. We are still doing some test in our own environment.

Bin,

Thanks for your reply. We're working on a Squid configuration for ISP
customers and it would be really interesting to hear about any
potential problems with this configuration.

 How far have you gone to make Squid truly transparent eg
  * suppressing the Squid headers, error messages etc.
  * Is there any way to configure Squid / Cisco to give SYN_ACK,
 connection refused and ICMP host unreachable responses rather than
 Squid error messages?
  * Can you force Squid to make its request from the same source port
 as the client.

 You mean totally transparent and the clients don't even know the
 existence of squid by any means? It seems a little bit difficult...

Yeah, I agree. It's going to be impossible to totally hide Squid, but
we're going to do as much as we can.

  * If someone uses port 80 for a protocol other than http, can Squid
 reject the redirected traffic in such a way that it is passed through
 directly instead?

 WCCPv2 can support this feature by Packet Return Method. (See
 http://www.cisco.com/en/US/docs/ios/12_0t/12_0t3/feature/guide/wccp.html,
 search Web Cache Packet Return. Also mentioned in your url:
 http://bazaar.launchpad.net/~squid3/squid/3.1/annotate/9363?file_id=draftwilsonwccpv212o-20070417152110-s6qkuxj8uabe-1)
 But Henrik said squid hadn't implemented this feature yet. (See
 http://www.squid-cache.org/mail-archive/squid-users/200811/0130.html)

Thanks for the links.

-RichardW.
-- 
Richard Wall
Support Engineer
ApplianSys Ltd
http://www.appliansys.com

(t) +44 (0)24 7643 0094
(f) +44 (0)87 0762 7063
(e) richard.w...@appliansys.com


[squid-users] WCCP v2 connection dropped intermittently: HERE_I_AM not received by router

2008-12-19 Thread Richard Wall
: Called
2008/12/18 17:11:46| wccp2HereIam: sending to service id 0
2008/12/18 17:11:46| wccp2_update_md5_security: called
2008/12/18 17:11:46| Sending HereIam packet size 160

2008/12/18 17:11:56| wccp2HereIam: Called
2008/12/18 17:11:56| wccp2HereIam: sending to service id 0
2008/12/18 17:11:56| wccp2_update_md5_security: called
2008/12/18 17:11:56| Sending HereIam packet size 160

2008/12/18 17:12:06| wccp2HereIam: Called
2008/12/18 17:12:06| wccp2HereIam: sending to service id 0
2008/12/18 17:12:06| wccp2_update_md5_security: called
2008/12/18 17:12:06| Sending HereIam packet size 160

2008/12/18 17:12:16| wccp2HereIam: Called
2008/12/18 17:12:16| wccp2HereIam: sending to service id 0
2008/12/18 17:12:16| wccp2_update_md5_security: called
2008/12/18 17:12:16| Sending HereIam packet size 160

2008/12/18 17:12:26| wccp2HereIam: Called
2008/12/18 17:12:26| wccp2HereIam: sending to service id 0
2008/12/18 17:12:26| wccp2_update_md5_security: called
2008/12/18 17:12:26| Sending HereIam packet size 160

2008/12/18 17:12:36| wccp2HereIam: Called
2008/12/18 17:12:36| wccp2HereIam: sending to service id 0
2008/12/18 17:12:36| wccp2_update_md5_security: called
2008/12/18 17:12:36| Sending HereIam packet size 160
2008/12/18 17:12:36| wccp2HandleUdp: Called.
2008/12/18 17:12:36| Incoming WCCPv2 I_SEE_YOU length 128.
2008/12/18 17:12:36| Complete packet received
2008/12/18 17:12:36| Incoming WCCP2_I_SEE_YOU Received ID old=1502 new=1504.
2008/12/18 17:12:36| Cleaning out cache list
2008/12/18 17:12:36| Adding ourselves as the only cache
2008/12/18 17:12:36| Change detected - queueing up new assignment
}}}

-- 
Richard Wall
ApplianSys Ltd
http://www.appliansys.com


Re: [squid-users] WCCP load balancing and TPROXY fully transparent interception

2008-12-18 Thread Richard Wall
2008/11/5 Bin Liu binliu.l...@gmail.com:
snip
 I have 2 squid servers, squid A and squid B, both implented TPROXY and
 connected to the same Cisco router:

             Internet
                 |
                 |
 squid A ---- Router ---- squid B
                 |
                 |
             Customers

 Here squid A wants to send a HTTP request to original destination
 server, the routers just forwards this packet, it's OK; but when the
 response packet from the original server returns in, how does the
 router redirect that packet? Redirect it to squid A or squid B? As
 there's no connection table in router memory or any mark in the
 packet, how can the router determine that this response packet should
 be forwarded to squid A?

 squid A -- (request to original server) -- router -- original server
 -- (response) -- router -- squid A or B?

Hi Bin,

You may already have got the answer to this, but I have recently been
setting this up and had the same question. Seems the key is in the
Redirection with Hash Assignment:

 * 
http://bazaar.launchpad.net/~squid3/squid/3.1/annotate/9363?file_id=draftwilsonwccpv212o-20070417152110-s6qkuxj8uabe-1
(LINE 549)

In the config example that Henrik linked to (above) the outbound
requests are redirected to a particular Squid, based on a hash of
their destination IP and the returning responses are redirected based
on their source ip. This way the response is redirected to the Squid
that made the spoofed request.

Clever in theory; and in my minimal test setup it does seem to work.
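
For anyone finding this in the archive, the squid.conf side of that hash
assignment might look roughly like the sketch below. This is only an
illustration of the scheme described above - the router IP, service numbers
and flags are not copied from Henrik's example:

 wccp2_router 192.168.0.1
 # forwarding / return method 1 = GRE encapsulation
 wccp2_forwarding_method 1
 wccp2_return_method 1
 # service 80: client -> server traffic, assigned to a cache by a hash of
 # the destination (origin server) IP
 wccp2_service dynamic 80
 wccp2_service_info 80 protocol=tcp flags=dst_ip_hash priority=240 ports=80
 # service 90: server -> client responses, assigned by a hash of the source
 # (origin server) IP, so they land on the same cache that sent the request
 wccp2_service dynamic 90
 wccp2_service_info 90 protocol=tcp flags=src_ip_hash,ports_source priority=240 ports=80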

I'm interested to know if you have managed to get this working
reliably for your ISP environment? Has it caused an particular
problems for your customers?
How far have you gone to make Squid truly transparent eg
 * suppressing the Squid headers, error messages etc.
 * Is there any way to configure Squid / Cisco to give SYN_ACK,
connection refused and ICMP host unreachable responses rather than
Squid error messages?
 * Can you force Squid to make its request from the same source port
as the client.
 * If someone uses port 80 for a protocol other than http, can Squid
reject the redirected traffic in such a way that it is passed through
directly instead?

Look forward to any information you can provide.

-RichardW.
-- 
Richard Wall
ApplianSys Ltd
http://www.appliansys.com


[squid-users] Questions on research into using digest auth against MS AD2003

2008-10-31 Thread Richard


* What specific piece of the puzzle on the client side of the NTLM or 
kerberos authentication methods keeps the authentication traffic secure 
by sending only the credential hashes?  (Am I correct in 
understanding that it is the ntlm_auth program that speaks to the NTLM 
client and negotiates for the credential hashes to be exchanged?)


* When squid is configured to use *digest* authentication, I understand 
that the traffic between the squid server and the LDAP server is 
encrypted.  Is the traffic between the browser and the squid server 
also encrypted when using Digest?   If so, how does the client browser 
know to encrypt/hash the communications for the return trip to the server?


   **Short of loading a program on a client machine, are there any 
proxy servers out there that can prompt for credentials while keeping 
secure the communication between the workstation and the proxy server?
   ** What is it that has to happen to ensure that the authentication 
traffic from any browser to any proxy server is encrypted?



* Considering the fact that I'm trying to use digest_ldap_auth against 
an MS LDAP/AD 2003 server that should be storing several precomputed 
digest hash versions of H(username:realm:password) that permit these 
hashes to be authenticated without requiring reversible encryption to be 
enabled on the account. (see technet article@: 
http://preview.tinyurl.com/5bxacn)


A) Is it even possible to use digest_ldap_auth to do digest authenticate 
against an Active Directory 2003's LDAP database server?


B) What would be a working example command line of a successful 
digest_ldap_auth test against an AD 2003 server? (In my attempts, I have 
been unable to identify the proper digest hash containing LDAP (-A) 
attribute to use in a lookup.  I *THINK* this is because MS AD2003 
expects the digest hash request to come via a SASL mechanism...which 
begs the question...is there a  SASL mechanism that works with 
squid+AD2003?)


* What would help me identify the necessary pieces of the puzzle so that 
I could configure such a successful lookup?



For what it's worth and/or if you have any questions about how or why 
I'm doing things, I'm keeping all my notes about this project on one of 
my wiki pages. (http://preview.tinyurl.com/6fgyf8)  maybe it will help 
others.


--
Richard


[squid-users] enable/disable logging entire request url

2008-10-26 Thread Richard Hubbell
Was unable to sort this out. 

I want to log the entire request string into the access.log but I get only up 
to the first ?.  

I've looked at the logformat and just don't see how to do this.
Does this require a special compile flag? Or is this simpler?
Thanks.


  


Re: [squid-users] enable/disable logging entire request url

2008-10-26 Thread Richard Hubbell



--- On Sun, 10/26/08, Robin Clayton [EMAIL PROTECTED] wrote:

 From: Robin Clayton [EMAIL PROTECTED]
 Subject: Re: [squid-users] enable/disable logging entire request url
 To: squid-users@squid-cache.org
 Date: Sunday, October 26, 2008, 2:02 PM
 # TAG: strip_query_terms 
 # By default, Squid strips query terms from requested URLs
 before 
 # logging. This protects your user's privacy. 
 # 
 #Default: 
 # strip_query_terms on 
 
 strip_query_terms off

Thanks, did not see this.

 
 
 On Sun, 2008-10-26 at 13:28 -0700, Richard Hubbell wrote:
  Was unable to sort this out. 
  
  I want to log the entire request string into the
 access.log but I get only up to the first ?.  
  
  I've looked at the logformat and just don't
 see how to do this.
  Does this require a special compile flag? Or is this
 simpler?
  Thanks.
  
  

 


  


Re: [squid-users] Update Accelerator, Squid and Windows Update Caching

2008-10-16 Thread Richard Wall
On Fri, Oct 10, 2008 at 12:30 PM, Amos Jeffries [EMAIL PROTECTED] wrote:
 Richard Wall wrote:

 Hi,

 I've been reading through the archive looking for information about
 squid 2.6 and windows update caching. The FAQ mentions problems with
 range offsets but it's not really clear which versions of Squid this
 applies to.

 All versions. The FAQ was the result of my experiments mid last year, with
 some tweaks made early this year since Vista came out.
 We haven't done any intensive experiments with Vista yet.

Hi Amos,

I'm still investigating Windows Update caching (with 2.6.STABLE17/18)

First of all, I have been doing some tests to try and find out the
problem with Squid and Content-Range requests.
 * I watch the squid logs as a vista box does its automatic updates
and I can see that *some* of its requests use ranges. (so far I have
only seen these when it requests .psf files...some of which seem to be
very large files...so the range request makes sense) See:
http://groups.google.hr/group/microsoft.public.windowsupdate/browse_thread/thread/af5db07dc2db9713

# zcat squid.log.192.168.1.119.2008-10-16.gz | grep multipart/byteranges |
  awk '{print $7}' | uniq | while read URL; do
  echo $URL; wget --spider $URL 2>&1 | grep Length; done
http://www.download.windowsupdate.com/msdownload/update/software/secu/2008/10/windows6.0-kb956390-x86_2d03c4b14b5bad88510380c14acd2bffc26436a7.psf
Length: 91,225,471 (87M) [application/octet-stream]
http://www.download.windowsupdate.com/msdownload/update/software/secu/2008/05/windows6.0-kb950762-x86_0cc2989b92bc968e143e1eeae8817f08907fd715.psf
Length: 834,868 (815K) [application/octet-stream]
http://www.download.windowsupdate.com/msdownload/update/software/secu/2008/03/windows6.0-kb948590-x86_ed27763e42ee2e20e676d9f6aa13f18b84d7bc96.psf
Length: 755,232 (738K) [application/octet-stream]
http://www.download.windowsupdate.com/msdownload/update/software/crup/2008/09/windows6.0-kb955302-x86_1e40fd3ae8f95723dbd76f837ba096adb25f3829.psf
Length: 7,003,447 (6.7M) [application/octet-stream]
...

 * I have found that curl can make range requests so I've been using
it to test how Squid behaves, and it seems to do the right thing, e.g.
 - First ask for a range : The correct range is returned X-Cache: MISS
 - Repeat the range request :  The correct range is returned X-Cache: MISS
 - Request the entire file: The entire file is correctly returned X-Cache: MISS
 - Repeat the request: X-Cache: HIT
 - Repeat the previous range request: X-Cache: HIT
 - Request a different range: X-Cache: HIT

curl --range 1000-1002 --header "Pragma:" -v -x http://127.0.0.1:3128 \
  http://www.download.windowsupdate.com/msdownload/update/software/secu/2008/05/windows6.0-kb950762-x86_0cc2989b92bc968e143e1eeae8817f08907fd715.psf \
  > /dev/null

Looking back through the archive I find this conversation from 2005:
http://www.squid-cache.org/mail-archive/squid-users/200504/0669.html

...but the behaviour there sounds like a result of setting:
range_offset_limit -1
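
In case anyone wants to experiment with that, a minimal squid.conf sketch
(the size values are illustrative, not measured from these updates) would be:

 # fetch the whole object even when the client only asks for a range,
 # so the large .psf files can be stored and later served from cache
 range_offset_limit -1
 # allow objects big enough to hold the larger update files
 maximum_object_size 200 MB
 # keep fetching a cacheable object even if the client aborts
 quick_abort_min -1 KB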

Seems to me that Squid should do a good job of Windows Update caching.
There is another thread discussing how to override MS update cache
control headers:
http://www.squid-cache.org/mail-archive/squid-users/200508/0596.html

but I don't see anything evil in the server response headers
today. I guess the client may be sending no-cache headers...I'll
double check that later.

Is there some other case that I'm missing?

 I'm going to experiment, but if anyone has any positive or
 negative experience of Squid and windows update caching, I'd be really
 interested to hear from you.

 In case Squid cannot do windows update caching by its self, I'm also
 looking at integrating Update Accelerator
 (http://update-accelerator.advproxy.net/) script with standard squid
 2.6 and wondered if anyone else had any experience of this.
 The update accelerator script is just a perl wrapper around wget which
 is configured as a Squid url_rewrite_program. It's not clear to me
 what this script is doing that Squid wouldn't do by its self.

 Strange indeed.

I got update accelerator working with Squid but I'm still not
convinced that it's necessary (see above).

-RichardW.


[squid-users] Update Accelerator, Squid and Windows Update Caching

2008-10-10 Thread Richard Wall
Hi,

I've been reading through the archive looking for information about
squid 2.6 and windows update caching. The FAQ mentions problems with
range offsets but it's not really clear which versions of Squid this
applies to. I'm going to experiment, but if anyone has any positive or
negative experience of Squid and windows update caching, I'd be really
interested to hear from you.

In case Squid cannot do windows update caching by itself, I'm also
looking at integrating the Update Accelerator
(http://update-accelerator.advproxy.net/) script with standard squid
2.6 and wondered if anyone else had any experience of this.
The update accelerator script is just a perl wrapper around wget which
is configured as a Squid url_rewrite_program. It's not clear to me
what this script is doing that Squid wouldn't do by itself.

Thanks.

-RichardW.


Re: [squid-users] Update Accelerator, Squid and Windows Update Caching

2008-10-10 Thread Richard Wall
On Fri, Oct 10, 2008 at 12:30 PM, Amos Jeffries [EMAIL PROTECTED] wrote:
 Richard Wall wrote:
 I've been reading through the archive looking for information about
 squid 2.6 and windows update caching. The FAQ mentions problems with
 range offsets but it's not really clear which versions of Squid this
 applies to.

 All versions. The FAQ was the result of my experiments mid last year, with
 some tweaks made early this year since Vista came out.
 We haven't done any intensive experiments with Vista yet.

Thanks Amos,

Okay. Well I'm planning on testing with Vista updates so I'll try and
report my findings here.

snip

 In case Squid cannot do windows update caching by its self, I'm also
 looking at integrating Update Accelerator
 (http://update-accelerator.advproxy.net/) script with standard squid
 2.6 and wondered if anyone else had any experience of this.
 The update accelerator script is just a perl wrapper around wget which
 is configured as a Squid url_rewrite_program. It's not clear to me
 what this script is doing that Squid wouldn't do by its self.

 Strange indeed.

I'll let  you know how it goes :)

-RichardW.


[squid-users] Multiple concurrent clients accessing the same site cannot connect

2008-10-09 Thread Richard Onanian
I have Squid Version 2.5.STABLE14 running on Red Hat Enterprise Linux AS 
release 4 as a basic caching web proxy with a couple filters. It's being used 
for student computer labs so that the students cannot access MySpace or 
Facebook from the labs (and to provide caching, of course).

When multiple students try to access the same site (for example, an instructor 
tells a whole class to go to geographynetwork.com, or a class takes an online 
exam using WebCT), many/most end up with their connection timing out. Taking 
them off Squid fixes the problem. This happens with as few as 8 clients (I 
haven't tested less than 8) trying to access the same site.

How can I troubleshoot this problem? I searched the mailing list archives and 
google, but failed to find an answer.


[squid-users] Restarts -- which option changes will and won't get picked up?

2008-07-23 Thread Richard Hubbell
Hello Squid world,

I was trying to understand what changes would get picked up when doing a 
restart and I couldn't find an answer.

From the man page these are the -k options:


reconfigure | rotate | shutdown | interrupt | kill | debug | check | parse

I am wondering if there's documentation describing which changes will get 
picked up by the different -k options?

Particularly I'm trying to understand memory and disk cache size settings.
If those are changed do they require a stop and start of Squid?
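
For context, the workflow I've been assuming looks roughly like this (a sketch of
the commands only, not an authoritative list of which directives need a full restart):

 squid -k parse        # parse squid.conf and report any errors
 squid -k reconfigure  # ask the running squid to re-read squid.conf
 # adding or resizing cache_dir entries generally also means initialising
 # the new directories and restarting, e.g.:
 #   squid -z
 #   service squid restart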


  


Re: [squid-users] How to send/create ICP queries manually?

2008-07-18 Thread Richard Hubbell
--- On Thu, 7/17/08, Henrik Nordstrom [EMAIL PROTECTED] wrote:

 From: Henrik Nordstrom [EMAIL PROTECTED]
 Subject: Re: [squid-users] How to send/create ICP queries manually?
 To: [EMAIL PROTECTED]
 Cc: squid-users@squid-cache.org
 Date: Thursday, July 17, 2008, 7:10 PM
 On tor, 2008-07-17 at 09:55 -0700, Richard Hubbell wrote:
 
  Is there a way to craft/construct and send/submit ICP
 queries as if I were a sibling?  Maybe using Netcat or the
 like?
 
 There is an Perl ICP library in CPAN, with a sample sibling
 answering
 ICP requests...
 
 Regards
 Henrik

Nice, thanks.


  


[squid-users] How to send/create ICP queries manually?

2008-07-17 Thread Richard Hubbell
Hello Squid world,

Is there a way to craft/construct and send/submit ICP queries as if I were a 
sibling?  Maybe using Netcat or the like?


  


Re: [squid-users] squid in ISP

2008-07-17 Thread Richard Hubbell
--- On Thu, 7/17/08, Amos Jeffries [EMAIL PROTECTED] wrote:

 From: Amos Jeffries [EMAIL PROTECTED]
 Subject: Re: [squid-users] squid in ISP
 To: [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED], Rhino [EMAIL PROTECTED], 
 squid-users@squid-cache.org
 Date: Thursday, July 17, 2008, 6:26 AM
 Richard Hubbell wrote:
  
  
  --- On Fri, 7/11/08, Rhino [EMAIL PROTECTED]
 wrote:
  
  From: Rhino [EMAIL PROTECTED]
  Subject: Re: [squid-users] squid in ISP
  To: [EMAIL PROTECTED]
  Cc: squid-users@squid-cache.org
  Date: Friday, July 11, 2008, 6:56 AM
  Siu-kin Lam wrote:
  Dear all 
 
  Any experience using squid as caching in ISP
  environment ?
 
  thanks 
  SK 
 
 

 
 
  I'm sure there's much larger ISPs out
 there and
  been using it much longer;
  just passing along our info.
  We're a small ISP serving around 10k
 dialup,dsl,cable
  modem and MAN subs
  via a dual-homed to different ISP BGP WAN.
  We loaded squid on a quad core linux box with
 around 1.2Tb
  disk
  capacity and 32Gb RAM, using a Cisco 4948 switch
 and WCCP2
  to transparently redirect to Squid.
  There were some major hurdles along the way
  mostly getting the 4948 to pass the L2 WCCP
 traffic -
  2 IOS bugs and a year in the process) but once
 that worked
  and we got our IPTABLES set up properly,
 transparent
  redirection
  has been working quite well.
  Some tweaks needed to our Squid config, but with
 the help
  of this list
- particularly Henrik and Amos' posts - at
 this point
  we're very
  encouraged by the performance and bandwidth
 savings
  we're seeing on the
  system which has only been truly active for around
 3 weeks
  now.
  Again, we're a pretty small shop - so when our
 old
  NetApp Netcache
  was no longer able to adequately handle the load,
 we needed
  an
  effective, minimal-cost solution which this is
  demonstrating to be.
  Hope that helps.
  -Ryan
  
  
  Thanks for sharing this. We're doing about 75
 requests/sec on a quad-core Xeon with 16GB.  Still trying
 out some different configs.
  I have cache_mem set to 2GB and it's working well
 so far.
  
  It's not even worked up a sweat and has plenty of
 room for more work.
 
 I'll bet it isn't.
 75 is not even close to half what squid was doing in Y2K.
 :)
 
 If you want to stress it we'd be glad of the results.

I can do this.  Is there a set of tests that people would like to see?
I think we should concoct a test plan, albeit a very basic one.

Is there a squidbench?  squib for short? Maybe it could start by either 
fetching a digest or using one as input?  Or just use a list of urls from an 
access.log?

If we don't have a level SUT (system-under-test playing field) we can at least 
have a level load-generator.  The fewer variables the better.
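
As a crude starting point (the log path and concurrency below are picked
arbitrarily), we could already replay real traffic with nothing more than curl:

 # replay the first 1000 URLs from an access.log through the proxy,
 # 8 fetches in parallel, discarding the response bodies
 awk '{print $7}' /var/log/squid/access.log | head -n 1000 | \
   xargs -P 8 -n 1 curl -s -o /dev/null -x http://127.0.0.1:3128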

 
 Amos
 -- 
 Please use Squid 2.7.STABLE3 or 3.0.STABLE7

Are there redhat packages for 2.7?



  


Re: [squid-users] Squid in the Enterpise

2008-07-17 Thread Richard Hubbell
--- On Thu, 7/17/08, Robert V. Coward [EMAIL PROTECTED] wrote:

 From: Robert V. Coward [EMAIL PROTECTED]
 Subject: [squid-users] Squid in the Enterpise
 To: squid-users@squid-cache.org
 Date: Thursday, July 17, 2008, 11:51 AM
 I am running into the standard Open Source fear
 at my local site. Can anyone name some major companies that
 use Squid. We are talking enterprise or ISP here. We
 currently have about 100,000 users with heavy streaming
 video use. Some of the management are afraid Squid will not
 be able to handle the load.
 Our planned deployment box is a 8-way, 16GB ram, 1TB (6
 disks I think) server which will be running RedHat
 Enterprise Linux.

Didn't know there was a 'standard Open Source fear'.  In what context is the 
fear exhibited?  Fear of low quality?  

100,000 users is more carrier-scale than enterprise.  I don't know if there's 
any hard min-max but I thought a ballpark for enterprise was 10,000.   

Squid can handle the load if the system is architected properly. 
You will need some model of current traffic to even start the conversation.

You cannot buy a pc, then shoe-horn the traffic into it. That will lead to 
disappointment.  I suggest hiring an expert since it sounds like you need one.  
There are experts here that can help you.


  


Re: [squid-users] How to send/create ICP queries manually?

2008-07-17 Thread Richard Hubbell
Yes, I'm talking to myself. (^:

But I found two links to two perl programs for doing what I want to do.
Let's face it searching is good and all but sometimes search results are 
optimized in ways that aren't relevant to us.

I ran across these by chance really.

Here's to hoping someone else might benefit.

Both can be used to test ICP

http://icp.ircache.net/tools/icp-test.pl
http://icp.ircache.net/tools/udp-banger.pl




--- On Thu, 7/17/08, Richard Hubbell [EMAIL PROTECTED] wrote:

 From: Richard Hubbell [EMAIL PROTECTED]
 Subject: [squid-users] How to send/create ICP queries manually?
 To: squid-users@squid-cache.org
 Date: Thursday, July 17, 2008, 9:55 AM
 Hello Squid world,
 
 Is there a way to craft/construct and send/submit ICP
 queries as if I were a sibling?  Maybe using Netcat or the
 like?


  


Re: [Fwd: Re: [squid-users] Recommended cache_dir config for large system]

2008-07-16 Thread Richard Hubbell
 THanks much for the quick response, Henrik.
 Filesystem for cache disks currently configured for
 reiserfs with
 notail/noatime opts.
 I did not have the fd amounts set, nor ip_local_port_range.
 My cache_dirs have each disk mounted as partition, i.e.
 disk1=/squid1
 disk2=/squid2; would your suggestion
 be then to halve each disk and partition each as cache_dir?
 (i.e, go
 from squid1-4 to squid1-8 across the 4 disks)
 Also have a 5th disk of equal size that has to be used for
 OS, just fyi
 - so these 4 are totally dedicated to Squid.
 thanks again, appreciate your input.
 -Ryan

Just curious why reiserfs?  I don't think it's supported any longer.



  


[squid-users] getting icp siblings to talk to each other -- squid/2.6.STABLE20

2008-07-16 Thread Richard Hubbell
Hello Squid world,

I can't get siblings to talk to each other.  Maybe there's some reason that 
they don't need to that I'm missing?  Or I have it misconfigured.


I have two squid caches. host1 has 75GB cache and host2 has a 40GB cache.

in host1 I have:

cache_peer host2 sibling  3130 proxy-only

and on host2:

cache_peer host1 sibling  3130 proxy-only


I must be missing something obvious, the setup seems simple enough.

Other settings in both:
acl local src 192.168.0.0/255.255.0.0
icp_port 3130
icp_query_timeout 500
icp_access allow local
icp_access deny all

I don't find anything in access_log i.e. grep CACHE_DIGEST_HIT access_log

Any ideas?




  


Re: [squid-users] Squid requirements

2008-07-16 Thread Richard Hubbell
 I already know that there is not a recipe for squid.  But I
 wonder if anyone 
 knows an official document that lists squid requirements.
 
 Regards,
 
 LD

In addition to the other poster's recommendations, you might want to consider 
having two servers for redundancy.  

But really without detail it's hard to know what to recommend.


  


Re: [Fwd: Re: [squid-users] Recommended cache_dir config for large system]

2008-07-16 Thread Richard Hubbell



--- On Wed, 7/16/08, Rhino [EMAIL PROTECTED] wrote:

 From: Rhino [EMAIL PROTECTED]
 Subject: Re: [Fwd: Re: [squid-users] Recommended cache_dir config for large 
 system]
 To: [EMAIL PROTECTED]
 Cc: squid-users@squid-cache.org
 Date: Wednesday, July 16, 2008, 2:53 PM
 Richard Hubbell wrote:
  THanks much for the quick response, Henrik.
  Filesystem for cache disks currently configured
 for
  reiserfs with
  notail/noatime opts.
  I did not have the fd amounts set, nor
 ip_local_port_range.
  My cache_dirs have each disk mounted as partition,
 i.e.
  disk1=/squid1
  disk2=/squid2; would your suggestion
  be then to halve each disk and partition each as
 cache_dir?
  (i.e, go
  from squid1-4 to squid1-8 across the 4 disks)
  Also have a 5th disk of equal size that has to be
 used for
  OS, just fyi
  - so these 4 are totally dedicated to Squid.
  thanks again, appreciate your input.
  -Ryan
  
  Just curious why reiserfs?  I don't think it's
 supported any longer.
  
  
  

  
  
 size/speed considerations when we set the system up
 originally.  It's 
 worked well so far.
 cheers

Okay, do you mean better performance with larger number of files?
I'm using ext3 that's why I'm asking.



  


Re: [squid-users] squid in ISP

2008-07-16 Thread Richard Hubbell



--- On Fri, 7/11/08, Rhino [EMAIL PROTECTED] wrote:

 From: Rhino [EMAIL PROTECTED]
 Subject: Re: [squid-users] squid in ISP
 To: [EMAIL PROTECTED]
 Cc: squid-users@squid-cache.org
 Date: Friday, July 11, 2008, 6:56 AM
 Siu-kin Lam wrote:
  Dear all 
  
  Any experience using squid as caching in ISP
 environment ?
  
  
  thanks 
  SK 
  
  

  
  
 
 I'm sure there's much larger ISPs out there and
 been using it much longer;
 just passing along our info.
 We're a small ISP serving around 10k dialup,dsl,cable
 modem and MAN subs
 via a dual-homed to different ISP BGP WAN.
 We loaded squid on a quad core linux box with around 1.2Tb
 disk
 capacity and 32Gb RAM, using a Cisco 4948 switch and WCCP2
 to transparently redirect to Squid.
 There were some major hurdles along the way
 mostly getting the 4948 to pass the L2 WCCP traffic -
 2 IOS bugs and a year in the process) but once that worked
 and we got our IPTABLES set up properly, transparent
 redirection
 has been working quite well.
 Some tweaks needed to our Squid config, but with the help
 of this list
   - particularly Henrik and Amos' posts - at this point
 we're very
 encouraged by the performance and bandwidth savings
 we're seeing on the
 system which has only been truly active for around 3 weeks
 now.
 Again, we're a pretty small shop - so when our old
 NetApp Netcache
 was no longer able to adequately handle the load, we needed
 an
 effective, minimal-cost solution which this is
 demonstrating to be.
 Hope that helps.
 -Ryan


Thanks for sharing this. We're doing about 75 requests/sec on a quad-core Xeon 
with 16GB.  Still trying out some different configs.
I have cache_mem set to 2GB and it's working well so far.

It's not even worked up a sweat and has plenty of room for more work.




  


Re: [squid-users] Squid requirements

2008-07-16 Thread Richard Hubbell
--- On Wed, 7/16/08, Adrian Chadd [EMAIL PROTECTED] wrote:

 From: Adrian Chadd [EMAIL PROTECTED]
 Subject: Re: [squid-users] Squid requirements
 To: Chris Robertson [EMAIL PROTECTED]
 Cc: Squid Users squid-users@squid-cache.org
 Date: Wednesday, July 16, 2008, 9:28 AM
 What we're really missing is a bunch of hardware
 x, config y, testing
 z, results a, b, c. TMF used to have some stuff up
 for older hardware
 but there's just nothing recent to use as a measuring
 stick..
 

The problem is that there's so much disparate technology out there.
multi-core cpus, all kinds of different memory, all kinds of different disk 
technologies,  different filesystems,  different OS, different kernels, and on 
and on.  It's hard to get useful measuring sticks.

I still think it's a useful pursuit.  But I think that the reasons above make 
people less inclined to do it.

spec.org tries to level the field, if someone concocted a level field and made 
it easy for people to do, then we'd see more results.


  


Re: [Fwd: Re: [squid-users] Recommended cache_dir config for large system]

2008-07-16 Thread Richard Hubbell



--- On Wed, 7/16/08, Rhino [EMAIL PROTECTED] wrote:

 From: Rhino [EMAIL PROTECTED]
 Subject: Re: [Fwd: Re: [squid-users] Recommended cache_dir config for large 
 system]
 To: [EMAIL PROTECTED]
 Cc: squid-users@squid-cache.org
 Date: Wednesday, July 16, 2008, 6:55 PM
 Richard Hubbell wrote:
 
  --- On Wed, 7/16/08, Rhino [EMAIL PROTECTED]
 wrote:
 

  From: Rhino [EMAIL PROTECTED]
  Subject: Re: [Fwd: Re: [squid-users] Recommended
 cache_dir config for large system]
  To: [EMAIL PROTECTED]
  Cc: squid-users@squid-cache.org
  Date: Wednesday, July 16, 2008, 2:53 PM
  Richard Hubbell wrote:
  
  THanks much for the quick response,
 Henrik.
  Filesystem for cache disks currently
 configured
  
  for
  
  reiserfs with
  notail/noatime opts.
  I did not have the fd amounts set, nor
  
  ip_local_port_range.
  
  My cache_dirs have each disk mounted as
 partition,
  
  i.e.
  
  disk1=/squid1
  disk2=/squid2; would your suggestion
  be then to halve each disk and partition
 each as
  
  cache_dir?
  
  (i.e, go
  from squid1-4 to squid1-8 across the 4
 disks)
  Also have a 5th disk of equal size that
 has to be
  
  used for
  
  OS, just fyi
  - so these 4 are totally dedicated to
 Squid.
  thanks again, appreciate your input.
  -Ryan
  
  Just curious why reiserfs?  I don't think
 it's

  supported any longer.
  
 

 
 

  size/speed considerations when we set the system
 up
  originally.  It's 
  worked well so far.
  cheers
  
 
  Okay, do you mean better performance with larger
 number of files?
  I'm using ext3 that's why I'm asking.
 
 
 

 
 

 we went with the reiserfs out of googling the choices and
 most impressed 
 with what we found on it as opposed to ext3.
 we're also using sata drives, if that helps; just hoped
 for the best 
 combination once the server was in production.
 I'm not suggesting this is in fact the optimum
 combination - I'm no 
 linux or filesystem expert by any means.
 Just trying to keep our customers happy and my paycheck
 coming :)
 -Ryan

Yep, no problem.  We're using ext3 on redhat since that's what's supported. 
Always good to hear/learn from others' experience.

We've seen some issues with processes spending a little too much time in D 
state (as reported by ps), a.k.a. uninterruptible sleep.  Not clear what is 
causing that yet.  Seems like a deadlock somewhere.  Not in squid. We saw it in 
apache and others. 


  


Re: [squid-users] getting icp siblings to talk to each other -- squid/2.6.STABLE20

2008-07-16 Thread Richard Hubbell
--- On Wed, 7/16/08, Amos Jeffries [EMAIL PROTECTED] wrote:

 From: Amos Jeffries [EMAIL PROTECTED]
 Subject: Re: [squid-users] getting icp siblings to talk to each other --  
 squid/2.6.STABLE20
 To: [EMAIL PROTECTED]
 Cc: squid-users@squid-cache.org
 Date: Wednesday, July 16, 2008, 8:32 PM
  Hello Squid world,
 
  I can't get siblings to talk to each other.  Maybe
 there's some reason
  that they don't need to that I'm missing?  Or
 I have it misconfigured.
 
 
  I have two squid caches. host1 has 75GB cache and
 host2 has a 40GB cache.
 
  in host1 I have:
 
  cache_peer host2 sibling  3130 proxy-only
 
  and on host2:
 
  cache_peer host1 sibling  3130 proxy-only
 
 
  I must be missing something obvious, the setup seems
 simple enough.
 
  Other settings in both:
  acl local src 192.168.0.0/255.255.0.0
 
 You can make that /16 for easier reading.
 
  icp_port 3130
  icp_query_timeout 500
  icp_access allow local
  icp_access deny all
 
  I don't find anything in access_log i.e. grep
 CACHE_DIGEST_HIT access_log
 
  Any ideas?
 
 
 In order rough of relevance and easy checking...
 
  - It may be a SIBLING_HIT or ICP_HIT in the log.


Nothing like those in the logs either.


 
  - That the squid are actually listening for traffic on
 http_port .


Yes, it's listening on that port.


 
  - Check that there is no firewall between them on port
 3130.


Nope.  Although it may be something UDP related, just not sure what yet.
Are there any debugging flags to be enabled?
I did some sniffing and didn't see any UDP packets destined for port 3130.
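
One thing I'm planning to try (section numbers taken from the debug-sections.txt
file linked earlier in this archive, so treat them as approximate):

 # section 12 covers ICP, section 15 covers neighbor/peer selection
 debug_options ALL,1 12,3 15,3

and then watch cache.log while requesting an object the sibling already holds.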



 
  - check that forwarded_for and Via headers are turned ON
 (the defaults).


If they're defaults then they are on.


 
  - Maybe the timeout is too short. Try it at default
 (unset) first before
 tweaking.

I hadn't set the timeout, it was something I tried since it wasn't working at 
the default.


 
 Amos


  


Re: Re: [squid-users] Re: Help with sarg usage

2008-06-19 Thread Richard Chapman

Hi Chris - and many many thanks...

See also below.

/path/to/sarg -l /var/log/squid/access.log -l 
/var/log/squid/access.log.1 -l /var/log/squid/access.log.2.gz


will parse the data from access.log, access.log.1 and (after 
gunzipping) access.log.2.gz, and will give you one report for the time 
covered by these log files.


Great. That is exactly what I wanted - but haven't found it in any 
documentation so far.



What i want to do is produce a report that shows me:
Which clients visited which sites between (say) 18:00 and 19:00. 
Maybe this is possible with sarg - but if so I haven't found it.


You have two choices with SARG.

The first is the simplest, but might not meet your needs.  Make sure 
in your sarg.conf file the report_type directive includes 
users_sites and date_time and/or site_user_time_date.  The first 
will give you a listing of the sites each username/IP accessed.  The 
second, will show bandwidth usage per hour for each username/IP 
(linked from the main report).  The third will give you a listing of 
the times an individual accessed a specific website (linked from the 
users_sites report).


I have checked that these report_types are enabled - and can find most 
of what you are talking about except the one I really want. The thing 
you describe as the second above is exactly what I want - but:
The date-time reports I get don't seem to be exactly what you describe. 
If I go to the main page, then click on the most recent report I get a 
list of client IP addresses.
If I click on the Date-Time icon near the left of each row - I get an 
array with hours across and dates down. Each cell contains what appears 
to be an elapsed time. I don't really understand what this time means 
- but it doesn't appear to be the Bandwidth used during that hour.
Am I in the wrong place - or am I misunderstanding something? Either way 
- what do these times mean?


Your second option is to create a report just covering a specific time 
period.  The following hourly cron job does this:


#!/bin/bash

#Get current date
TODAY=$(date +%d/%m/%Y)
LASTHOUR=$(date +%H -d "1 hour ago")
HOUR=$(date +%H)

/bin/touch /path/to/web/squid-reports/images
# Be careful with this next line, as it recursively removes any files
# modified more than 30 minutes ago in the path given!
/usr/bin/find /path/to/web/squid-reports -type d -cmin +30 -exec rm -rf {} \;

/usr/bin/sarg -d $TODAY-$TODAY -t $LASTHOUR:00-$HOUR:00

Many thanks for this. If I can't find what I'm looking for in the first 
suggestion above - then I will give this a try. The earlier option 
should work with my historical data - whereas - this cron job will only 
provide reports on new data. I did try using sarg with the -t option at 
one stage - and it didn't seem to follow the -t directive - but perhaps 
there was a problem with my syntax. I might have left the :00 off the times.
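
For reference, a one-off run along the lines of the cron job above would
presumably look like this (the dates, times and log path are purely illustrative):

 /usr/bin/sarg -l /var/log/squid/access.log -d 19/06/2008-19/06/2008 -t 18:00-19:00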


Thanks Chris

Richard.



Re: [squid-users] custom logformats and squid-2.5.STABLE14-1.4E

2008-06-18 Thread Richard Hubbell



--- On Wed, 6/18/08, Henrik Nordstrom [EMAIL PROTECTED] wrote:

 From: Henrik Nordstrom [EMAIL PROTECTED]
 Subject: Re: [squid-users] custom logformats and squid-2.5.STABLE14-1.4E
 To: [EMAIL PROTECTED]
 Cc: squid-users@squid-cache.org
 Date: Wednesday, June 18, 2008, 6:20 AM
 On tis, 2008-06-17 at 22:24 -0700, Richard Hubbell wrote:
  I looked around but seem to not have much luck finding
 or figuring out
  exactly what logging features work in this version of
 Squid. Is there a version matrix somewhere listing versions
 and the features they support?
 
 The squid.conf documentation for each release is a good
 start. You have
 it as squid.conf.default after installing the specific
 version.
 
 You also have that documentation on the web 
 http://www.squid-cache.org/Versions/v2/2.5/cfgman/

That one gave me a 404, but this one works:
 http://www.squid-cache.org/Versions/v2/2.5/

Is the cfgman new to 2.6 and above?  Or is there another link?

 
 but may differ slightly from the specific vendor release
 you are using.

When one runs Redhat one gets used to this, sort of

Thank you again Henrik

 
 
 Regards
 Henrik


  


Re: [squid-users] Where are the ircache.net cgi for creating graphs?

2008-06-18 Thread Richard Hubbell
--- On Wed, 6/18/08, Henrik Nordstrom [EMAIL PROTECTED] wrote:

 From: Henrik Nordstrom [EMAIL PROTECTED]
 Subject: Re: [squid-users] Where are the ircache.net cgi for creating graphs?
 
 Then I would suggest one of the MRTG templates for
 Squid.Quick, easy,
 and only needs cron + any kind of web server capable of
 serving static
 files.

Okay will have a look, thanks again.

While on the topic has anyone written a squid-top?
i.e. something akin to the top command but for squid?


 
 Regards
 Henrik


  


[squid-users] Squid problem:. Some addresses work OK - but most hang indefinitely.

2008-06-17 Thread Richard Chapman

Hi

I installed Squid 2.6 on Centos 5.1 X86_64 system about a week ago - and 
it worked fine for the first few days.
I have set all clients to use the Squid Proxy for all external (non 
private 192.168.0.0/24) ip addresses. The only squid config settings I 
changed from default were  ACL changes to allow proxy access to everyone 
on the local network.


I now have the following situation on this client:
1) I can browse local addresses fine (as they are direct)
2) I can browse a few non local addresses fine. I can refresh my ISPs 
usage data OK for example, and it is clearly refreshing the live data 
via squid.
3) If I browse most arbitrary web addresses - the firefox tab hangs 
indefinitely with the little circular animation on the tab.

4) If I revert to direct access (Non proxy) - everything works fine.
5) I have deleted the entire cache - and maybe that helped for a bit - 
but the problem returned very soon after.
6) I have checked CPU and memory usage on the centos machine - and 
everything looks fine - almost nothing happening.
7) I did make some router changes to try to prevent direct access from 
clients - but I have since reverted these changes because the router did 
not behave as expected. It is now back to the starting point - but the 
problem persists.
8) I have recently installed sarg, Calamaris and Webalizer - but I doubt 
these could be responsible for the problem.


Can anyone suggest what might be going on here, and if so - how to fix it?
If not - can anyone advise diagnostic steps?

Regards

richard.




Re: [squid-users] Squid problem:. Some addresses work OK - but most hang indefinitely.

2008-06-17 Thread Richard Chapman

Amos Jeffries wrote:

Hi

I installed Squid 2.6 on Centos 5.1 X86_64 system about a week ago - and
it worked fine for the first few days.
I have set all clients to use the Squid Proxy for all external (non
private 192.168.0.0/24) ip addresses. The only squid config settings I
changed from default were  ACL changes to allow proxy access to everyone
on the local network.

I now have the following situation on this client:
1) I can browse local addresses fine (as they are direct)
2) I can browse a few non local addresses fine. I can refresh my ISPs
usage data OK for example, and it is clearly refreshing the live data
via squid.
3) If I browse most arbitrary web addresses - the firefox tab hangs
indefinitely with the little circular animation on the tab.
4) If I revert to direct access (Non proxy) - everything works fine.
5) I have deleted the entire cache - and maybe that helped for a bit -
but the problem returned very soon after.
6) I have checked CPU and memory usage on the centos machine - and
everything looks fine - almost nothing happening.
7) I did make some router changes to try to prevent direct access from
clients - but I have since reverted these changes because the router did
not behave as expected. It is now back to the starting point - but the
problem persists.
8) I have recently installed sarg, Calamaris and Webalizer - but I doubt
these could be responsible for the problem.

Can anyone suggest what might be going on here, and if so - how to fix it?
If not - can anyone advise diagnostic steps?



It sounds like you are hitting one of the interception catch-22s. Only you
don't mention interception.
Do you have any FW entries specifically for the proxy box?

What exactly do your ACL and access lines look like now?

  
Thanks Amos. Interestingly - whatever the problem was seems to time out 
after several hours. All is working OK this morning. I don't 
intentionally have any interceptions.


I did try to set up firewall rules for the proxy box - but my 
firewall/router is a Netgear DG834G - and there seems to be something 
wrong with its outgoing rules implementation. Specifically - I set up 
the following rules - in order.


always allow any port outgoing from proxy IP.
always disallow any port outgoing from all IPs.

When I set this up - I had very erratic behaviour. Some web pages came 
up slowly - and some not at all. There were also problems with fetchmail 
(running on the same box as squid) downloading mail. I attributed this 
to a problem in the router. When I removed these rules - things reverted 
to normal - but then a bit later - I had this apparent proxy problem.


Given the fact that the problem appears to time out after several hours 
- I am wondering if there is a DNS issue. I have seen some references to 
SQUID caching DNS info - but I don't know much about it. If there was a 
temporary DNS problem at some time - would squid (or something else) 
cache the DNS miss - and continue returning the miss after the 
problem was resolved?
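
For reference: Squid caches DNS answers internally, including failed
lookups, for a configurable time. A minimal squid.conf sketch - the values
below are only illustrative, not necessarily the defaults for your version:

# how long Squid keeps successful DNS lookups
positive_dns_ttl 6 hours
# keep failed lookups only briefly so a transient DNS problem clears quickly
negative_dns_ttl 1 minute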


While the problem was occurring - I did test the DNS server (bind) 
running on the SQUID box - and it was able to resolve the addresses 
which were failing via squid. If there is a DNS problem - I don't think 
it is in the bind server.


Thanks again for your advice.

Richard.




Amos

  




Re: [squid-users] Where are the ircache.net cgi for creating graphs?

2008-06-17 Thread Richard Hubbell



--- On Fri, 6/13/08, Henrik Nordstrom [EMAIL PROTECTED] wrote:

 From: Henrik Nordstrom [EMAIL PROTECTED]
 Subject: Re: [squid-users] Where are the ircache.net cgi for creating graphs?
 To: [EMAIL PROTECTED]
 Cc: squid-users@squid-cache.org
 Date: Friday, June 13, 2008, 3:54 PM
 On fre, 2008-06-13 at 14:53 -0700, Richard Hubbell wrote:
  Hello squid world,
  
  I was looking for the scripts that create the graphs
 on ircache.net, I found everything but the cgi scripts. 
 Does anyone know where to get them?
  
  Or maybe there's another package that's
 preferred to make use of RRD for Squid?
 
 I have a small script at
 http://www.henriknordstrom.net/code/
 
 There is also Squid packages to most of the system
 monitoring tools
 
munin
cacti
monit
Ganglia
nagios
Zenoss
 
 Any other SNMP capable monitoring tools is also able to
 graph Squid
 without too much effort
 
Zabbix
 
 
 The above is just a small collection, there is plenty of
 these tools
 around..   
 
 
 My experience after making the script mentioned above is
 that you are
 most likely better off using a real monitoring package than
 something
 special tailored just for Squid.

In general I agree. But in this case I wanted something quick and just for 
squid. And I didn't want to use cgi, just something that runs out of cron. 
I'm just trying to understand Squid and what it's doing.

Have looked at mod_*cache and now looking at Squid.  My main goal is to 
conserve bandwidth. 

Thank you Henrik for your help.

 
 Regards
 Henrik


  


[squid-users] custom logformats and squid-2.5.STABLE14-1.4E

2008-06-17 Thread Richard Hubbell
I looked around but seem to not have much luck finding or figuring out
exactly what logging features work in this version of Squid. Is there a version 
matrix somewhere listing versions and the features they support?

Or maybe I'm just looking in the wrong places?  Is it just me or is google 
becoming less and less useful?  With all this SEO stuff it seems like 
irrelevant stuff percolates up to the top of google results. 


  


[squid-users] Re: Help with sarg usage

2008-06-16 Thread Richard Chapman

Hi Indunil

Yes. That did help quite a bit. I had actually seen the link before from 
a google search - but hadn't studied it properly. It certainly explains 
my forbidden problem. I think I found a missing step though. The step 
by step instructions do not tell you to make sarg.cron executable - so 
the cron job wouldn't run until I corrected this.

It still leaves me with a few queries though:
1) I would like to be able to see what traffic flowed between (say) 
13.00 on 13 June 2008 and 14:00 on 13th June 2008.
I haven't found a way to do this yet. Is it possible with Sarg? I am 
trying to find out what causes occasional large traffic bursts.
2) Is there some more complete documentation somewhere? The man page 
refers to documentation in the GNU info format. Where could I find this?



Regards and thanks

Richard.



Indunil Jayasooriya wrote:

Hi Richard,

I hope this may help you.

http://www.squid-cache.org/mail-archive/squid-users/200805/0172.html



On Sun, Jun 15, 2008 at 12:33 PM, Richard Chapman
[EMAIL PROTECTED] wrote:
  

Hi

I have sarg installed and working - but have not found much documentation
other than the man pages - which are fairly brief.

Can anyone help me with these issues with sarg.

1) It appears to only use the current squid log by default - and the
documentation doesn't seem to tell me how I can get it to read several squid
log files.
2) When I first installed it - and told it to place reports in
/var/www/html/sarg - I could browse the report fine as
http://192.168.0.201/sarg - but for some reason - I now get a Forbidden
error.
3) How do you access the reports at the default location: /var/www/sarg?
4) If I specify the -t HH-HH option to restrict the report to a time range
- it doesn't seem to behave as I would expect. I get far less traffic
reported than I would expect over the period. I can't find any way to check
that it is reporting all the relevant traffic.

Thanks

Richard.







  




[squid-users] Re: Help with sarg usage

2008-06-16 Thread Richard Chapman

Hi again Indunil - and many thanks.

I think I have sarg working - at least to the point of generating 
regular report files as suggested in your step by step instructions. I 
had already set execute bit on sarg.cron. I set the cron job to run 
every 30 minutes - and it appears to be doing just that. However - I 
can't find a way to filter the data within a time window. The cron job 
seems to create a new report every 30 minutes - and delete the old one. 
The new report covers the full period (presumably) covered by the 
current squid log file - until the time the report is generated. I can't 
find a way to narrow down the time window of the report. I also can't 
find a way to make it cover further back than the current squid log file 
- though the daily and weekly reports will presumably go further back.


I seem to have something in my /etc/cron.daily which rotates the squid 
access.log file at 4.02 am every morning. I'm not sure whether this is 
normal (logrotate) - or whether another web statistics package (possibly 
webalizer) is doing this, and thereby interfering with sarg. From what I 
can see - logrotate is set to rotate weekly rather than daily. Does 
anyone have any thoughts on this?


Thanks again.

Richard.







Indunil Jayasooriya wrote:

Yes. That did help quite a bit. I had actually seen the link before from a
google search - but hadn't studied it properly. It certainly explains my
forbidden problem. I think I found a missing step though. The step by step
instructions do not tell you to make sarg.cron executable - so the cron job
wouldn't run until I corrected this.



Yeah, sarg.cron should be executable. So, please make it executable in the following way:

chmod 755 /var/www/sarg/sarg.cron

  

It still leaves me with a few queries though:
1) I would like to be able to see what traffic flowed between (say) 13.00 on
13 June 2008 and 14:00 on 13th June 2008.



I think that depends on the crontab. My crontab in the step-by-step doc
runs every 5 minutes, so it has data for every 5-minute interval.

For example, the crontab executes every 5 minutes - say at 13:00 hrs,
13:05 hrs, 13:10 hrs and so on.

  

I haven't found a way to do this yet. Is it possible with Sarg? I am trying
to find out what causes occasional large traffic bursts.



First, please get Sarg working. Then, begin to analyse.

  

2) Is there some more complete documentation somewhere? The man page refers
to documentation in the GNU info format. Where could I find this?



Difficult to say.


  




Re: [squid-users] Anyone know where surftrackr has gone?

2008-06-16 Thread Richard Chapman
Thanks guys. That server was down for several days - but I found it 
again yesterday. I am working on installing it - but it has a lot of 
dependencies...


Regards

Richard.

F-D. Cami wrote:

Hi Richard,

Does http://download.surftrackr.org/ help ?

Cheers

F


On Mon, 16 Jun 2008 09:30:11 +0800
Richard Chapman [EMAIL PROTECTED] wrote:

  

Hi Henrik

I tried contacting him via an email address from the freshmeat website - 
and the email bounced. Any ideas where to find other contact details?


Regards

Richard.


Henrik Nordstrom wrote:


On sön, 2008-06-15 at 14:57 +0800, Richard Chapman wrote:

  
  
I have been looking into various analysis and reporting tools for squid 
logs. I have tried both webalizer and sarg - and both have strengths - 
and (for me) weaknesses. I have seen references to squidalyser which I 
think became Surftrackr - and I have found references to new releases 
as recently as march 2008 - but all the download links I try seem to be 
dead.



Have you tried contacting the author?

Regards
Henrik
  
  


  




[squid-users] Re: Help with sarg usage

2008-06-16 Thread Richard Chapman

Hi again Indunil

Indunil Jayasooriya wrote:

I can't find a way to narrow down the time window
  

of the report. I also can't find a way to make it cover further back than
the current squid log file -


What do you mean? Do you need older data than the current report?
No. I don't think so. The point I am trying to make here is that the 
sarg report starts at the beginning of the current squid log. I can't 
get any useful data in the current report from earlier squid logs. Can 
sarg take multiple input log files? I guess the problem is exacerbated 
by the fact that my squid log is rotating every morning. I probably need 
to find out why that is happening.


In any case - the report seems to cover the whole period of the log. 
Even though the report is generates every 30 minutes - it appears to 
cover the whole squid log period. Is there any way to restrict the 
report to a short period (say 1 hour) of within the coverage of the 
squid log.



So, please try the sarg.cron below:

[EMAIL PROTECTED] ~]# cat /var/www/sarg/sarg.cron
#!/bin/sh
/usr/bin/sarg
cd /var/www/sarg/reports
rm -rf *.11

Please try it out.


  
I'm not sure this is my problem. The latest report appears to cover the 
whole period of the squid log in any case. What I really need is to 
either slow down the squid log file rotation - or get sarg to read 
multiple squid log files.
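
For reference, one low-tech workaround - a sketch only, assuming the rotated
log sits next to access.log as access.log.1 and that this sarg build accepts
the -l (input log) and -o (output directory) options - is to concatenate the
logs and report on the merged file:

cat /var/log/squid/access.log.1 /var/log/squid/access.log > /tmp/access-merged.log
sarg -l /tmp/access-merged.log -o /var/www/html/sarg/merged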



I seem to have something in my /etc/cron.daily which rotates the squid
access.log file at 4.02 am every morning.



May I have a look at that?

  
I am really not sure about this now. The log files did appear to be 
rotating daily - but not consistently. I'll have to keep an eye on it. 
My logrotate is set for weekly rotation of squids logs - but I haven't 
really been running it for a week yet. There appears to have been 
several rotations - at 4.02 am on various days - but not a week apart - 
and not every day. It seems odd - but I will have to keep an eye on it.


I would love you to look into it - but I'm not sure what I can send you...

And also, have a look at these scripts.
I have not used these. Please try and see. If you succeed, please post to
this mailing list, so that others can benefit from them.

http://sarg.sourceforge.net/enhancements.php

http://sarg.sourceforge.net/zhaolei.txt

  
I did have a look at these - but I don't think they are doing what I am 
looking for. What I want to do is produce a report that shows me:
Which clients visited which sites between (say) 18:00 and 19:00. Maybe 
this is possible with sarg - but if so I haven't found it.
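
For reference, a sketch of a one-off run for exactly that kind of window,
assuming sarg accepts dd/mm/yyyy dates and HH:MM times (check the man page
for your version); the date and output directory are only examples:

sarg -d 18/06/2008-18/06/2008 -t 18:00-19:00 -o /var/www/html/sarg/18-19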


Thanks again Indunil

Regards

Richard.



[squid-users] Anyone know where surftrackr has gone?

2008-06-15 Thread Richard Chapman

Hi,

I have been looking into various analysis and reporting tools for squid 
logs. I have tried both webalizer and sarg - and both have strengths - 
and (for me) weaknesses. I have seen references to squidalyser which I 
think became Surftrackr - and I have found references to new releases 
as recently as march 2008 - but all the download links I try seem to be 
dead.


Ideally I would like to find an RPM package - but source would also be fine.

Does anyone know anything about this package - and where I can get it?

Thanks

Richard.





[squid-users] Help with sarg usage

2008-06-15 Thread Richard Chapman

Hi

I have sarg installed and working - but have not found much 
documentation other than the man pages - which are fairly brief.


Can anyone help me with these issues with sarg.

1) It appears to only use the current squid log by default - and the 
documentation doesn't seem to tell me how I can get it to read several 
squid log files.
2) When I first installed it - and told it to place reports in 
/var/www/html/sarg - I could browse the report fine as 
http://192.168.0.201/sarg - but for some reason - I now get a 
Forbidden error.

3) How do you access the reports at the default location: /var/www/sarg?
4) If I specify the -t HH-HH option to restrict the report to a time 
range - it doesn't seem to behave as I would expect. I get far less 
traffic reported than I would expect over the period. I can't find any 
way to check that it is reporting all the relevant traffic.


Thanks

Richard.



Re: [squid-users] Anyone know where surftrackr has gone?

2008-06-15 Thread Richard Chapman

Hi Henrik

I tried contacting him via an email address from the freshmeat website - 
and the email bounced. Any ideas where to find other contact details?


Regards

Richard.


Henrik Nordstrom wrote:

On sön, 2008-06-15 at 14:57 +0800, Richard Chapman wrote:

  
I have been looking into various analysis and reporting tools for squid 
logs. I have tried both webalizer and sarg - and both have strengths - 
and (for me) weaknesses. I have seen references to squidalyser which I 
think became Surftrackr - and I have found references to new releases 
as recently as march 2008 - but all the download links I try seem to be 
dead.



Have you tried contacting the author?

Regards
Henrik
  




[squid-users] Where are the ircache.net cgi for creating graphs?

2008-06-13 Thread Richard Hubbell
Hello squid world,

I was looking for the scripts that create the graphs on ircache.net, I found 
everything but the cgi scripts.  Does anyone know where to get them?

Or maybe there's another package that's preferred to make use of RRD for Squid?




  


[squid-users] Job Opportunity: Squid support engineer at ApplianSys, UK

2008-06-12 Thread Richard Wall
Hello,

I work for a company called ApplianSys, in Coventry, UK. We sell a
range of server appliances, one of which, the CACHEBox, is based
around Squid 2.6

http://www.appliansys.com/products/

We're currently recruiting for developers, support engineers and sales
people. We are particularly looking for people familiar with
installing and administering Squid and Linux

There is a list of current vacancies on our website and details about
how to apply:

http://www.appliansys.com/company/employment.html

If you're looking for a job and are able to re-locate to the Midlands
(UK) we'd really like to hear from you.

-RichardW.
-- 
Richard Wall
Support Engineer
ApplianSys Ltd
http://www.appliansys.com

(t) +44 (0)24 7643 0094
(f) +44 (0)87 0762 7063
(e) [EMAIL PROTECTED]


[squid-users] Web Usage Statistics by Client IP

2008-06-11 Thread Richard Chapman

Hi

I am new to Squid - but found it very easy to get going. I am running 
Squid 2.6 on CentOS 5.1 Linux, and it works brilliantly.


I was hoping to be able to track down the Bandwidth Usage Stats for 
individual client machines - to try to find out where all our bandwidth 
is going. I have found the Cache Manager Statistics Reports - but 
haven't found one with this info broken down by Client.
Is it there somewhere in one of the reports - or do I need some 
additional reporting tool?
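
For reference, a quick sketch that gives a rough per-client total without any
extra tool, assuming the default native access.log format where field 3 is the
client address and field 5 the reply size in bytes:

# top 20 clients by bytes delivered through the proxy
awk '{bytes[$3] += $5} END {for (ip in bytes) printf "%12d %s\n", bytes[ip], ip}' \
    /var/log/squid/access.log | sort -rn | head -20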


Thanks for the help.

Richard.





RE: [squid-users] SSL Accel - Reverse Proxy

2008-05-02 Thread Keith M. Richard
Tory,

If you are going to use certificates from a provider like Verisign
or similar, and will be using an intermediate cert, you will need to chain
them together so as to avoid errors in end users' web browsers.
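
A minimal sketch of the chaining, with placeholder file names - the server
certificate first, then the intermediate(s), concatenated into the PEM file
Squid serves:

cat www.mywebsite.com.crt intermediate-ca.crt > /etc/squid/chained.pem

# squid.conf - paths and names are examples only
https_port 443 accel vhost defaultsite=www.mywebsite.com cert=/etc/squid/chained.pem key=/etc/squid/private.key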

Keith

 -Original Message-
 From: Amos Jeffries [mailto:[EMAIL PROTECTED]
 Sent: Friday, May 02, 2008 7:26 AM
 To: Tory M Blue
 Cc: squid-users@squid-cache.org
 Subject: Re: [squid-users] SSL Accel - Reverse Proxy
 
 Tory M Blue wrote:
  On Thu, May 1, 2008 at 2:02 AM, Amos Jeffries [EMAIL PROTECTED]
 wrote:
   You could make a second peer connection using HTTPS between squid and the
   back-end server and ACL the traffic so that only requests coming in via SSL
   are sent over that link. Leaving non-HTTPS incoming going over the old HTTP
   link for whatever the server wants to do.
 
  Thanks Amos
 
  Not sure that I made myself clear or that I understand your
suggestion.
 
 You made the situation clear. I mentioned the only reasonably easy
 solution.
 If you didn't understand me, Keith M Richad provided you with the
exact
 squid.conf settings I was talking about before.
 
 Squid can talk HTTPS to the clients, HTTPS to the web server, and
still
 sit in the middle caching files. Exactly as it would for HTTP.
 All you need is SSL certificates for each side of squid. Configured as
 Keith gave you.
 
 
  I need to allow squid to connect and talk to my servers via http
  (only), i want squid to handle the SSL termination (SSL
acceleration,
  take the overhead off the back end servers).
 
  However since squid talks to the back end servers via http (and not
  https on pages that require https), I need to somehow tell the
server
  that the original connection, or the connection that will go back to
  the client will be https, even though the server is responding via
  http..
 
  I handle secure and non secure fine now, the same website for
example.
  apps.domain.com, listens to both 443 and 80, so squid can handle
  secure and non secure. there is code on apps.domain.com that checks
  the incoming protocol to verify that's it's secure, if not it sends
a
  secure url for the client to come back in on.  As you can see if I
  allow Squid to handle the SSL portion, the back end server has no
way
  of knowing (the piece I'm missing) if the actual client connection
is
  secure or not. (hard to explain possibly)..
 
  Client  apps.domain.com (443) Squid - backend server
 (80)
  backend server (80)  -- Squid apps.domain.com (443)
--
  Client (443)
 
  I'm wondering if Squid can tell the peer (server) that the original
  request was in fact secure, so that we can tell the application,
feel
  free to respond with the secure data via non secure port, because
  squid will encrypt the server response and get back to the client
via
  https
 
  Sorry kind of long winded.
  Tory
 
 
 --
 Please use Squid 2.6.STABLE20 or 3.0.STABLE5


RE: [squid-users] SSL Accel - Reverse Proxy

2008-05-01 Thread Keith M. Richard
Tory,

You can try something like this.

acl port443 port 443
acl SAFE_ports port 443 #https
https_port 443 accel vhost vport defaultsite=www.mywebsite.com
cache_peer [backend webserver IP] parent 443 0 no-query originserver ssl login=PASS name=httpsWeb
acl ourWebSite dstdomain www.mywebsite.com
cache_peer_access httpsWeb allow ourWebSite



 -Original Message-
 From: Tory M Blue [mailto:[EMAIL PROTECTED]
 Sent: Thursday, May 01, 2008 11:21 AM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] SSL Accel - Reverse Proxy
 
 On Thu, May 1, 2008 at 2:02 AM, Amos Jeffries [EMAIL PROTECTED]
 wrote:
 
   You could make a second peer connection using HTTPS between squid and the
   back-end server and ACL the traffic so that only requests coming in via SSL
   are sent over that link. Leaving non-HTTPS incoming going over the old HTTP
   link for whatever the server wants to do.
 
 Thanks Amos
 
 Not sure that I made myself clear or that I understand your
suggestion.
 
 I need to allow squid to connect and talk to my servers via http
 (only), i want squid to handle the SSL termination (SSL acceleration,
 take the overhead off the back end servers).
 
 However since squid talks to the back end servers via http (and not
 https on pages that require https), I need to somehow tell the server
 that the original connection, or the connection that will go back to
 the client will be https, even though the server is responding via
 http..
 
 I handle secure and non secure fine now, the same website for example.
 apps.domain.com, listens to both 443 and 80, so squid can handle
 secure and non secure. there is code on apps.domain.com that checks
 the incoming protocol to verify that's it's secure, if not it sends a
 secure url for the client to come back in on.  As you can see if I
 allow Squid to handle the SSL portion, the back end server has no way
 of knowing (the piece I'm missing) if the actual client connection is
 secure or not. (hard to explain possibly)..
 
 Client  apps.domain.com (443) Squid - backend server
(80)
 backend server (80)  -- Squid apps.domain.com (443) --
 Client (443)
 
 I'm wondering if Squid can tell the peer (server) that the original
 request was in fact secure, so that we can tell the application, feel
 free to respond with the secure data via non secure port, because
 squid will encrypt the server response and get back to the client via
 https
 
 Sorry kind of long winded.
 Tory


Re: [squid-users] RAID is good

2008-03-27 Thread Richard Wall
On Thu, Mar 27, 2008 at 1:59 AM, Marcus Kool
[EMAIL PROTECTED] wrote:
snip
  Only one cache directory per disk is recommended while you have 4 cache
  directories on one file system.  Consider dropping 2 COSS cache directories
  so that you have 1 COSS and 1 AUFS.

Yep, I understand. Unfortunately in that area I'm restricted by a 2GB
file size limit, beyond which various system binaries don't recognise
the COSS files. Don't ask. :)
There are ways round it though and for a one off benchmark I can setup
a 7.5 GB COSS and a 2.5GB AUFS store. I'll let you know if this
improves the benchmark results.

I also intend to run a benchmark on the same hardware but with JBOD,
and hopefully, given time I'll be able to benchmark a cluster of
boxes.

  Kinkie and I rewrote the RAID for Squid section of the FAQ and
  it includes more details about price, performance and reliability trade-offs.

Yep, that's a really good write up. Thanks.

-RichardW.


RE: [squid-users] How do I allow access to a specific URL:port_number

2008-03-26 Thread Keith M. Richard
If you are using this as an accelerator, try this in the squid.conf:

acl SAFE_ports port 10020 #random port
http_port 10020 accel vhost vport defaultsite=www.myisp.com
cache_peer [IP of website] parent 10020 0 no-query originserver name=RPServ
acl ranport myport 10020
cache_peer_access httpsWeb deny ranport
cache_peer_access RPServ allow ranport ourWebSite



Ed Flecko wrote:
 Hi folks,
 Our ISP has a SPAM server with a web page that you have to be able to
 reach in order to manage your SPAM settings.

 I can't figure out how to tell Squid to allow this page.

 The web page is: myisp.com:10020

 I've tried using the always_direct method 

Unless you are using a parent cache, this will have no effect.

 and adding the 10020 port
 number to my Safe_ports,

Unless you have modified the line...

Safe_ports port 1025-65535  # unregistered ports

...this is redundant.

  but neither method worked.

 I always get:

 The following error was encountered:

 * Access Denied.

   Access control configuration prevents your request from being
 allowed at this time. Please contact your service provider if you feel
 this is incorrect.

 Suggestions???
   

Supply more details (your squid.conf without comments, the real URL used

to access the SPAM page), or hit up the FAQ 
(http://wiki.squid-cache.org/SquidFaq/SquidAcl).

 Thank you,
 Ed
   

Chris


[squid-users] Forwarding client ip address

2008-03-26 Thread Keith M. Richard
I am using squid 2.6 Stable18 as a Reverse Proxy / Accelerator to my
internal website. I would like to have it forward the requesting
client's IP address to the web server and I am unsure how to do this. 

Thanks,
Keith
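
For reference: Squid 2.6 adds an X-Forwarded-For request header carrying the
client's address by default, controlled by the forwarded_for directive; the
web server or application then reads that header instead of the TCP peer
address. A minimal squid.conf sketch:

# on is the default; set to off to hide the client address instead
forwarded_for on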


Re: [squid-users] RAID is good (was: Re: [squid-users] Hardware setup ?)

2008-03-26 Thread Richard Wall
On Tue, Mar 25, 2008 at 1:23 PM, Marcus Kool
[EMAIL PROTECTED] wrote:
 I wish that the wiki for RAID is rewritten.
  Companies depend on internet access and a working Squid proxy,
  and therefore the advocated "no problem if a single disk fails"
  is not from today's reality.
  One should also consider the difference between
  simple RAID and extremely advanced RAID disk systems

Recently I've spent a fair bit of time benchmarking a Squid system
whose COSS and AUFS storage (10GB total) + access logging are on a
RAID0 array of two consumer grade SATA disks. For various reasons, I'm
stuck with RAID0 for now, but I thought you might be interested to
hear that the box performs pretty well.

The box can handle a 600 - 700 Req/Sec Polygraph polymix-4 benchmark with a
~40% document hit ratio.
Doubling the total storage to 20GB increased the doc hit ratio to
55%, but hit response times began to increase noticeably during the top
phases.

CPU was about 5% idle during the top phases. Logs were being rotated
and compressed every five minutes. CPU usage never

Some initial experiments suggest that removing RAID doesn't
particularly improve performance, but I intend to do a more thorough
set of benchmarks soon.

I'm not sure how relevant this is to your discussion. I don't know how
RAID0 performance is expected to compare to RAID5.

I'll post here if and when I do more benchmarking without RAID.

-RichardW.

== Spec ==
CPU: Intel(R) Celeron(R) CPU 2.53GHz
RAM: 3GB
Disks: 2 x Seagate Barracuda 160GB
Squid:  2.6.STABLE17
Linux Kernel: 2.6.23.8
FS: reiserfs

==  Squid Conf (extract) ==
# NETWORK OPTIONS
http_port 800 transparent

# MEMORY CACHE OPTIONS
cache_mem 152 MB
maximum_object_size_in_memory 50 KB

# DISK CACHE OPTIONS
cache_replacement_policy lru
# TOTAL AVAILABLE STORAGE: 272445 MB
# MEMORY STORAGE LIMIT: 46694 MB
# CONFIGURED STORAGE LIMIT: 1 MB
cache_dir coss /squid_data/squid/coss0 2000 max-size=16000
cache_swap_log /squid_data/squid/%s
cache_dir coss /squid_data/squid/coss1 2000 max-size=16000
cache_swap_log /squid_data/squid/%s
cache_dir coss /squid_data/squid/coss2 2000 max-size=16000
cache_swap_log /squid_data/squid/%s
cache_dir aufs /squid_data/squid 4000 16 256
max_open_disk_fds 0
maximum_object_size 2 KB

# LOGFILE OPTIONS
debug_options ALL,1
buffered_logs on
logfile_rotate 10

# MISCELLANEOUS
memory_pools_limit 10 MB
memory_pools off
cachemgr_passwd none all
client_db off


Re: [squid-users] NTLM authentication testing

2008-02-19 Thread Richard Wall
On 2/18/08, Adrian Chadd [EMAIL PROTECTED] wrote:
 Thats basically right - Squid doesn't handle the NTLM itself, it just
  passes the blob right through. The helper framework can handle hundreds
  of requests a second without too much thought; I'd like to spend some
  time figuring out what Samba is doing thats so slow. I thought that winbind
  was actually handling the NTLM challenge/response stuff itself and caching
  data rather than passing it upstream to the DC for every request.
  I haven't yet looked at it, so I can't say for certain that is correct.

I've done some pretty unscientific tests using curl against our Squid box.
 * CPU: Intel(R) Celeron(R) CPU 2.53GHz
 * MemTotal: 2075628 kB
 * Squid2.6 STABLE17 (using epoll)
 * NTLM auth_param ntlm children 100

I've been running multiple curl instances on four clients as follows:
{{{

for i in {1..100}; do
  while true; do
    curl -x 192.168.1.97:800 \
     --proxy-ntlm \
     --proxy-user DOMAINNAME\\username:password \
     --include \
     --silent \
     --header Pragma: http://www.mydomain.com/index.html > /dev/null
  done &
  sleep 1
done

}}}

According to cachemgr this is generating a load of ~250req/sec.

client_http.requests = 252.175917/sec
client_http.hits = 126.159625/sec
client_http.errors = 0.00/sec
client_http.kbytes_in = 90.109732/sec
client_http.kbytes_out = 2735.581866/sec
client_http.all_median_svc_time = 0.851301 seconds
client_http.miss_median_svc_time = 0.000911 seconds
client_http.nm_median_svc_time = 0.00 seconds
client_http.nh_median_svc_time = 0.00 seconds
client_http.hit_median_svc_time = 0.806511 seconds

First problem is that you have to reinterpret the Squid reported hit
ratios when using NTLM auth. Only half of these are hits, the other
half being TCP_DENIED/407 that form part of the NTLM auth negotiation.

Second problem is that the majority of requests seem to result in auth
requests to the DC. There is an article describing Win2003 performance
counters showing Number of auth requests / sec, but those counters
don't seem to exist on my copy.
 * http://support.microsoft.com/kb/928576

Instead I used the difference in a minute of the total number of
security events (as shown in the title bar of the Windows event
viewer).
 * ~127 successful auth events per second
...which is about the same as the client_http.hits reported by squid.

I have the following setting defined in smb.conf:
 * winbind cache time = 10
...which clearly isn't being respected.

 * Does anyone else see this behaviour or have you managed to get auth
requests cached by winbindd?
 * Can winbindd even do caching of auth reqests or is it only
concerned with caching other domain data?

If anyone has answers, I'd really appreciate to hear from you. I'll
continue to experiment and will post my findings.

-RichardW.


Re: [squid-users] NTLM authentication testing

2008-02-19 Thread Richard Wall
On 2/19/08, Adrian Chadd [EMAIL PROTECTED] wrote:
 G'day,
  THanks for this stuff.
  Could you possibly try hitting it hard enough to cause Squid to back up
  on pending authentications? It'd be good to replicate a fail situation;
  we can then take that to the samba guys and ask wtf?

Adrian,

Yep I've seen that and it's easy to reproduce by lowering the number
of authenticators. So when I start squid configured with:
auth_param ntlm children 50

# /usr/local/squid/sbin/squid -d100 -X -N  -D -f /RamDisk/squid.conf
2008/02/19 14:29:09| WARNING: All ntlmauthenticator processes are busy.
2008/02/19 14:29:09| WARNING: up to 50 pending requests queued
2008/02/19 14:29:11| storeDirWriteCleanLogs: Starting...
2008/02/19 14:29:11| WARNING: Closing open FD   64
2008/02/19 14:29:11| commSetEvents: epoll_ctl(EPOLL_CTL_DEL): failed
on fd=64: (1) Operation not permitted
2008/02/19 14:29:11|   Finished.  Wrote 93 entries.
2008/02/19 14:29:11|   Took 0.0 seconds (140060.2 entries/sec).
FATAL: Too many queued ntlmauthenticator requests (251 on 50)
Aborted

# echo $?
134

It exits immediately with return code 134.

-RichardW.


RE: [squid-users] Random image generator w/ reverse-proxy

2008-02-19 Thread Keith M. Richard
Amos,

I have a slightly older version of squid and it is setup as an
accelerator. Let me give you the layout. 

Domain name: www.my-company.org
Domain IP: 204.public address
DMZ IP Addr: 172.220.201.135 (squid server)
Internal IP: 192.1.0.59 (Web Server)
SQUID: Loads with the -D for no DNS and the host file has an entry for
192.1.0.59 as www.my-company.org.

Here is a dump from my cache.log from the last restart of squid:
2008/02/18 16:32:29| Starting Squid Cache version 2.6.STABLE6 for
i686-redhat-linux-gnu...
2008/02/18 16:32:29| Process ID 23575
2008/02/18 16:32:29| With 1024 file descriptors available
2008/02/18 16:32:29| Using epoll for the IO loop
2008/02/18 16:32:29| DNS Socket created at 0.0.0.0, port 32938, FD 5
2008/02/18 16:32:29| Adding domain groupbenefits.org from
/etc/resolv.conf
2008/02/18 16:32:29| Adding nameserver 204.xxx.xxx.xxx from
/etc/resolv.conf
2008/02/18 16:32:29| Adding nameserver 204.xxx.xxx.xxx from
/etc/resolv.conf
2008/02/18 16:32:29| User-Agent logging is disabled.
2008/02/18 16:32:29| Referer logging is disabled.
2008/02/18 16:32:29| Unlinkd pipe opened on FD 10
2008/02/18 16:32:29| Swap maxSize 1024 KB, estimated 787692 objects
2008/02/18 16:32:29| Target number of buckets: 39384
2008/02/18 16:32:29| Using 65536 Store buckets
2008/02/18 16:32:29| Max Mem  size: 8192 KB
2008/02/18 16:32:29| Max Swap size: 1024 KB
2008/02/18 16:32:29| Local cache digest enabled; rebuild/rewrite every
3600/3600 sec
2008/02/18 16:32:29| Rebuilding storage in /var/cache/squid (CLEAN)
2008/02/18 16:32:29| Using Least Load store dir selection
2008/02/18 16:32:29| Current Directory is /
2008/02/18 16:32:29| Loaded Icons.
2008/02/18 16:32:29| Accepting accelerated HTTP connections at 0.0.0.0,
port 80, FD 12.
2008/02/18 16:32:29| Accepting accelerated HTTP connections at 0.0.0.0,
port , FD 13.
2008/02/18 16:32:29| Accepting HTTPS connections at 0.0.0.0, port 443,
FD 14.
2008/02/18 16:32:29| Accepting ICP messages at 0.0.0.0, port 3130, FD
15.
2008/02/18 16:32:29| WCCP Disabled.
2008/02/18 16:32:29| Configuring Parent 192.1.0.59/443/0
2008/02/18 16:32:29| Configuring Parent 192.1.0.59//0
2008/02/18 16:32:29| Ready to serve requests.

All I really want to do is set up an HTTP accelerator for this internal
website. I have read everything I can find about this and I guess I do
not understand the options. I do know that the options in squid.conf
change rapidly and I am not running the newest version. I am running the
version that is loaded on my Red Hat server. I have downloaded the
newest version and am planning an upgrade very soon, but I need to
get this going first.

Thanks,
Keith
 -Original Message-
 From: Amos Jeffries [mailto:[EMAIL PROTECTED]
 Sent: Monday, February 18, 2008 5:13 PM
 To: Keith M. Richard
 Cc: squid-users@squid-cache.org
 Subject: Re: [squid-users] Random image generator w/ reverse-proxy
 
  All,
 
  I have a web page on my site that has a randomly
generated
  image (Alpha numeric picture) to allow users to register. I am using
 squid
  as an accelerator in my DMZ to this internal web server. Right now
the
  image is coded as an unsecured (http) link/servlet on port ,
which
 is
  just a random port. This is embedded in a HTTPS page. If I don't use
 squid
  it works but through squid if fails to display the image.
  I have checked the firewall and it is properly
configured.
  When I check the firewalls log, it shows the request to  from
the
  outside, but those same requests are never passed through squid for
some
  reason. I have also run Wireshark on the squid server to capture the
  traffic as users made requests and I see the TCP [SYN] from the
client
 to
  the squid servers IP address, but then the squid sends a TCP [RST,
ACK].
  When I watch the same request being made from the squid server
running
  FireFox to the internal web server it makes the handshake. I cannot
 figure
  out why the reset is happening.
 
 You have a forwarding loop in the config below.
 
  I modified the logformat so that I can get some readable
data
 and
  this is what I get from the output:
 
  18/Feb/2008:13:03:12 -0600 xxx.xxx.xxx.xxx:51651 192.168.0.135:
  TCP_MISS/404 697 GET
  http://www.my-company.org/randomimages/servlet/org.groupbenefits.por
  tal.RandomImageGenServlet? FIRST_UP_PARENT/192.1.0.59 text/html
 
  **
  # Basic config
  acl all src 0.0.0.0/0.0.0.0
  acl manager proto http cache_object
  acl localhost src 127.0.0.1/255.255.255.255
  acl to_localhost dst 127.0.0.0/8
  acl SSL_ports port 443
  acl Safe_ports port 80 # http
  acl Safe_ports port 21 # ftp
  acl Safe_ports port 443 # https
  acl Safe_ports port 70 # gopher
  acl Safe_ports port 210 # wais
  acl Safe_ports port 8080 # safe
  acl Safe_ports port  # safe
 
 Check #1. access to port  is possible. Great.
 
  acl Safe_ports port 1025-65535 # unregistered ports
  acl

Re: [squid-users] NTLM authentication testing

2008-02-19 Thread Richard Wall
On 2/19/08, Guido Serassio [EMAIL PROTECTED] wrote:
  At 14:40 19/02/2008, Richard Wall wrote:
  First problem is that you have to reinterpret the Squid reported hit
  ratios when using NTLM auth. Only half of these are hits, the other
  half being TCP_DENIED/407 that form part of the NTLM auth negotiation.
 This is caused by the NTLM over HTTP authentication sequence, look
  here for details:
  http://davenport.sourceforge.net/ntlm.html

Guido,

Yep, I've looked at it, but have not completely absorbed it yet :)

  Second problem is that the majority of requests seem to result in auth
  requests to the DC. There is an article describing Win2003 performance
  counters showing Number of auth requests / sec, but those counters
  don't seem to exist on my copy.
* http://support.microsoft.com/kb/928576
 Correct, you should request the hotfix to Microsoft.

Thanks will search it out.

 What Samba version are you using?
  I remember that in Samba 3.0.25 there was big changes into winbindd
  regarding off-line logon support, but I don't know if this could help.

# /usr/upgrade/samba/sbin/winbindd --version
Version 3.0.24

So I guess I'll try compiling the latest version. Thanks for the tip.

  Another question, what type of NTLM authentication is supported by curl ?
  Lan manager/NTLMv1 or full NTLMv2 ? (See the previous link for details)

I'm not sure, but in full debug mode, curl will show the various
headers it exchanges with the server.
It seems to correspond to:
 * http://devel.squid-cache.org/ntlm/client_proxy_protocol.html

...but of course we're starting at point 4 which means that in real
life, there'd be even more squid requests I guess.

Anyway, here's the output from curl. Does this give enough information
to work out which type is being used?

{{{

* About to connect() to proxy 10.0.0.12 port 800 (#0)
*   Trying 10.0.0.12... connected
* Connected to 10.0.0.12 (10.0.0.12) port 800 (#0)
* Proxy auth using NTLM with user 'COVENTRYOFFICE\stafftest'
 GET http://www.squid-cache.org/Images/img4.jpg HTTP/1.1
 Proxy-Authorization: NTLM TlRMTVNTUAABBoIIAAA=
 User-Agent: curl/7.16.4 (i486-pc-linux-gnu) libcurl/7.16.4 OpenSSL/0.9.8e 
 zlib/1.2.3.3 libidn/1.0
 Host: www.squid-cache.org
 Accept: */*
 Proxy-Connection: Keep-Alive

* HTTP 1.0, assume close after body
 HTTP/1.0 407 Proxy Authentication Required
 Server: squid/2.6.STABLE17
 Date: Tue, 19 Feb 2008 15:03:05 GMT
 Content-Type: text/html
 Content-Length: 1371
 Expires: Tue, 19 Feb 2008 15:03:05 GMT
 X-Squid-Error: ERR_CACHE_ACCESS_DENIED 0
 Proxy-Authenticate: NTLM
TlRMTVNTUAACDgAOADAGgokAN+ZK+JnmUOEAAIoAigA+Q09WRU5UUllPRkZJQ0UCABwAQwBPAFYARQBOAFQAUgBZAE8ARgBGAEkAQwBFAAEAEABBAFAALQBUAEUAUwBUADIABAAcAGMAYQBjAGgAZQAuAGUAMgBiAG4ALgBvAHIAZwADAC4AYQBwAC0AdABlAHMAdAAyAC4AYwBhAGMAaABlAC4AZQAyAGIAbgAuAG8AcgBnAAA=
 X-Cache: MISS from ntlmsquidbox.test
 X-Cache-Lookup: NONE from ntlmsquidbox.test:800
 Via: 1.0 ntlmsquidbox.test:800 (squid/2.6.STABLE17)
* HTTP/1.0 proxy connection set to keep alive!
 Proxy-Connection: keep-alive

* Ignoring the response-body
{ [data not shown]
* Connection #0 to host 10.0.0.12 left intact
* Issue another request to this URL:
'http://www.squid-cache.org/Images/img4.jpg'
* Re-using existing connection! (#0) with host 10.0.0.12
* Connected to 10.0.0.12 (10.0.0.12) port 800 (#0)
* Proxy auth using NTLM with user 'COVENTRYOFFICE\stafftest'
 GET http://www.squid-cache.org/Images/img4.jpg HTTP/1.1
 Proxy-Authorization: NTLM 
 TlRMTVNTUAADGAAYAEAYABgAWA4ADgBwCQAJAH4IAAgAhwAABoKJAFb2ATKsj8TWAA6YY1ymLs5AgU5/lxbNCYtJnhdC67O5c0NPVkVOVFJZT0ZGSUNFc3RhZmZ0ZXN0cG9seXNydjE=
 User-Agent: curl/7.16.4 (i486-pc-linux-gnu) libcurl/7.16.4 OpenSSL/0.9.8e 
 zlib/1.2.3.3 libidn/1.0
 Host: www.squid-cache.org
 Accept: */*
 Proxy-Connection: Keep-Alive

* HTTP 1.0, assume close after body
 HTTP/1.0 200 OK
 Date: Tue, 19 Feb 2008 15:00:26 GMT
 Server: Apache/2.2.6 (FreeBSD) mod_ssl/2.2.6 OpenSSL/0.9.7e-p1 DAV/2
PHP/5.2.5 with Suhosin-Patch
 Last-Modified: Mon, 22 Jan 2007 10:51:58 GMT
 ETag: 6daaa8-7083-d9b9ef80
 Accept-Ranges: bytes
 Content-Length: 28803
 Content-Type: image/jpeg
 Age: 159
 X-Cache: HIT from ntlmsquidbox.test
HTTP/1.0 407 Proxy Authentication Required
Server: squid/2.6.STABLE17
Date: Tue, 19 Feb 2008 15:03:05 GMT
Content-Type: text/html
Content-Length: 1371
Expires: Tue, 19 Feb 2008 15:03:05 GMT
X-Squid-Error: ERR_CACHE_ACCESS_DENIED 0
Proxy-Authenticate: NTLM
TlRMTVNTUAACDgAOADAGgokAN+ZK+JnmUOEAAIoAigA+Q09WRU5UUllPRkZJQ0UCABwAQwBPAFYARQBOAFQAUgBZAE8ARgBGAEkAQwBFAAEAEABBAFAALQBUAEUAUwBUADIABAAcAGMAYQBjAGgAZQAuAGUAMgBiAG4ALgBvAHIAZwADAC4AYQBwAC0AdABlAHMAdAAyAC4AYwBhAGMAaABlAC4AZQAyAGIAbgAuAG8AcgBnAAA=
X-Cache: MISS from ntlmsquidbox.test
X-Cache-Lookup: NONE from ntlmsquidbox.test:800
Via: 1.0 ntlmsquidbox.test:800 (squid/2.6.STABLE17)
Proxy-Connection: keep-alive

HTTP/1.0 200 OK
Date: Tue, 19 Feb 2008 15:00:26

[squid-users] Squid automatically builds the COSS storage file

2008-02-18 Thread Richard Wall
Hi,

My first post to squid-users, so first let me thank the developers for
their work on Squid.

I'm working with 2.6.STABLE17 and am experimenting with COSS storage.
According to the COSS Faq page, you have to create the COSS storage
file (using dd) before pointing Squid to it:
 * http://wiki.squid-cache.org/SquidFaq/CyclicObjectStorageSystem
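
For completeness, the manual creation the FAQ describes is just a zero-filled
file of the configured size. A sketch, with the path and the 2000 MB size
chosen only as examples to match a "cache_dir coss ... 2000" line:

dd if=/dev/zero of=/squid_data/squid/coss0 bs=1048576 count=2000
# make the file owned by whatever cache_effective_user Squid runs as
chown squid:squid /squid_data/squid/coss0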

I have found that, in fact, you can simply configure Squid with some
non-existent COSS file and if it is not there and Squid has
permissions to write to the parent directory, Squid will build the
file for you.

I was going to update the wiki page, but thought I'd check here first,
to see if there are any benefits to creating the COSS file manually or
circumstances in which it is required?

Before realising that I could let Squid manage the COSS file creation,
I wrote a script to check for the existence of the file and check that
it had the expected size. If not then it DDs the file as part of the
squid start up.

This seemed to work, but after running squid for a while, I find that
the file size has increased and therefore my script thinks there's a
problem.

 * I wondered whether this is expected behaviour?
 * If the size of the COSS file does change unexpectedly, does it
signify a problem?
 * I find that I can reuse an existing COSS file, but tell Squid that
its capacity is greater than the size of the existing file. Should
Squid complain about this, or will it dynamically resize the file once
it reaches capacity?

Thanks in advance.

-RichardW.


Re: [squid-users] NTLM authentication testing

2008-02-18 Thread Richard Wall
Hi Adrian,

My comments are below.

On 2/18/08, Adrian Chadd [EMAIL PROTECTED] wrote:
  I've got one customer who is asking for some testing of Squid in a large
  NTLM environment. The problem, as those who have tried it will have
  encountered, is that although Squid can keep up with it, the Samba/Winbind 
 stuff
  plainly just can't.

This is something that I'm currently very interested in. I had heard
that NTLM auth could significantly reduce Squid's throughput but
haven't seen any figures. I couldn't tell from your message above
whether you / your customer have already tried deploying Squid / NTLM
auth in a live environment. If so, I'm really interested to know what
request rate Squid was able to maintain.

I understand from the documentation, that the three stage NTLM
authentication negotiation has to be repeated for every new connection
and that this is  the bottleneck. I'd assumed that winbindd was able
to CACHE the NTLM user credentials, so that subsequent requests would
not result in network calls to the NTLM authentication server. Is this
your understanding?

  So I'm looking for some tools to let me craft and fire off NTLM type 
 authentication
  stuff to a proxy. I don't really care if they're free or not, unix or 
 windows.
  If anyone knows of anything that'll let me create -lots- of NTLM 
 authentication
  requests and fire them through a proxy then please, please let me know.

We were considering the possibility of using something like Selenium
to control the web browser and send requests that way, but some further
googling suggests that curl may be able to send NTLM Proxy auth
requests.

  Hopefully the result from all of this will be slightly better NTLM 
 interoperability.

-RichardW.


Re: [squid-users] NTLM authentication testing

2008-02-18 Thread Richard Wall
On 2/18/08, Richard Wall [EMAIL PROTECTED] wrote:
  googling suggests that curl may be able to send NTLM Proxy auth
  requests.

Sorry forgot to include the link:

 * http://curl.haxx.se/docs/manpage.html#--proxy-ntlm

-RichardW.


Re: [squid-users] NTLM authentication testing

2008-02-18 Thread Richard Wall
On 2/18/08, Adrian Chadd [EMAIL PROTECTED] wrote:
 Well, I'll be stuffed:

  violet:~ adrian$ curl --help | grep ntlm
 --ntlm  Enable HTTP NTLM authentication (H)
 --proxy-ntlmEnable NTLM authentication on the proxy (H)

  I wonder how well it'll work. Oh well, time to have a play! Thanks!

It does seem to work with the following options:

curl -v -x 192.168.1.97:800 --proxy-ntlm --proxy-user
DOMAINNAME\\username:password  http://www.google.com/

The -v reveals the details of the NTLM authentication headers.

I'll let you know if I get any further.

-RichardW.


[squid-users] Random image generator w/ reverse-proxy

2008-02-18 Thread Keith M. Richard
All,
 
    I have a web page on my site that has a randomly generated image 
(Alpha numeric picture) to allow users to register. I am using squid as an 
accelerator in my DMZ to this internal web server. Right now the image is coded 
as an unsecured (http) link/servlet on port , which is just a random port. 
This is embedded in an HTTPS page. If I don't use squid it works, but through 
squid it fails to display the image.
    I have checked the firewall and it is properly configured. When I 
check the firewall's log, it shows the request to  from the outside, but 
those same requests are never passed through squid for some reason. I have also 
run Wireshark on the squid server to capture the traffic as users made requests 
and I see the TCP [SYN] from the client to the squid server's IP address, but 
then the squid sends a TCP [RST, ACK]. When I watch the same request being made 
from the squid server running FireFox to the internal web server it makes the 
handshake. I cannot figure out why the reset is happening.
I modified the logformat so that I can get some readable data and this 
is what I get from the output:
 
18/Feb/2008:13:03:12 -0600 xxx.xxx.xxx.xxx:51651 192.168.0.135: 
TCP_MISS/404 697 GET 
http://www.my-company.org/randomimages/servlet/org.groupbenefits.por
tal.RandomImageGenServlet? FIRST_UP_PARENT/192.1.0.59 text/html
 
**
# Basic config
acl all src 0.0.0.0/0.0.0.0
acl manager proto http cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 8080 # safe
acl Safe_ports port  # safe
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
 
# Accelerator Mode
http_port 80 defaultsite=www.my-company.org
http_port 192.1.0.59: defaultsite=www.my-company.org
https_port 443 cert=/etc/squid/cert/portalcert.pem key=/etc/squid/cert/key.pem 
defaultsite=www.my-company.org
cache_peer 192.1.0.59 parent 443 0 no-query originserver ssl login=PASS 
name=www.my-company.org
cache_peer 192.1.0.59 parent  0 no-query originserver
visible_hostname www.my-company.org
acl ourSite dstdomain www.my-company.org
http_access allow ourSite
 
# Log file and cache options
logformat squid %tl %a:%p %la:%lp %Ss/%03Hs %st %rm %ru %Sh/%A %mt
cache_dir ufs /var/cache/squid 100 16 256
cache_swap_low 90
cache_swap_high 95
access_log /var/log/squid/access.log squid
cache_log /var/log/squid/cache.log
cache_store_log /var/log/squid/store.log
pid_filename /var/spool/squid/squid.pid
 
#Cache Manager settings
http_access allow manager localhost
http_access deny manager
http_access deny all
***
Thanks,
Keith
 


RE: [squid-users] Random image generator w/ reverse-proxy

2008-02-18 Thread Keith M. Richard
To try and give a little more data to the original message, here is output from 
squidclient:

[EMAIL PROTECTED] squidclient -r -v -p  -m GET 
http://www.my-company.org/randomimages/servlet/org.groupbenefits.portal.RandomImageGenServlet?key=C72J9aHy%2BTw%3D%0D%0A;
headers: 'GET 
http://www.my-company.org/randomimages/servlet/org.groupbenefits.portal.RandomImageGenServlet?key=C72J9aHy%2BTw%3D%0D%0A
 HTTP/1.0
Pragma: no-cache
Accept: */*

'
HTTP/1.0 404 Not Found
Date: Mon, 18 Feb 2008 22:12:50 GMT
Server: Oracle HTTP Server Powered by Apache/1.3.19 (Win32) mod_plsql/3.0.9.8.5 
mod_ssl/2.8.1 OpenSSL/0.9.5a mod_fastcgi/2.2.10 mod_oprocmgr/1.0 mod_perl/1.25
Content-Type: text/html; charset=iso-8859-1
X-Cache: MISS from www.my-company.org
X-Cache-Lookup: HIT from www.my-company.org:80
Via: 1.0 www.my-company.org:80 (squid/2.6.STABLE6)
Connection: close

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<HTML><HEAD>
<TITLE>404 Not Found</TITLE>
</HEAD><BODY>
<H1>Not Found</H1>
The requested URL 
/randomimages/servlet/org.groupbenefits.portal.RandomImageGenServlet was not 
found on this server.<P>
</BODY></HTML>
2008-02-18 16:12:49 [1]: 0.011 secs, 61.789773 KB/s (1KB)

Even though I have told the squidclient to use port  it still goes over to 
port 80?

Also, I am using Squid Cache: Version 2.6.STABLE6. I plan to upgrade to the 
latest soon. If the latest version is what I need to fix this though, just say 
the word.

Thanks,
Keith
 All,
 
     I have a web page on my site that has a randomly generated
 image (Alpha numeric picture) to allow users to register. I am using squid
 as an accelerator in my DMZ to this internal web server. Right now the
 image is coded as an unsecured (http) link/servlet on port , which is
 just a random port. This is embedded in a HTTPS page. If I don't use squid
 it works but through squid if fails to display the image.
     I have checked the firewall and it is properly configured.
 When I check the firewalls log, it shows the request to  from the
 outside, but those same requests are never passed through squid for some
 reason. I have also run Wireshark on the squid server to capture the
 traffic as users made requests and I see the TCP [SYN] from the client to
 the squid servers IP address, but then the squid sends a TCP [RST, ACK].
 When I watch the same request being made from the squid server running
 FireFox to the internal web server it makes the handshake. I cannot figure
 out why the reset is happening.
 I modified the logformat so that I can get some readable data and
 this is what I get from the output:
 
 18/Feb/2008:13:03:12 -0600 xxx.xxx.xxx.xxx:51651 192.168.0.135:
 TCP_MISS/404 697 GET http://www.my-
 company.org/randomimages/servlet/org.groupbenefits.por
 tal.RandomImageGenServlet? FIRST_UP_PARENT/192.1.0.59 text/html
 
 **
 # Basic config
 acl all src 0.0.0.0/0.0.0.0
 acl manager proto http cache_object
 acl localhost src 127.0.0.1/255.255.255.255
 acl to_localhost dst 127.0.0.0/8
 acl SSL_ports port 443
 acl Safe_ports port 80 # http
 acl Safe_ports port 21 # ftp
 acl Safe_ports port 443 # https
 acl Safe_ports port 70 # gopher
 acl Safe_ports port 210 # wais
 acl Safe_ports port 8080 # safe
 acl Safe_ports port  # safe
 acl Safe_ports port 1025-65535 # unregistered ports
 acl Safe_ports port 280 # http-mgmt
 acl Safe_ports port 488 # gss-http
 acl Safe_ports port 591 # filemaker
 acl Safe_ports port 777 # multiling http
 acl CONNECT method CONNECT
 
 # Accelerator Mode
 http_port 80 defaultsite=www.my-company.org
 http_port 192.1.0.59: defaultsite=www.my-company.org
 https_port 443 cert=/etc/squid/cert/portalcert.pem
 key=/etc/squid/cert/key.pem defaultsite=www.my-company.org
 cache_peer 192.1.0.59 parent 443 0 no-query originserver ssl login=PASS
 name=www.my-company.org
 cache_peer 192.1.0.59 parent  0 no-query originserver
 visible_hostname www.my-company.org
 acl ourSite dstdomain www.my-company.org
 http_access allow ourSite
 
 # Log file and cache options
 logformat squid %tl %a:%p %la:%lp %Ss/%03Hs %st %rm %ru %Sh/%A %mt
 cache_dir ufs /var/cache/squid 100 16 256
 cache_swap_low 90
 cache_swap_high 95
 access_log /var/log/squid/access.log squid
 cache_log /var/log/squid/cache.log
 cache_store_log /var/log/squid/store.log
 pid_filename /var/spool/squid/squid.pid
 
 #Cache Manager settings
 http_access allow manager localhost
 http_access deny manager
 http_access deny all
 ***
 Thanks,
 Keith
 


[squid-users] Change squid process name

2008-01-30 Thread Richard

Hello!

I am running the current Squid Version on a Unix server, and don't want 
other users to see that Squid is running!


How can I prevent that Squid will be listed by tools like top?

Is it possible to change the process name?

Please excuse my bad English!

Thank you!

Best regards,

Richard


Re: [squid-users] Change squid process name

2008-01-30 Thread Richard

Thank you folks that works great!

Richard

Ammad Shah schrieb:

Richard,

This is simple; you just have to rename the squid binary. If you are using
Fedora/Red Hat, change the following:
/usr/sbin/squid
/usr/local/squid/sbin/squid
/etc/rc.d/init.d/squid

and the content of /etc/rc.d/init.d/squid.
That's all.
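
A rough sketch of the idea on a Red Hat-style layout - the replacement name is
made up and the sed is only illustrative, so check the init script by hand
afterwards:

mv /usr/sbin/squid /usr/sbin/cachehelper
sed -i 's|/usr/sbin/squid|/usr/sbin/cachehelper|g' /etc/rc.d/init.d/squid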

Ammad Shah


  

Talk to your OS vendor support and see if they've got tools to limit
access to the process list to the processes running under your uid.
Then users can only see processes running under their uid, and won't
see Squid.



Adrian

On Wed, Jan 30, 2008, Richard wrote:


Hello!

I am running the current Squid Version on a Unix server, and don't want
other users to see that Squid is running!

How can I prevent that Squid will be listed by tools like top?

Is it possible to change the process name?

Please excause my bad english!

Thank you!

Best regards,

Richard
  

--
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid
Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -







  




Re: [squid-users] dansguardian, squid, shorewall

2008-01-08 Thread Richard Pyne
On 7 Jan 2008 at 23:43, Amos Jeffries wrote:

 Adrian Chadd wrote:
  Have you configured the http_port with 'transparent' ?
  
 
 I think the problem here is that it's *dansguardian* being redirected to,
 not squid. The transparent features of squid will not work when there is
 no redirection, i.e. between dansguardian and squid.

After many hours of searching and reading, it looks like the real problem 
is that squid v3 does not support:

httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_with_proxy on
httpd_accel_uses_host_header on

which everything I can find on this problem says are necessary for 
DansGuardian + squid configuration.

I have not been able to find anything to indicate what configuration 
directives in squid 3 replace this functionality or if such even exists.

Any help or direction will be greatly appreciated.

--Richard



Re: [squid-users] dansguardian, squid, shorewall

2008-01-08 Thread Richard Pyne
Well, I solved my problem for now: I went back to squid 2.6, made NO 
configuration changes, and everything is working fine. So much for 
improved versions.

--Richard

On 8 Jan 2008 at 10:08, Richard Pyne wrote:

 On 7 Jan 2008 at 23:43, Amos Jeffries wrote:
 
  Adrian Chadd wrote:
   Have you configured the http_port with 'transparent' ?
   
  
  I think the problem here is that it's *dansguardian* being redirected to,
  not squid. The transparent features of squid will not work when there is
  no redirection, i.e. between dansguardian and squid.
 
 After many hours of searching and reading, it looks like the real problem 
 is that squid v3 does not support:
 
 httpd_accel_host virtual
 httpd_accel_port 80
 httpd_accel_with_proxy on
 httpd_accel_uses_host_header on
 
 which everything I can find on this problem says are necessary for 
 DansGuardian + squid configuration.
 
 I have not been able to find anything to indicate what configuration 
 directives in squid 3 replace this functionality or if such even exists.
 
 Any help or direction will be greatly appreciated.
 
 --Richard
 
 
 
 
 




Re: [squid-users] dansguardian, squid, shorewall

2008-01-08 Thread Richard Pyne
On 9 Jan 2008 at 11:50, Amos Jeffries wrote:

 Richard Pyne wrote:
  Well, I solved my problem for now, I went back to squid 2.6, made NO 
  configuration changes, and everything is working fine. So much for 
  improved versions.
 
 Are you sure 2.6 accepted those config lines? Did you mean 2.5 works 
 with that 2.5 syntax?
 
 In 2.6 and 3.x the accelerator host stuff has been replaced by a single 
 simple:
 
 http_port 80 accel vhost
 
 As stated in the 2.6 release notes:
http://www.squid-cache.org/Versions/v2/2.6/RELEASENOTES.html#s2
 
 Amos
 

It did not accept the config lines, but even without them 2.6 works with 
DansGuardian where squid 3.0 does not, using EXACTLY the same 
configuration for both 2.6 and 3.0.

--Richard
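
For reference, a sketch of how the old 2.5 block maps onto the 2.6/3.x syntax
described in the release notes Amos points at. The port number is illustrative,
the "transparent" option was later renamed "intercept" in 3.1+, and which form
applies depends on whether squid is intercepting traffic or acting as an
accelerator:

# squid 2.5:
#   httpd_accel_host virtual
#   httpd_accel_port 80
#   httpd_accel_with_proxy on
#   httpd_accel_uses_host_header on
# squid 2.6 / 3.0, interception setup:
http_port 3128 transparent
# squid 2.6 / 3.0, reverse-proxy / accelerator setup:
# http_port 80 accel vhost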



  


