Re: [squid-users] Squid SSL transparent proxy - SSL_connect:error in SSLv2/v3 read server hello A

2013-10-17 Thread Bill Houle
Based on more general experience - sorry, no specific Squid expertise
to offer - that line stood out to me. The cert= entry should reference a
.cer/.crt certificate file (in PEM format). Pointing it at a CSR is wrong.
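For what it's worth, here is a minimal sketch (file names and subject are assumptions for this scenario) of producing a private key plus a self-signed PEM certificate that Squid's cert= option can point at:

```shell
# Generate a private key and a self-signed certificate in PEM format.
# The subject CN is only an example matching this thread's scenario.
openssl genrsa -out server.key 2048
openssl req -new -x509 -key server.key -out server.crt -days 365 \
    -subj "/CN=www.googleapis.com"
# Squid would then be pointed at the certificate, not the request:
#   cert=/home/larry/ssl/server.crt key=/home/larry/ssl/server.key
```

Note also that a plain http_port never speaks TLS itself; https_port is Squid's directive for terminating SSL, which may be relevant to the "unknown protocol" error quoted below.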

--bill



> On Oct 17, 2013, at 9:25 AM, Larry Zhao  wrote:
>
> Hi, Bill. Thanks a lot for helping.
>
> If what you mean is this line: http_port 443 transparent
> cert=/home/larry/ssl/server.csr key=/home/larry/ssl/server.key
>
> Yes, I am sure that's a CSR file at that location.
> --
>
> Cheers ~
>
> Larry
>
>
>> On Fri, Oct 18, 2013 at 12:00 AM, Bill Houle  wrote:
>> Did you really point the Cert to the CSR (CertReq file), or is that a typo?
>>
>> --bill
>>
>>
>>
>>
>>> On Oct 17, 2013, at 8:45 AM, Larry Zhao  wrote:
>>>
>>> Hi, Guys,
>>>
>>>
>>> I am trying to set up an SSL proxy with Squid for one of my internal
>>> servers to visit `https://www.googleapis.com`, so that my Rails
>>> application on that server reaches `googleapis.com` via the proxy.
>>>
>>>
>>> I am new to this, so my approach is to set up an SSL transparent proxy
>>> with Squid. I built `Squid 3.3` on Ubuntu 12.04, generated an SSL
>>> key/certificate pair, and configured Squid like this:
>>>
>>>
>>>   http_port 443 transparent cert=/home/larry/ssl/server.csr
>>> key=/home/larry/ssl/server.key
>>>
>>>
>>> I left almost all other configuration at its defaults. The permissions
>>> of the directory that holds the key/crt are `drwxrwxr-x  2 proxy proxy 4096
>>> Oct 17 15:45 ssl`
>>>
>>>
>>> Back on my dev laptop, I put ` www.googleapis.com` in
>>> my `/etc/hosts` to make the call go to my proxy server.
>>>
>>>
>>> But when I try it in my Rails application, I get:
>>>
>>>
>>>   SSL_connect returned=1 errno=0 state=SSLv2/v3 read server hello A:
>>> unknown protocol
>>>
>>>
>>> I also tried with openssl on the CLI:
>>>
>>>
>>>   openssl s_client -state -nbio -connect www.googleapis.com:443 2>&1
>>> | grep "^SSL"
>>>
>>>   SSL_connect:before/connect initialization
>>>
>>>   SSL_connect:SSLv2/v3 write client hello A
>>>
>>>   SSL_connect:error in SSLv2/v3 read server hello A
>>>
>>>   SSL_connect:error in SSLv2/v3 read server hello A
>>>
>>>
>>>
>>> Where did I go wrong?
>>>
>>> --
>>>
>>> Cheers ~
>>>
>>> Larry


Re: [squid-users] Squid SSL transparent proxy - SSL_connect:error in SSLv2/v3 read server hello A

2013-10-17 Thread Bill Houle
Did you really point the Cert to the CSR (CertReq file), or is that a typo?

--bill




> On Oct 17, 2013, at 8:45 AM, Larry Zhao  wrote:
>
> Hi, Guys,
>
>
> I am trying to set up an SSL proxy with Squid for one of my internal
> servers to visit `https://www.googleapis.com`, so that my Rails
> application on that server reaches `googleapis.com` via the proxy.
>
>
> I am new to this, so my approach is to set up an SSL transparent proxy
> with Squid. I built `Squid 3.3` on Ubuntu 12.04, generated an SSL
> key/certificate pair, and configured Squid like this:
>
>
>http_port 443 transparent cert=/home/larry/ssl/server.csr
> key=/home/larry/ssl/server.key
>
>
> I left almost all other configuration at its defaults. The permissions
> of the directory that holds the key/crt are `drwxrwxr-x  2 proxy proxy 4096
> Oct 17 15:45 ssl`
>
>
> Back on my dev laptop, I put ` www.googleapis.com` in
> my `/etc/hosts` to make the call go to my proxy server.
>
>
> But when I try it in my Rails application, I get:
>
>
>SSL_connect returned=1 errno=0 state=SSLv2/v3 read server hello A:
> unknown protocol
>
>
> I also tried with openssl on the CLI:
>
>
>openssl s_client -state -nbio -connect www.googleapis.com:443 2>&1
> | grep "^SSL"
>
>SSL_connect:before/connect initialization
>
>SSL_connect:SSLv2/v3 write client hello A
>
>SSL_connect:error in SSLv2/v3 read server hello A
>
>SSL_connect:error in SSLv2/v3 read server hello A
>
>
>
> Where did I go wrong?
>
> --
>
> Cheers ~
>
> Larry


Re: [squid-users] Exchange 2010 and 502 Bad Gateway

2013-08-29 Thread Bill Houle


On 8/23/2013 2:33 AM, Amos Jeffries wrote:

On 23/08/2013 8:18 p.m., Bill Houle wrote:
For the next in my continuing Exchange saga, let's talk 502 errors. 
I've got a couple different instances.


1) ActiveSync sends periodic 'Ping' requests to implement its "server 
push" feature.


potential problem #1: what type of keep-alive request? the old 
HTTP/1.0 "Keep-Alive:" header is deprecated, not supported by Squid 
and does not actually work most places anyway.


Requests are HTTP 1.1 style.

It uses a back-off algorithm to eventually settle on a timing value 
that it knows the network can support:


potential problem #2: are they using HTTP/1.1 1xx status codes from 
the server as this sync ping or HTTP/1.0 simple request/reply pairs?


Keeping in mind that this is Microsoft after all, no, it looks like they 
do not do much handling of the status codes. Either a 200 OK is received 
and it keeps listening, or all others trigger a sync and a timing 
adjustment.


Squid older than 3.2 do not support the 1xx status response. So is 
there any HTTP/1.0 software along the network path? (including Squid 
up to version 3.1).


Not in this case, but to your point, this is not a guarantee for all cases.

This is where we come back to the whole design of this being a 
terrible way to operate.


Oh well.

But enough about ActiveSync...

2) Next problem is OWA (WebMail). OWA is designed to mimic Outlook, 
so if Outlook can support 10Meg attachments, so can OWA. A user tries 
to send a large attachment... 


When I raised this issue, it was basically a repeat of a similar 
question posted on this list last year:


http://www.squid-cache.org/mail-archive/squid-users/201209/0272.html

The answer at the time was the expected "Squid doesn't care about size". 
And it doesn't. But there was never an actual resolution from the 
standpoint of making Exchange work properly. In case anyone else is 
interested in the solution, I have to thank kiphat@singleuser. He broke 
out wireshark and discovered that SSL 2.0 key negotiation was breaking 
the connection.


http://singleuser.blogspot.com/2013/05/exchange-owaoutlook-anywhere-proxy-with.html?m=1

When SSL 3.0 was forced on the Squid cache_peer, all was right with the 
world. We made the same change and now appear to be in a similar state 
of nirvana.
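For anyone searching the archive later, the change amounts to something like this in squid.conf (peer host and name are hypothetical; sslversion=3 is the value that selects SSLv3 among Squid's cache_peer SSL options):

```
cache_peer exchange.example.local parent 443 0 no-query originserver \
    ssl sslversion=3 sslflags=DONT_VERIFY_PEER name=exchangeCAS
cache_peer_access exchangeCAS allow all
```

This is a sketch, not a drop-in config; in particular DONT_VERIFY_PEER trades away certificate validation on the peer connection.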


--bill




[squid-users] Exchange 2010 and 502 Bad Gateway

2013-08-23 Thread Bill Houle
For the next in my continuing Exchange saga, let's talk 502 errors. I've 
got a couple different instances.


1) ActiveSync sends periodic 'Ping' requests to implement its "server 
push" feature. If I understand the process correctly, the client sends 
an empty (Content-Length: 0) keep-alive HTTP request and tries to see 
how long the server+network honor the session. It uses a back-off 
algorithm to eventually settle on a timing value that it knows the 
network can support: if the keep-alive expires cleanly, they up the ante 
and repeat; if the HTTP session aborts, they drop it down to the 
previous success and lock in the refresh rate. From that point forward, 
they've got a sync window and continue to issue Pings at that duration. 
That way, if the Ping aborts, it is a signal that a 'Sync' is needed 
because "server push" has new data.
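The back-off described above can be sketched as a toy model. This is an assumption about the observed behavior, not Microsoft's published algorithm; the step size and caps are invented for illustration:

```shell
# Toy model of the ActiveSync heartbeat back-off: raise the interval
# after a clean expiry, fall back toward the floor after an abort.
next_heartbeat() {
    # $1 = current interval in seconds
    # $2 = 1 if the last Ping expired cleanly, 0 if the session aborted
    cur=$1 ok=$2
    step=60 floor=60 ceiling=1740
    if [ "$ok" -eq 1 ]; then
        next=$((cur + step))                       # up the ante
        [ "$next" -gt "$ceiling" ] && next=$ceiling
    else
        next=$((cur - step))                       # drop back and lock in
        [ "$next" -lt "$floor" ] && next=$floor
    fi
    echo "$next"
}

next_heartbeat 120 1   # clean expiry: prints 180
next_heartbeat 120 0   # abort: prints 60
```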


What I'm actually seeing is that the system is never able to settle on a 
consistent keep-alive sync window as MS might like. The Ping, or string 
of Pings, might last minutes or could only be seconds. When the Ping 
ultimately fails, the system does a Sync even though there may be 
nothing new. The end result is that it is less like "server push" and 
more like polling at a variable rate.


The users don't really notice or care since they still get their updates
promptly. It's hardly catastrophic for me, but I could envision the
variable-polling behavior becoming slightly more taxing as the number of
users scales upward. I'm curious whether there's any Squid debug I can
enable that might reveal why the session durations vary so much. At the
11,2 level, the only thing I see is:


2013/08/19 00:46:51 kid1| WARNING: HTTP: Invalid Response: No object data received
for https://mail.domain.com/Microsoft-Server-ActiveSync?User=user&DeviceId=ApplF4KKR4GLF199&DeviceType=iPad&Cmd=Ping
  AKA
mail.domain.com/Microsoft-Server-ActiveSync?User=user&DeviceId=ApplF4KKR4GLF199&DeviceType=iPad&Cmd=Ping

To which Squid replies back to the client as 502 Bad Gateway. 
X-Squid-Error is ERR_ZERO_SIZE_OBJECT.


2) Next problem is OWA (WebMail). OWA is designed to mimic Outlook, so 
if Outlook can support 10Meg attachments, so can OWA. A user tries to 
send a large attachment. Unlike the ActiveSync problem I previously 
posted about, UploadReadAhead does not seem to enter into the equation - 
possibly because the POST is redirected to an /EWS/ proxy. It happily 
chunks well past the ActiveSync threshold, but at some point the 
connection may still fail:


2013/08/21 07:41:07.616 kid1| http.cc(1172) readReply: local=proxy.IP:42891 
remote=Exchange.IP:443 FD 39 flags=1: read failure: (32) Broken pipe.

To which Squid replies back to the client as 502 Bad Gateway. 
X-Squid-Error is ERR_READ_ERROR 104.


I know Squid doesn't touch the data, and thus doesn't care about 
transaction size. But is there anything more I can do to minimize all 
possible drops & connection timeouts, particularly with large POSTs? I'm 
not saying the drops are Squid's fault, I just want to idiot-proof the 
setup on this end as much as possible.
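Not an authoritative answer, but the squid.conf timeout knobs that govern these paths are worth reviewing; the values below are illustrative examples, not recommendations:

```
read_timeout 15 minutes              # server-side read inactivity
request_timeout 5 minutes            # waiting for the complete client request
persistent_request_timeout 2 minutes # idle time on a kept-alive client conn
pconn_timeout 1 minute               # idle server-side persistent connections
```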


3) Final example is RPC-over-HTTPS.  I routinely see 502s on "connection 
reset by peer" (RSTs seem to be par for the course on Windows systems). 
But I've also seen ERR_READ_ERROR 104 on a "No error" error.


2013/08/19 21:09:37.239 kid1| http.cc(1172) readReply: local=proxy.IP:58798 
remote=Exchange.IP:443 FD 44 flags=1: read failure: (0) No error..


What could this possibly indicate?

--bill



Re: [squid-users] Exchange ActiveSync HTTP 413

2013-08-22 Thread Bill Houle
I still don't have my answer as to why it seemed to work internally but
not proxied. The extra debug in the 3.3 release (thanks Amos) was
certainly helpful, but not enlightening. Frankly, knowing what I now
know, I question the original test result. But getting a definitive
answer has become more of an academic exercise at this point, so I
can offer closure on the topic:

Exchange 2010 IIS sets a maxRequestLength value that is used
for /OWA and other virtual directories. Depending on where you look,
this could be anywhere from 30 MB to "unlimited". Yet if maxRequestLength
is exceeded, a 403 is generated. But in the case of the /ActiveSync
virtual directory, the UploadReadAhead value seems to trump
this and will generate a 413. UploadReadAhead is set to 48 KB by
default.

Increasing this value allowed successively larger EAS transactions to
occur. MS warns that changing this value is a potential DoS vector, so
they intentionally kept it low. While mobile users don't typically
*create* large messages that will hit this limit, it is not uncommon
to add to already-large discussions or forward a large attachment
originated by someone else, necessitating a bump.
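For reference, the commonly cited way to raise it is via appcmd on the CAS; the site path and size below are examples, so treat this as a sketch rather than Microsoft's official guidance:

```
%windir%\system32\inetsrv\appcmd.exe set config "Default Web Site/Microsoft-Server-ActiveSync" -section:system.webServer/serverRuntime /uploadReadAheadSize:1048576 /commit:apphost
```

1048576 (1 MB) is an arbitrary example; the DoS warning above applies to whatever value you choose.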

--bill



On Aug 18, 2013, at 6:07 PM, Amos Jeffries  wrote:

> On 19/08/2013 4:26 a.m., Bill Houle wrote:
>> Thanks Amos, perfectly targeted. Now I guess I'll be taking my troubles off 
>> the Squid list. The output clearly shows that it is IIS that is returning 
>> the 413. The HLB is not IIS, so obviously it is coming from the Exchange/CAS 
>> level. But if anyone can hazard a guess why the CAS might be inclined to 
>> behave when not proxied but reject under Squid, I'm all ears...
>
> Ar. Okay your Squid is too old to log the request headers being received and 
> sent by Squid. Are you able to upgrade to the 3.3 RPM provided by Eliezer?
> http://wiki.squid-cache.org/KnowledgeBase/CentOS
>
> "debug_options 11,2" with that newer version will dump you out a full trace 
> of the HTTP request and reply on both connections.
>
> Amos
>


Re: [squid-users] Exchange ActiveSync HTTP 413

2013-08-18 Thread Bill Houle
Thanks Amos, perfectly targeted. Now I guess I'll be taking my troubles 
off the Squid list. The output clearly shows that it is IIS that is 
returning the 413. The HLB is not IIS, so obviously it is coming from 
the Exchange/CAS level. But if anyone can hazard a guess why the CAS 
might be inclined to behave when not proxied but reject under Squid, I'm 
all ears...


--bill




2013/08/18 09:15:01.193| processReplyHeader: key 
'167D5E46E0E618965373B336E14716E9'

2013/08/18 09:15:01.193| GOT HTTP REPLY HDR:
-
HTTP/1.1 413 Request Entity Too Large^M
Content-Type: text/html^M
Server: Microsoft-IIS/7.5^M
X-Powered-By: ASP.NET^M
Date: Sun, 18 Aug 2013 16:14:24 GMT^M
Connection: close^M
Content-Length: 67^M
^M
The page was not displayed because the request entity is too large.
--




On 8/18/2013 2:27 AM, Amos Jeffries wrote:

On 18/08/2013 6:06 a.m., Bill Houle wrote:
Greetings! We have a Squid 3.1.10 (installed via yum on 64-bit CentOS 6) that
we are using as a reverse proxy for Exchange. OWA, EWS, and RPC-over-HTTPS
seem to be operating without incident, but we have run into "request too
large" HTTP 413 errors with certain "large" ActiveSync POST messages from
mobile phones. iPhone and Android, equal opportunity.

To be clear, these large messages really aren't that large - we're talking
kilobytes, not megabytes. But they generate a 413 error and stay stuck in
the phone's outbox. Other (smaller) messages sent afterward sidestep the
blockage and go through.

Our Exchange 2010 is a dual Client Access Server DAG fronted by a
hardware-based network load balancer. Squid points to the HLB, the HLB to
the DAG, and ultimately to the active CAS. If we run the same tests
internally (i.e., injecting the message at the HLB) everything goes through
fine. This would seem to indicate that the source of the 413 is the proxy
itself. But per the squid config (below) we should be running at "unlimited"
request size, so I'm not sure why a 413 would be thrown.

The log snippet below should show a sync transaction from an iPhone
followed by a failed "large" message send attempt. This is followed by a
successful send of a smaller message - so we know a POST works - and again,
a failed retry of the one that still remains queued.

I tried to correlate to cache.log running as "-k debug" but it is difficult
with all the traffic.

Any ideas?


Try "debug_options 33,1 11,9" instead.

Amos




[squid-users] Exchange ActiveSync HTTP 413

2013-08-17 Thread Bill Houle

Greetings! We have a Squid 3.1.10 (installed via yum on 64b CentOS6) that
we are using as reverse proxy for Exchange. OWA, EWS, and RPC-over-HTTPS
seem to be operating without incident, but we have run into "request too
large" HTTP 413 errors with certain "large" ActiveSync POST messages from
mobile phones. iPhone and Android, equal opportunity.

To be clear, these large messages really aren't that large - we're
talking kilobytes, not megabytes. But they generate a 413 error and stay stuck in
the phone's outbox. Other (smaller) messages sent afterward sidestep the
blockage and go through.

Our Exchange 2010 is dual Client Access Server DAG fronted by a hardware-
based network load balancer. Squid points to the HLB, the HLB to the DAG,
and ultimately to the active CAS. If we run the same tests internally (ie,
injecting the message at the HLB) everything goes thru fine. This would
seem to indicate that the source of the 413 is the proxy itself. But per
the squid config (below) we should be running at "unlimited" request size,
so I'm not sure why 413 would be thrown.

The log snippet below should show a sync transaction from an iPhone
followed by a failed "large" message send attempt. This is followed by a
successful send of a smaller message - so we know a POST works - and again,
a failed retry of the one that still remains queued.

I tried to correlate to cache.log running as "-k debug" but it is difficult
with all the traffic.

Any ideas?

--bill




visible_hostname mail.verance.com
via off
redirect_rewrites_host_header off
forwarded_for transparent
ignore_expect_100 on
ssl_unclean_shutdown on
request_body_max_size 0
#client_request_buffer_max_size 5 MB
maximum_object_size_in_memory 128 KB
cache_mem 32 MB





1376678951.412   1259 IP.IP.IP.IP TCP_MISS/000 0 POST
https://mail.verance.com/Microsoft-Server-ActiveSync? -
FIRST_UP_PARENT/EXCH-Mail -
1376678951.562    169 IP.IP.IP.IP TCP_MISS/200 446 POST
https://mail.verance.com/Microsoft-Server-ActiveSync? -
FIRST_UP_PARENT/EXCH-Mail application/vnd.ms-sync.wbxml
1376679016.869    940 IP.IP.IP.IP TCP_MISS/413 349 POST
https://mail.verance.com/Microsoft-Server-ActiveSync? -
FIRST_UP_PARENT/EXCH-Mail text/html
1376679037.864    806 IP.IP.IP.IP TCP_MISS/200 184 POST
https://mail.verance.com/Microsoft-Server-ActiveSync? -
FIRST_UP_PARENT/EXCH-Mail -
1376679038.409  84897 IP.IP.IP.IP TCP_MISS/000 0 POST
https://mail.verance.com/Microsoft-Server-ActiveSync? -
FIRST_UP_PARENT/EXCH-Mail -
1376679038.623    214 IP.IP.IP.IP TCP_MISS/200 497 POST
https://mail.verance.com/Microsoft-Server-ActiveSync? -
FIRST_UP_PARENT/EXCH-Mail application/vnd.ms-sync.wbxml
1376679039.514    766 IP.IP.IP.IP TCP_MISS/413 349 POST
https://mail.verance.com/Microsoft-Server-ActiveSync? -
FIRST_UP_PARENT/EXCH-Mail text/html




Re: [squid-users] How to modify the process owner name in syslog

2013-01-21 Thread Bill Yuan
Hi Eliezer,

Thanks for your reply.

I understand, but currently I am still using Squid 2.7; it is good
enough for me.

I am still trying to find out whether I can change the name in the
syslog, like below:

Jan 21 08:09:10 192.168.0.1 *squid[12345]*: log message

I just want to hide the "squid[12345]".

Does anyone know how?

Thanks.

Best Regards,


On Mon, Jan 21, 2013 at 9:22 PM, Eliezer Croitoru  wrote:
> Hey Bill,
>
> Since squid 2.7 is not maintained anymore I doubt you will get much support
> about it but if you have the relevant settings you have used maybe someone
> can help you.
>
> Regards,
> Eliezer
>
>
> On 1/21/2013 12:09 PM, Bill Yuan wrote:
>>
>> Hi all,
>> I just finished the configuration on my Squid 2.7, making it send all the
>> access logs to an external syslog server. It is working properly.
>>
>> Thanks very much for creating such nice software. But I want to know
>> whether I can change the name in the syslog, like below:
>>
>> Jan 21 08:09:10 192.168.0.1 *squid[12345]*: log message
>>
>> And when I trigger the logger via the command line, I get another syslog
>> record like below:
>>
>> Jan 21 08:09:10 192.168.0.1 root: message via command line
>>
>> So my question is whether I can change the "process name" in the system
>> log, or just not show it.
>>
>>
>> Thanks in advance.  :)
>>
>


[squid-users] How to modify the process owner name in syslog

2013-01-21 Thread Bill Yuan
Hi all,
I just finished the configuration on my Squid 2.7, making it send all the
access logs to an external syslog server. It is working properly.

Thanks very much for creating such nice software. But I want to know
whether I can change the name in the syslog, like below:

Jan 21 08:09:10 192.168.0.1 *squid[12345]*: log message

And when I trigger the logger via the command line, I get another syslog
record like below:

Jan 21 08:09:10 192.168.0.1 root: message via command line

So my question is whether I can change the "process name" in the system
log, or just not show it.


Thanks in advance.  :)
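The tag is set by Squid itself when it opens syslog, so 2.7 offers no knob for it. If the receiving host runs rsyslog, one possible workaround (a sketch in legacy rsyslog syntax; file path and the assumption that the log host runs rsyslog are mine) is to strip the tag when storing the line:

```
# Store squid messages without the program tag, then discard them
# from the normal log path.
$template NoTag,"%timegenerated% %HOSTNAME% %msg%\n"
if $programname == 'squid' then /var/log/squid-remote.log;NoTag
& ~
```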


[squid-users] Help! Jane... Stop this crazy thing! lol

2011-09-23 Thread Bill Arlofski
unsubscribe


[squid-users] squid failing with downstream proxy, yet Apache works

2011-03-15 Thread Bill DeGan
We have been using Squid in reverse proxy mode for several weeks now
and it's been working well.

Lately we have remote users behind a transparent proxy who hang when
trying to access a particular page.

Going through cache.log, all I see for every connection is "ALLOWED",
but I do see lines like this:

2011/03/15 09:47:20| clientReadBody: start fd=48 body_size=97
in.offset=0 cb=0x450740 req=0x1cf9dd40
2011/03/15 09:47:20| clientProcessBody: start fd=48 body_size=97
in.offset=0 cb=0x450740 req=0x1cf9dd40
2011/03/15 09:47:20| clientProcessBody: start fd=48 body_size=97
in.offset=97 cb=0x450740 req=0x1cf9dd40
2011/03/15 09:47:20| clientProcessBody: end fd=48 size=97 body_size=0
in.offset=0 cb=0x450740 req=0x1cf9dd40
2011/03/15 09:47:20| The reply for POST
http://IP_ADDRESS/services/forward/jcore_security_check is ALLOWED,
because it matched 'all'

I'm not sure whether clientProcessBody indicates a problem or not.

Another group wants to replace Squid with an Apache reverse proxy. They
tried it out this morning and it didn't have any problems with the
remote user and the downstream proxy server.

Here are my squid_conf settings:


auth_param basic program /usr/lib64/squid/ncsa_auth  /etc/squid/squid_passwd
auth_param basic realm   Ericsson.  For support email
performa...@ericsson.com . A login
auth_param basic credentialsttl 1 hours
authenticate_ttl 1 hour
authenticate_ip_ttl 1 hour
external_acl_type mysession ttl=10 children=5 negative_ttl=0 %LOGIN
%PATH /usr/local/bin/ckuser.pl
acl mysession external mysession %LOGIN %PATH
acl strt1 url_regex [-i] ^http://www.ericssonperformance.com$
acl strt2 url_regex [-i] ^http://129.192.172.19$
acl good_src url_regex -i \.php 129.192.172.19\/$ www.ericssonperformance.com\/$
acl all src all
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 443 # https
acl Safe_ports port 1025-65535  # unregistered ports
acl ncsa_users proxy_auth REQUIRED
acl CONNECT method CONNECT
deny_info ERR_ACCESS_DENIED  Safe_ports
http_access deny !Safe_ports
deny_info ERR_ACCESS_DENIED  ncsa_users
http_access allow mysession ncsa_users
http_access deny all
 http_reply_access allow all
icp_access allow localnet
icp_access deny all
reply_body_max_size 0 allow all
acl_uses_indirect_client on
http_port 10.102.16.101:80 accel defaultsite=129.192.172.19 vhost
forwarded_for on
cache_peer  10.202.16.117 parent 80 0 no-query
originserver name=ADMIN
cache_peer_access ADMIN allow all
cache_peer  10.202.16.37 parent 80 0 no-query
originserver name=WPP1
cache_peer_access WPP1 allow all
cache_peer  10.202.16.40 parent 80 0 no-query
originserver name=WPP2
cache_peer_access WPP2 allow all
hierarchy_stoplist cgi-bin ?
cache_dir null /tmp
access_log /var/log/squid/access.log squid
debug_options ALL,1 33,2
log_fqdn off
url_rewrite_program /usr/local/bin/rewrite.pl
url_rewrite_children 20
url_rewrite_concurrency 0
url_rewrite_host_header on
redirector_bypass off
location_rewrite_program /usr/local/bin/rewrite.pl
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
cache deny all
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320
negative_ttl 0 minutes
positive_dns_ttl 1 minutes
acl shoutcast rep_header X-HTTP09-First-Line ^ICY.[0-9]
upgrade_http0.9 deny shoutcast
via on
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
refresh_stale_hit 0 seconds
header_access Accept allow all
header_access Accept-Encoding allow all
header_access Accept-Language allow all
header_access Authorization allow all
header_access Cache-Control allow all
header_access Content-Disposition allow all
header_access Content-Encoding allow all
header_access Content-Length allow all
header_access Content-Location allow all
header_access Content-Range allow all
header_access Content-Type allow all
header_access Cookie allow all
header_access Expires allow all
header_access Host allow all
header_access If-Modified-Since allow all
header_access Location allow all
header_access Range allow all
header_access Referer allow all
header_access Set-Cookie allow all
header_access WWW-Authenticate allow all
header_access All deny all
client_persistent_connections off
always_direct allow all
check_hostnames off
forwarded_for on
coredump_dir /var/spool/squid

Any help would be appreciated.

thanks


[squid-users] Re: No authentication + basic authentication

2010-10-21 Thread Bill Filt

> Can anyone give an example configuration that allows one subnet (eg. for 
> users) to access squid using basic authentication while another subnet (eg. 
> for servers) doesn't require authentication?
> 
> Thx
> 
> 
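A minimal squid.conf sketch of such a split (the subnets and the auth-helper path are hypothetical): the servers subnet is allowed straight through, while the users subnet must pass basic authentication:

```
# Servers subnet: no authentication required.
acl servers src 192.168.10.0/24
# Users subnet: must authenticate.
acl users src 192.168.20.0/24
auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/passwd
acl authed proxy_auth REQUIRED
http_access allow servers
http_access allow users authed
http_access deny all
```

Rule order matters here: the servers rule is evaluated first, so those clients never trigger an authentication challenge.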


Re: [squid-users] TPROXY Routing

2010-04-02 Thread bill

Henrik N. has got to be as dense as any forest tree.

I've asked him twice, I've asked him thrice, I swear I'd almost pay a  
price.


I have no interest in squibs email trists, please take me off your  
mailing list.


Bill
785-887-6966
b...@billfair.com



On Apr 2, 2010, at 1:13 PM, Henrik Nordström  
 wrote:



On Fri, 2010-04-02 at 09:47 -0700, Kurt Sandstrom wrote:


2 things I may try this evening... grab tcp traffic from eth0 and br0
to see if redirected port 3129 is being routed out of the system
instead of to the localhost. Then try (a shot in the dark) changing:


Which MAC address is being used on the packets sent out?

I have a feeling the packets never get diverted off the bridge.. if so,
then the MAC is unchanged when the packet is sent out.

If the packet did get diverted from the bridge to routing, then the
source MAC of the packets when leaving the server will be that of br0.

Another sign to look for is whether the IP TTL gets decremented. If the
packet is being bridged, the TTL stays the same; if it's being routed,
the TTL is decremented by one.

Regards
Henrik




Re: [squid-users] TPROXY Routing

2010-04-01 Thread bill

PLEASE HELP!

I have been to the squid site and unsubscribed to every list, I have  
asked Henrick several times with no answer. And STILL I get these  
emails about your business.


Won't one of you PLEASE tell ne how to get off of your mailing list?

Bill
785-887-6966
www.billfair.com


On Apr 1, 2010, at 3:28 PM, Henrik Nordström  
 wrote:



On Thu, 2010-04-01 at 11:10 -0700, Kurt Sandstrom wrote:
It is set up with 2 NICs as a bridge. The routing I was referring to is
only internal to the box.. i.e. through iptables.


A bridge... I haven't tried TPROXY in bridge mode, only router mode.

Due to the complexity involved I would recommend you first try TPROXY in
router mode, then move on to extend it to bridge mode. And remember that
you need to divert the return traffic as well in the bridge, or it won't
work.

Regards
Henrik




[squid-users] POST denied?

2010-02-16 Thread Bill Stephens
All,

I'm attempting to configure Squid to proxy my requests to a Web
Service. I can access it via a GET request in my browser, but a request
submitted via Java that has been configured to use Squid as
its proxy is denied:

Execute:Java13CommandLauncher: Executing
'/usr/lib/jvm/java-1.5.0-sun-1.5.0.18/jre/bin/java' with arguments:
'-Djava.endorsed.dirs=extensions/endorsed'
'-Dhttp.proxyPort=3128'
'-Dhttp.proxyHost=127.0.0.1'

1266334195.708  1 127.0.0.1 TCP_DENIED/411 1949 POST
http://cadsr-dataservice.nci.nih.gov:80/wsrf/services/cagrid/CaDSRDataService
- NONE/- text/html

Thinking that I had messed up my config, I returned to the out-of-the-box
squid.conf and got the same error.

Thoughts?


[squid-users] Configuring Squid to proxy by protocol (only http)?

2010-02-15 Thread Bill Stephens
All,

My institution has a proxy.pac configuration that proxies HTTP traffic
but not HTTPS. This works fine in a browser. When I configure Java to
use the proxy, it connects to HTTP URLs just fine but barfs on HTTPS,
because the proxy changes the protocol on secure requests to HTTP and
our Web Services do not like that.

Can a Squid proxy be configured as follows?
1. HTTP traffic: forward to existing proxy
2. HTTPS traffic: direct connect

Thanks,
Bill S.
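One squid.conf sketch of that split (the parent host name is hypothetical): plain HTTP is forced through the existing proxy, while CONNECT (HTTPS) goes direct:

```
cache_peer oldproxy.example.edu parent 3128 0 no-query default
acl CONNECT method CONNECT
always_direct allow CONNECT     # HTTPS tunnels bypass the parent
never_direct allow !CONNECT     # everything else must use the parent
```

This preserves the end-to-end TLS handshake for secure requests, which is what the Java Web Services above appear to require.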


Re: [squid-users] MSN causing a breach.. help!

2010-01-12 Thread Bill Jacqmein
Honestly, the easiest technical fix is to deny access to the paid proxy
site at the firewall or with a Squid ACL.

The best long-term fix is an enforced security policy (I think I might be
too optimistic).
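A sketch of the AND rule under discussion (the domain list here is illustrative and certainly incomplete, which is exactly the problem raised below):

```
acl msnport port 1863
acl msndoms dstdomain .messenger.msn.com .messenger.live.com
acl CONNECT method CONNECT
http_access allow CONNECT msnport msndoms
http_access deny msnport
```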

On Tue, Jan 12, 2010 at 6:56 AM, Roland Roland  wrote:
> I have the following config set to allow MSN Messenger to connect through my
> squid:
>
> acl msnport port 1863
> http_access allow connect  msnport
> http_access allow msnport
>
> I have a security breach where one of the users may be using port 1863 to
> reach a paid proxy that he acquired.
> Is there a way to allow port 1863 to only work with MSN Messenger
> destinations? I've already denied access to that domain and warned the user,
> but I want a more permanent solution.
> The simplest way possible is to do an AND access rule with MSN's domains, but
> there's a vast list of domains that would need to be added and I don't have
> them all.
> So is there another way?
>
> PS: I'm using the Adium client to connect to MSN, so when using MSN's MIME
> type it's not working; not sure why...
>
>
>


RE: [squid-users] any work arounds for bug 2176

2009-12-22 Thread Bill Allison
Amos (or anyone)

I'm trying to marry up debug output with tcpdump traces taken on the Squid box,
but I want greater precision. Is there a debug_options setting that will cause
entries in cache.log to be timestamped in microseconds? If not, is there
something I can change (in debug.c?) to make that happen (bearing in mind I'm
an absolute novice C coder)?

Bill A.

-Original Message-
From: vincent.blon...@ing.be [mailto:vincent.blon...@ing.be] 
Sent: 22 December 2009 07:36
To: squid-users@squid-cache.org; squ...@treenet.co.nz
Cc: Bill Allison; vincent.blon...@ing.be
Subject: RE: [squid-users] any work arounds for bug 2176

 
Hello all,

Just to inform you, I get exactly the same problem. At first I thought it
was a problem with WWW-Authenticate, but it is not ONLY that.

Here is the reference to my first post:
http://www.squid-cache.org/mail-archive/squid-users/200912/0029.html

I also get the same message ( httpReadReply: Request not yet fully sent )
when sending some POST requests bigger than x bytes to an IIS server.

I applied the patch from the bugzilla (2176) on a 2.7.4. The user no
longer receives the traditional 'Page cannot be displayed' from Internet
Explorer, but the browser freezes instead :(-

below the current config ...


client_persistent_connections on
server_persistent_connections on
acl protime url_regex -i ^http://services.group.intranet/rec
acl protime_src src all
cache_peer 1.2.3.4 parent 80 0 forceddomain=services.group.intranet
originserver proxy-only no-query no-digest connection-auth=on login=PASS
cache_peer_access 1.2.3.4 allow protime


I am certainly interested in a definitive solution, so if I can be part
of the tests, just say so.


many thks
Vincent.

-Original Message-
From: Bill Allison [mailto:bill.alli...@bsw.co.uk] 
Sent: Friday, December 18, 2009 10:47 AM
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] any work arounds for bug 2176

Reposted for info to the list, without the attachments that cause the
list to bounce the message

-Original Message-
From: Bill Allison 
Sent: 18 December 2009 09:43
To: 'Amos Jeffries'; Brett Lymn
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] any work arounds for bug 2176

"I  get the same error as Brett only when the body of the post is much
greater than that which causes the post to fail."

Correction after further testing...

I  get the same error as Brett only when the body of the post is much
greater than that which causes the post to fail, and even then only
sometimes, in repeated tests with the same file being uploaded. 

Other times the browser reports "The connection was reset" and tcpdump
shows that the proxy sent a FIN to the server, then to the client, in
response to the second 401 from the server. The server closes the
connection but the client continues sending the POST, and the proxy then
sends the client a string of RSTs.

For info, "Invalid Verb" is issued by http.sys in IIS 6.0 in response to
receiving a header that is not strictly RFC-compliant (including
truncated ones).

Attached as requested is my squid.conf and tcpdumps of the Invalid Verb
and RST failure cases.

Unlike Brett I'm very much a novice C coder but I'm perfectly happy to
patch, compile and test if it helps generate a solution.

Regards
Bill A.

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: 17 December 2009 09:10
To: Brett Lymn
Cc: Bill Allison; squid-users@squid-cache.org
Subject: Re: [squid-users] any work arounds for bug 2176

Brett Lymn wrote:
> On Wed, Dec 16, 2009 at 07:57:21AM -0600, Bill Allison wrote:
>> Sorry - that was misleading. I've had 
>> persistent_connection_after_error set on throughout my testing.
> 
> I don't have that in my config file at all so I would guess it is at 
> the default.
> 

Which is off. Now I'm confused.

>> I  get the same error as Brett only when the body of the post is much
greater than that which causes the post to fail.
>>
> 
> I only tried a large-ish document.  We did observe the same strange 
> limit that Bill has seen when we tested without the patch applied, 
> under a certain "magic" threshold the document would upload - the 
> threshold seemed to be around the 50k mark, over that threshold we 
> would just get popups.
> 
>> I'd like to correlate network traces with debug output and would 
>> appreciate suggestions as to which debug_options would include all 
>> possibly relevant info
>>
> 
> I am a C coder and may have some time to do some debugging on this 
> between christmas and new year so, Amos, if you have any thoughts or 
> hints as to where to go looking I can certainly have a stab at it.
> 

Thank you. Any help at all would be great.

I *think* the relevant code is off src/client_side_reply.cc, but what to
look for is where I'm currently stuck.

RE: [squid-users] any work arounds for bug 2176

2009-12-18 Thread Bill Allison
Reposted for info to the list, without the attachments that cause the list to 
bounce the message

-Original Message-
From: Bill Allison 
Sent: 18 December 2009 09:43
To: 'Amos Jeffries'; Brett Lymn
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] any work arounds for bug 2176

"I  get the same error as Brett only when the body of the post is much greater 
than that which causes the post to fail."

Correction after further testing...

I  get the same error as Brett only when the body of the post is much greater 
than that which causes the post to fail, and even then only sometimes, in 
repeated tests with the same file being uploaded. 

Other times the browser reports "The connection was reset" and tcpdump shows 
that the proxy sent a FIN to the server then to the client in response to the 
second 401 from the server. The server closes the connection but the client 
continues sending a POST and the proxy then sends the client a string of RSTs. 

For info "Invalid Verb" is issued by http.sys in IIS 6.0, in response to 
receiving a header that is not strictly rfc-compliant (including truncated).

Attached as requested is my squid.conf and tcpdumps of the Invalid Verb and RST 
failure cases.

Unlike Brett I'm very much a novice C coder but I'm perfectly happy to patch, 
compile and test if it helps generate a solution.

Regards
Bill A.

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: 17 December 2009 09:10
To: Brett Lymn
Cc: Bill Allison; squid-users@squid-cache.org
Subject: Re: [squid-users] any work arounds for bug 2176

Brett Lymn wrote:
> On Wed, Dec 16, 2009 at 07:57:21AM -0600, Bill Allison wrote:
>> Sorry - that was misleading. I've had 
>> persistent_connection_after_error set on throughout my testing.
> 
> I don't have that in my config file at all so I would guess it is at 
> the default.
> 

Which is off. Now I'm confused.

>> I  get the same error as Brett only when the body of the post is much 
>> greater than that which causes the post to fail.
>>
> 
> I only tried a large-ish document.  We did observe the same strange 
> limit that Bill has seen when we tested without the patch applied, 
> under a certain "magic" threshold the document would upload - the 
> threshold seemed to be around the 50k mark, over that threshold we 
> would just get popups.
> 
>> I'd like to correlate network traces with debug output and would 
>> appreciate suggestions as to which debug_options would include all 
>> possibly relevant info
>>
> 
> I am a C coder and may have some time to do some debugging on this 
> between christmas and new year so, Amos, if you have any thoughts or 
> hints as to where to go looking I can certainly have a stab at it.
> 

Thank you. Any help at all would be great.

I *think* the relevant code is off src/client_side_reply.cc, but what to look 
for is where I'm currently stuck. The keep_alive values resolved things for you 
Brett but not Bill.


The variable nature of the threshold looks like a race between the actions 
triggering the bug and the rate at which Squid is reading the request in.

AFAIK popups only occur when the client gets sent two re-auth challenges, 
which in the un-patched Squid was caused by the first half-authenticated link 
being closed by Squid before auth could complete. The second link then being 
challenged for more auth would cause the popup.

I think the next step is to find out what the difference between your two 
setups is exactly:
  * squid.conf
  * headers between Squid and the POSTing app.
  * headers between Squid and the web server.

Particularly in what reply headers are going back.  That should give us a 
little more of an idea what areas to look at.
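One way to collect those headers is with packet captures on both sides of the proxy (illustrative commands only; the interface name eth0, proxy port 3128, and origin address 1.2.3.4 are assumptions to adjust for your setup):

```
# capture client <-> proxy traffic
tcpdump -i eth0 -s 0 -w client-side.pcap port 3128
# capture proxy <-> origin-server traffic
tcpdump -i eth0 -s 0 -w server-side.pcap host 1.2.3.4 and port 80
```

The resulting pcap files can then be opened in Wireshark to compare request and reply headers side by side.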

If, as you say, the patch solved the issue but you saw the same thing 
earlier, then I suspect it's probably a squid.conf detail being overlooked.

Amos
--
Please be using
   Current Stable Squid 2.7.STABLE7 or 3.0.STABLE20
   Current Beta Squid 3.1.0.15

--
This message has been scanned for viruses and dangerous content by MailScanner, 
and is believed to be clean.



RE: [squid-users] any work arounds for bug 2176

2009-12-16 Thread Bill Allison
Sorry - that was misleading. I've had persistent_connection_after_error set on 
throughout my testing. I  get the same error as Brett only when the body of the 
post is much greater than that which causes the post to fail.

I'd like to correlate network traces with debug output and would appreciate 
suggestions as to which debug_options would include all possibly relevant info


-Original Message-----
From: Bill Allison [mailto:bill.alli...@bsw.co.uk] 
Sent: 16 December 2009 12:07
To: Amos Jeffries; squid-users@squid-cache.org
Subject: RE: [squid-users] any work arounds for bug 2176

"Do you get the same result as Brett with persistent_connection_after_error set 
to ON?"

Yes - I've had it set on throughout my testing.

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: 16 December 2009 10:46
To: squid-users@squid-cache.org
Subject: Re: [squid-users] any work arounds for bug 2176

Bill Allison wrote:
> Amos
> 
> I've done some more testing / tracing with this - one finding and one 
> question to help me do more.
> 
> My finding is that the response to a large POST varies - put simply, a small 
> (< 5Kb) POST succeeds, larger POST pops up UID/PWD request, large (> 80Kb) 
> gives the INVALID VERB error that Brett reported. There is no exact 
> borderline size. Sometimes a 6Kb upload will succeed, sometimes the max is 
> around 4Kb. This is on an otherwise idle test server. So far I've only 
> tcpdumped the first two cases and can see that in the middle case, the proxy 
> issues FIN packets to server and client just after receiving and passing on 
> the second 401 response from the server. A feature, if not a factor, common 
> to failures, at least in traces taken so far, is that transfer of the upload 
> to the server begins before receipt of the upload from the client has 
> completed. Comment please?
> 
> My question - I'd now like to marry up tcpdump traces with squid debug 
> output. Having read up on debug_options, I've used 5,6 17,6 33,6 41,6 48,6 
> 58,6 73,6 85,6 87,6 88,6. What would be a better set?
> 
> For the avoidance of doubt - I'm a rank amateur (as if you haven't already 
> guessed :-) ) but really need to find a fix or workaround, despite knowing 
> that Microsoft state that IIS NTLM authentication can not work through proxy 
> servers. Any pointers gratefully received.
> 
> Kind regards
> Bill A.
> 


Hmm, I wonder...

Do you get the same result as Brett with 
persistent_connection_after_error set to ON?

The default in all Squid is OFF which forces the connection closed on 
all 4xx replies regardless of the patched Squid now seeing it as a 
viable connection.

Amos
-- 
Please be using
   Current Stable Squid 2.7.STABLE7 or 3.0.STABLE20
   Current Beta Squid 3.1.0.15






RE: [squid-users] any work arounds for bug 2176

2009-12-16 Thread Bill Allison
"Do you get the same result as Brett with persistent_connection_after_error set 
to ON?"

Yes - I've had it set on throughout my testing.

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: 16 December 2009 10:46
To: squid-users@squid-cache.org
Subject: Re: [squid-users] any work arounds for bug 2176

Bill Allison wrote:
> Amos
> 
> I've done some more testing / tracing with this - one finding and one 
> question to help me do more.
> 
> My finding is that the response to a large POST varies - put simply, a small 
> (< 5Kb) POST succeeds, larger POST pops up UID/PWD request, large (> 80Kb) 
> gives the INVALID VERB error that Brett reported. There is no exact 
> borderline size. Sometimes a 6Kb upload will succeed, sometimes the max is 
> around 4Kb. This is on an otherwise idle test server. So far I've only 
> tcpdumped the first two cases and can see that in the middle case, the proxy 
> issues FIN packets to server and client just after receiving and passing on 
> the second 401 response from the server. A feature, if not a factor, common 
> to failures, at least in traces taken so far, is that transfer of the upload 
> to the server begins before receipt of the upload from the client has 
> completed. Comment please?
> 
> My question - I'd now like to marry up tcpdump traces with squid debug 
> output. Having read up on debug_options, I've used 5,6 17,6 33,6 41,6 48,6 
> 58,6 73,6 85,6 87,6 88,6. What would be a better set?
> 
> For the avoidance of doubt - I'm a rank amateur (as if you haven't already 
> guessed :-) ) but really need to find a fix or workaround, despite knowing 
> that Microsoft state that IIS NTLM authentication can not work through proxy 
> servers. Any pointers gratefully received.
> 
> Kind regards
> Bill A.
> 


Hmm, I wonder...

Do you get the same result as Brett with 
persistent_connection_after_error set to ON?

The default in all Squid is OFF which forces the connection closed on 
all 4xx replies regardless of the patched Squid now seeing it as a 
viable connection.

Amos
-- 
Please be using
   Current Stable Squid 2.7.STABLE7 or 3.0.STABLE20
   Current Beta Squid 3.1.0.15




RE: [squid-users] any work arounds for bug 2176

2009-12-16 Thread Bill Allison
Amos

I've done some more testing / tracing with this - one finding and one question 
to help me do more.

My finding is that the response to a large POST varies - put simply, a small (< 
5Kb) POST succeeds, larger POST pops up UID/PWD request, large (> 80Kb) gives 
the INVALID VERB error that Brett reported. There is no exact borderline size. 
Sometimes a 6Kb upload will succeed, sometimes the max is around 4Kb. This is 
on an otherwise idle test server. So far I've only tcpdumped the first two 
cases and can see that in the middle case, the proxy issues FIN packets to 
server and client just after receiving and passing on the second 401 response 
from the server. A feature, if not a factor, common to failures, at least in 
traces taken so far, is that transfer of the upload to the server begins before 
receipt of the upload from the client has completed. Comment please?

My question - I'd now like to marry up tcpdump traces with squid debug output. 
Having read up on debug_options, I've used 5,6 17,6 33,6 41,6 48,6 58,6 73,6 
85,6 87,6 88,6. What would be a better set?
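On the debug_options question, a smaller, HTTP-focused set is usually easier to line up with a packet trace. A sketch (in Squid's debug-section list, section 11 covers HTTP traffic and 33 the client side; exact numbering can differ between versions):

```
debug_options ALL,1 11,2 33,2
```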

For the avoidance of doubt - I'm a rank amateur (as if you haven't already 
guessed :-) ) but really need to find a fix or workaround, despite knowing that 
Microsoft state that IIS NTLM authentication can not work through proxy 
servers. Any pointers gratefully received.

Kind regards
Bill A.


-Original Message-
From: Bill Allison 
Sent: 10 December 2009 12:53
To: 'Brett Lymn'; Amos Jeffries
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] any work arounds for bug 2176

Hi

I finally found time to test the patch - with different results from Brett. In 
my case it made no apparent difference - I still get the UID/PWD popup. 
Attached are two wireshark traces, for the same POST attempt before and after 
patching. The traces were taken on the squid box and show client 192.0.1.145 
and webserver 192.0.1.105 traffic - both are on our LAN. Also attached is my 
squid.conf. We're still on 2.6-17 - sorry.

If there is anything you want me to try, I have this test instance available 
for a few days before it has to go live.

Thanks
Bill A.

-Original Message-
From: Brett Lymn [mailto:bl...@baesystems.com.au]
Sent: 07 December 2009 22:19
To: Amos Jeffries
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] any work arounds for bug 2176

On Mon, Dec 07, 2009 at 10:36:52PM +1300, Amos Jeffries wrote:
> 
> I think another trace of the request-reply sequence is needed to see 
> if there is anything different now and what.
> 

I do have a trace from snoop.  I don't want to post it to the list due to it 
containing details of the site we are trying to upload to.
Can I mail it to you off list?

--
Brett Lymn
"Warning:
The information contained in this email and any attached files is confidential 
to BAE Systems Australia. If you are not the intended recipient, any use, 
disclosure or copying of this email or any attachments is expressly prohibited. 
 If you have received this email in error, please notify us immediately. VIRUS: 
Every care has been taken to ensure this email and its attachments are virus 
free, however, any loss or damage incurred in using this email is not the 
sender's responsibility.  It is your responsibility to ensure virus checks are 
completed before installing any data sent in this email to your computer."






RE: [squid-users] any work arounds for bug 2176

2009-12-07 Thread Bill Allison
As the reporter of this bug, apologies Amos for not responding promptly and 
thanks Brett for doing so - my excuse is pressure of work. It's particularly on 
my conscience because we so badly need this fix. Today I have a test instance I 
can use, and I guess the next step, and my contribution, is apply the patch, 
tcpdump the result, and report back (unless Brett gets there first ;-) )

Bill A.

-Original Message-
From: Brett Lymn [mailto:bl...@baesystems.com.au] 
Sent: 07 December 2009 01:39
To: Amos Jeffries
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] any work arounds for bug 2176

On Wed, Dec 02, 2009 at 07:22:57PM +1300, Amos Jeffries wrote:
> 
> Sorry. I attached it to the bug report.
> 

I manually applied the patch - I couldn't be bothered with patch for a
simple #if removal.  The symptoms have changed.  We no longer get an
auth pop up but at the end of the upload the browser displays:

Bad Request (Invalid Verb)

and the document is not shown in the list.  So, the patch made a
difference but something else is amiss still.

-- 
Brett Lymn






[squid-users] RE: SQUID PAC-File and JAVA (1.6.11) SOLVED?

2009-08-18 Thread Bill Allison
Long post - hope some of it makes sense / helps. By coincidence, I have also 
just spent the last week trying to sort out a proxy.pac file that works for all 
of our situations - Windows road-warriors that have to use our Squids from any 
of our LANs, from VPN, and directly from the Internet to our HQ firewall/squid's 
outside interface. I too had problems with Java applications - until I realised 
that if proxy.pac returns an IP address that cannot be reverse-resolved in DNS 
to a hostname, then Java triggers all sorts of weird delay-and-failure-inducing 
behaviours, e.g. attempts to resolve the proxy IP using repeated NetBIOS 
lookups with a null hostname!! So, return a hostname from the proxy.pac - e.g.

function FindProxyForURL(url, host) {
// Are we on our LAN? Check first three octets of IP address
var myIPArray = myIpAddress().split(".");
var myClassC = myIPArray[0] + "." + myIPArray[1] +"." + myIPArray[2];
switch (myClassC) {
case "192.0.1": // HQ LAN and VPN
return "PROXY 192.0.1.124:3128";
case "192.0.10": // Branch LAN
return "PROXY 192.0.10.104:3128";
case "192.0.20": // Branch LAN
return "PROXY 192.0.20.104:3128";
// ... (one case per branch LAN) ...
case "192.0.110": // Branch LAN
return "PROXY 192.0.1.124:3128";
default:
// Not on a LAN so use HQ proxy, via its external
// interface, but fall back to no proxy if that fails
// so that if we're connecting via a public access
// point, we're able to get the logon page it serves

return "PROXY proxy:12345;DIRECT";
}
}
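The branch logic above can be exercised outside the browser by factoring the address match into a pure function. This is a sketch using the example addresses from this post; in a real proxy.pac the input comes from myIpAddress(), which only exists in the browser's PAC runtime:

```javascript
// Pure version of the PAC branch logic so it can be tested in Node;
// `ip` stands in for the value myIpAddress() would return.
function proxyForIp(ip) {
  // First three octets identify the (class C) LAN
  var classC = ip.split(".").slice(0, 3).join(".");
  switch (classC) {
    case "192.0.1":   // HQ LAN and VPN
      return "PROXY 192.0.1.124:3128";
    case "192.0.10":  // Branch LAN
      return "PROXY 192.0.10.104:3128";
    default:
      // Off-LAN: HQ proxy's outside interface, falling back to DIRECT
      return "PROXY proxy:12345; DIRECT";
  }
}
```

This makes it easy to sanity-check the address table before rolling a .pac change out to clients.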

If the proxy is on an unregistered IP then any old hostname will do, provided 
it is defined in the client's hosts file.

"For example, don't try and code the wpad.dat to use its own IP address.  That 
really doesn't work in lots of situations."

For example, on a Windoze client (XP-SP3 at least) on VPN, the javascript 
function myIpAddress() will return the IP address of the *outside* of the 
tunnel (e.g. the address of the WiFi or 3G interface) and therefore prevent you 
differentiating between clients on the Internet connected to Squid via the 
outside interface of the corporate firewall and clients on the Internet 
connected to Squid via VPN. A nuisance if you have Squid configured to request 
authentication when the connection is from outside but not when it is from the 
LAN or VPN. It needs more detailed specification and careful ordering of 
access rules in squid.conf to prevent.

Also - if the proxy.pac file is on the client file-system, you must set Java 
proxy settings to use default browser settings - do not specify the location in 
the Java network settings. Then in FF specify the location in this way 
"file:///c:/windows/proxy.pac" and in IE specify it this way 
"file://c:\windows\proxy.pac" in both LAN and VPN profiles

Be warned - the above is quite new, i.e. has not yet stood the test of time!!
 
Cheers
Bill A.

-Original Message-
From: Gavin McCullagh [mailto:gavin.mccull...@gcd.ie] 
Sent: 17 August 2009 17:46
To: squid-users@squid-cache.org
Subject: Re: [squid-users] SQUID PAC-File and JAVA (1.6.11)

Hi,

On Mon, 17 Aug 2009, Volker Jahns wrote:

> We have a lot of IE clients here with a url..proxy.pac file as proxy
> configuration and without automatically finding a proxy server. Whenever we
> use SSL explorer and a JAVA program the final sync failed. If I change the
> configuration to the same manual proxy server and its port it works.

In my experience, what the Java VM can read in proxy.pac/wpad.dat files is
somewhat more limited than IE.  I'd suggest you keep a _very_ simple wpad
if at all possible.  For example, don't try and code the wpad.dat to use
its own IP address.  That really doesn't work in lots of situations.

A tcpdump/windump on the computer watching port 80 should give you an idea
whether Java is really following the proxy settings you think it should.

If you want you can post your script here.

Gavin






RE: [squid-users] JAVA Applet notinited

2008-08-25 Thread bill . allison

Hello

This *might* be the same problem as I encountered recently, that in our case 
(squid-2.6.STABLE17, no authentication) caused very slow loading of applets. 
Here is the record from our task log that shows what we did to fix it. Hope it 
helps.

"The delay was observed in Wireshark traces to be due to unsatisfied netbios 
name-service requests from client to proxy. Googling "netbios name java applet 
proxy" found chatter regarding known bugs in Java 5 still present in Java 6. 
The applet loader attempts to do a reverse lookup of a source host ip. It 
attempts firstly using DNS. That fails for our proxies as they are not in DNS. 
The applet loader then tries to resolve using netbios name services, but 
instead of using the WINS server specified for the PC's interface in the 
MSWindows Networking setup, it attempts to use the proxy, on which there is no 
netbios server, hence long delays due to request timeouts. The two possible 
workarounds are a) configure the proxies in internal DNS or b) install Samba 
netbios services on the proxies. The former is vastly easier and better and was 
immediately successful."
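To confirm that workaround (a) has taken effect, a reverse lookup of the proxy's address should now return a name (illustrative commands; 192.0.1.124 is a hypothetical proxy IP):

```
host 192.0.1.124
dig -x 192.0.1.124 +short
```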

Regards
Bill A.

 -Original Message-
From: [EMAIL PROTECTED]
Sent: 25 August 2008 16:05
To: squid-users@squid-cache.org
Subject: [squid-users] JAVA Applet notinited


   


Hello all.

My configuration:
OS: Fedora Core 8
Squid: 2.6 Stable 19
Authentication: NTLM_Auth with Active Directory

A user in my company needs to go on a website, for his job, that start a
JAVA Applet. When the user is using the proxy to access this website, he
gets in the status bar of Internet Explorer "Applet xyz notinited".

Here is the log of the java console.

java.lang.ClassNotFoundException: JavaVersionCheck.class
 at sun.applet.AppletClassLoader.findClass(Unknown Source)
 at java.lang.ClassLoader.loadClass(Unknown Source)
 at sun.applet.AppletClassLoader.loadClass(Unknown Source)
 at java.lang.ClassLoader.loadClass(Unknown Source)
 at sun.applet.AppletClassLoader.loadCode(Unknown Source)
 at sun.applet.AppletPanel.createApplet(Unknown Source)
 at sun.plugin.AppletViewer.createApplet(Unknown Source)
 at sun.applet.AppletPanel.runLoader(Unknown Source)
 at sun.applet.AppletPanel.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)
java.lang.ClassNotFoundException:
sps.wfds.client.dataEntry.XtencilApplet.class
 at sun.applet.AppletClassLoader.findClass(Unknown Source)
 at java.lang.ClassLoader.loadClass(Unknown Source)
 at sun.applet.AppletClassLoader.loadClass(Unknown Source)
 at java.lang.ClassLoader.loadClass(Unknown Source)
 at sun.applet.AppletClassLoader.loadCode(Unknown Source)
 at sun.applet.AppletPanel.createApplet(Unknown Source)
 at sun.plugin.AppletViewer.createApplet(Unknown Source)
 at sun.applet.AppletPanel.runLoader(Unknown Source)
 at sun.applet.AppletPanel.run(Unknown Source)
 at java.lang.Thread.run(Unknown Source)

If I disable the proxy settings for this user, so that he does not use the
proxy, the Java applet starts normally without any problem and the user is
able to use the website.

I don't want to add an exception for this website. I want to limit the
number of exceptions to our local network and company website.

Is there a way to allow this Java applet to work when the proxy is
configured? Is it a configuration in Squid or Internet Explorer that I
need to change? Is it a Squid problem?

Thanks

Jonathan
_





RE: [squid-users] POST + NTLM Authentica

2008-07-10 Thread bill . allison
Joe thanks, that's helpful and appreciated - we'll go 3.0. Fingers crossed.

 -Original Message-
From: [EMAIL PROTECTED]
Sent: 10 July 2008 14:53
To: squid-users@squid-cache.org
Subject: RE: [squid-users] POST + NTLM Authentica


   

 Bill,

If you want to test upgrading to 3.0 if you don't have a preference
either way, I'll test 2.7 (I'm currently using 2.6 Stable 20) as I can't
move to 3 yet because of some of the reverse proxy limitations.

Here's hoping! :)

Joe


 -Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Thursday 10 July 2008 14:35
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]; squid-users@squid-cache.org
Subject: RE: [squid-users] POST + NTLM Authentica

Thanks. Pressure of work means that it will be a few days before I can
upgrade, but I will let you know when done. What should I consider when
deciding whether to go 3.0 or 2.7?

 -Original Message-
From: [EMAIL PROTECTED]
Sent: 10 July 2008 12:44
To: Bill Allison
Cc: [EMAIL PROTECTED]; squid-users@squid-cache.org
Subject: Re: [squid-users] POST + NTLM Authentica


 


   

 
[EMAIL PROTECTED] wrote:
> Hi
>
> I think this may be the same as I reported as bug 2176
> (http://www.squid-cache.org/bugs/show_bug.cgi?id=2176) against 2.6
> STABLE17. Is it known to be fixed in 2.7 or 3.0? Apologies for asking -
> I haven't had time yet to go through the change logs for subsequent
> versions.

Ah, if that's the bug, then unknown. It's still open, primarily because we
have not been able to get a usable trace in the current code to find out
what's causing it.

If you upgrade you will either see it disappear, at which point we can
get the others to test bug 2176 by upgrading themselves, maybe closing the
issue off. Or, worst case, it remains, at which point you are using a
version in which we can support you while debugging and tracking the issue
down.

Amos

>
> Bill A.
>
>  -Original Message-
> From: [EMAIL PROTECTED]
> Sent: 10 July 2008 05:28
> To: [EMAIL PROTECTED]
> Cc: squid-users@squid-cache.org
> Subject: Re: [squid-users] POST + NTLM Authentica
>
>
>
>
>
   

 

>  --
> Luiz Felipe Ferreira wrote:
>> Guys,
>>
>> We have a squid-2.5STABLE14 with NTLM authentication.
>
> Please upgrade. 2.5 has been obsolete for several years.
>
> We currently support primarily 3.0stable7, and 2.7stable3 for those who
> can't upgrade to 3.0x.
>
> Either of which is a good upgrade from 2.5.
>
> Amos
>  --
> Please use Squid 2.7.STABLE3 or 3.0.STABLE7
>
>
>
>


 --
Please use Squid 2.7.STABLE3 or 3.0.STABLE7










RE: [squid-users] POST + NTLM Authentica

2008-07-10 Thread bill . allison
Thanks. Pressure of work means that it will be a few days before I can   
upgrade, but I will let you know when done. What should I consider when   
deciding whether to go 3.0 or 2.7?

 -Original Message-
From: [EMAIL PROTECTED]
Sent: 10 July 2008 12:44
To: Bill Allison
Cc: [EMAIL PROTECTED]; squid-users@squid-cache.org
Subject: Re: [squid-users] POST + NTLM Authentica


   

[EMAIL PROTECTED] wrote:
> Hi
>
> I think this may be the same as I reported as bug 2176
> (http://www.squid-cache.org/bugs/show_bug.cgi?id=2176) against 2.6
> STABLE17. Is it known to be fixed in 2.7 or 3.0? Apologies for asking -
> I haven't had time yet to go through changes logs for subsequent
> versions.

Ah, if that's the bug, then unknown. It's still open, primarily because we
have not been able to get a usable trace in the current code to find out
what's causing it.

If you upgrade you will either see it disappear, at which point we can
get the others to test bug 2176 by upgrading themselves, maybe closing the
issue off. Or, worst case, it remains, at which point you are using a
version in which we can support you while debugging and tracking the issue
down.

Amos

>
> Bill A.
>
>  -Original Message-
> From: [EMAIL PROTECTED]
> Sent: 10 July 2008 05:28
> To: [EMAIL PROTECTED]
> Cc: squid-users@squid-cache.org
> Subject: Re: [squid-users] POST + NTLM Authentica
>
>
>
>
>   

>  --
> Luiz Felipe Ferreira wrote:
>> Guys,
>>
>> We have a squid-2.5STABLE14 with NTLM authentication.
>
> Please upgrade. 2.5 has been obsolete for several years.
>
> We currently support primarily 3.0stable7, and 2.7stable3 for those who
> can't upgrade to 3.0x.
>
> Either of which is a good upgrade from 2.5.
>
> Amos
>  --
> Please use Squid 2.7.STABLE3 or 3.0.STABLE7
>
>
>
>


 --
Please use Squid 2.7.STABLE3 or 3.0.STABLE7






RE: [squid-users] POST + NTLM Authentica

2008-07-10 Thread bill . allison
Hi

I think this may be the same as I reported as bug 2176   
(http://www.squid-cache.org/bugs/show_bug.cgi?id=2176) against 2.6   
STABLE17. Is it known to be fixed in 2.7 or 3.0? Apologies for asking -
I haven't had time yet to go through changes logs for subsequent   
versions.

Bill A.

 -Original Message-
From: [EMAIL PROTECTED]
Sent: 10 July 2008 05:28
To: [EMAIL PROTECTED]
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] POST + NTLM Authentica


   

Luiz Felipe Ferreira wrote:
> Guys,
>
> We have a squid-2.5STABLE14 with NTLM authentication.

Please upgrade. 2.5 has been obsolete for several years.

We currently support primarily 3.0stable7, and 2.7stable3 for those who
can't upgrade to 3.0x.

Either of which is a good upgrade from 2.5.

Amos
 --
Please use Squid 2.7.STABLE3 or 3.0.STABLE7






Re: [squid-users] proxy server chained to another proxy server

2008-03-03 Thread Bill Shannon

Amos Jeffries wrote:

Your cache_peer config REQUIRES squid to lookup the IP of the peer on
startup. If that fails it ignores the peer for sanity, and dies.

Place the IP of the peer directly into your squid.conf.


You were right, but not for the obvious reason.

The peer is a multi-homed host.  To avoid the DNS lookups, I had copied
the four entries for the peer into my /etc/hosts file many years ago.
Unfortunately, either some of the addresses had changed, or I copied them
wrong.  Two of the addresses were correct, but two of them that I had
listed as 129.149.x.x were really 129.146.x.x.  I can't tell you how many
times I looked at the list and didn't see the difference.

The two correct addresses seemed to be good enough for the Netscape proxy
server to figure out, but it looks like squid was picking a random address
and getting one of the bad ones, and it stopped there.

Anyway, correcting the addresses caused everything to work!

In summary, here's the changes I made to the default config file:

# diff etc/squid.conf.default etc/squid.conf
589a590
> acl localnet src 192.168.1.0/255.255.255.0
633a635
> http_access allow localnet
939c941
< http_port 3128
---
> http_port 8180
1500a1503
> cache_peer webcache.sfbay.sun.com parent 8080 7 no-query
2974a2978
> cache_mgr shannon
3017a3022
> cache_effective_group nobody
3033a3039
> visible_hostname  nissan.home.sfbay.sun.com
4071a4078
> never_direct allow all
4184a4192,4193
> # copied from /etc/resolv.conf...
> dns_nameservers 129.146.11.51 129.145.155.226 129.147.62.34
4219a4229
> dns_testnames localhost

Thanks for your help!



[squid-users] proxy server chained to another proxy server

2008-03-03 Thread Bill Shannon

I'm trying to set up a proxy server on my home machine (nissan) that
forwards *all* requests over a VPN connection to a proxy server
(webcache.sfbay.sun.com, *not* running squid) on Sun's internal network
(SWAN).  Here's the changes to squid.conf that I've made:

589a590
> acl localnet src 192.168.1.0/255.255.255.0
633a635
> http_access allow localnet
939c941
< http_port 3128
---
> http_port 8180
1500a1503
> cache_peer webcache.sfbay.sun.com parent 8080 7 no-query
2974a2978
> cache_mgr shannon
3017a3022
> cache_effective_group nobody
3033a3039
> visible_hostname  nissan.home.sfbay.sun.com
4071a4078
> #never_direct allow all
4219a4227
> dns_testnames localhost


I've tried adding "default" to the cache_peer line, but it makes
no difference.

My /etc/resolv.conf is (these are all Sun-internal DNS servers):

domain sfbay.sun.com
search sun.com sfbay.sun.com
nameserver 129.146.11.51
nameserver 129.145.155.226
nameserver 129.147.62.34


I'm running into these problems:

1. My home machine uses Sun's internal sfbay DNS servers when connected via
VPN, but these DNS servers can't resolve internet host names, thus my
dns_testnames change.  But really, I don't understand why it needs to resolve
*any* hostnames if I set it up to proxy everything.  Is there no way to
disable DNS lookups entirely?

2. I think the never_direct entry above should cause it to proxy everything
to the parent proxy server, is that correct?   With that line enabled, all
my requests time out.  With that line disabled, it can at least proxy for
requests on SWAN.

3. Probably related to the above problems, with never_direct commented out,
requests to (e.g.) sunweb.central fail, but requests to sunweb.central.sun.com
work.  DNS lookups from my home machine *do* resolve "sunweb.central".

4. Even when things are more or less working, it's darn slow.  The first
request seems to take forever to respond, and subsequent requests aren't
much better.  It took minutes to display the sunweb.central page.

Any idea what I'm doing wrong?
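(For reference, the combination squid's documentation describes for forcing everything through a parent pairs never_direct with a peer marked `default`; a sketch, using the peer line from above:)

```
# Sketch: route all requests through the parent, using it as last resort
cache_peer webcache.sfbay.sun.com parent 8080 7 no-query default
never_direct allow all
```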

I'm using squid-2.6.STABLE16 on Solaris 10, which is part of Sun's
"Cool Stack" download.  http://cooltools.sunsource.net/coolstack/

Note that I am also running a version of the Netscape proxy server on my
home machine and it's able to handle this networking configuration just
fine.

Thanks for your help!


Re: [squid-users] Blocking webex

2008-01-09 Thread Bill Jacqmein
I think all the webex stuff is still done over https for the actual
sessions, so I would say blocking CONNECT or port 443 should achieve
the desired results. Regular http should still work as expected.

Something along the lines of:

acl webex dstdomain .webex.com
# Add a custom error message to let the end customer know why they
# aren't able to get out to webex, and how to get out if they
# absolutely need to.
# Place deny_webex in /etc/squid/errors/ or /usr/share/squid/errors/English
#deny_info ERR_DENY_WEBEX deny_webex
http_access deny CONNECT webex

On Jan 9, 2008 12:45 AM, Nadeem Semaan <[EMAIL PROTECTED]> wrote:
> Hello everyone,
>
> anyone know a way of blocking webex without blocking the actual site? I mean 
> I still want users to read about it (even on the official website), I just 
> don't want them to be able to use it without prior permission.
>
> Thanks and Happy New Year
>
>
>


[squid-users] new to squid

2007-04-09 Thread Bill Everhart

Hi all,

I'm brand new to squid. Up until now I've been using apache mod_proxy
with a very simple config:

ProxyRequests On

<Proxy *>
   Order deny,allow
   Deny from all
   Allow from 10
</Proxy>


Today I found out I can no longer use mod_proxy because YUM uses
byteranges and apache doesn't support that. I have read over the squid
config file (wow) and I have a couple of questions:

1. Does squid handle byterange requests?

2. squid seems over the top for what I need, I'm looking for something
that does not cache and just allows traffic from my 10.x network to
redhat network. Is there something else out there I should be looking
at?

3. Could anyone provide me with a config that doesn't cache anything
and just works as a proxy between clients on a 10.x network to rhn?

OK, that was more than a couple of questions. I appreciate any help you
guys can give me.


Re: [squid-users] Forbiden

2006-05-26 Thread Bill Jacqmein

Dominique,

   The outside is the Internet?

Bill

On 5/26/06, Dominique Bagnato <[EMAIL PROTECTED]> wrote:

Thank you,
But the forbidden users are from outside my network. They could come from
whatever domain and try to use the proxy from outside.



Bill Jacqmein wrote:

> Salute Dominique,
>
>   abcd.txt will be driven by url_regex, given the definition
> provided
>   lines like .gator.com should work
>   http://www.squid-cache.org/Doc/FAQ/FAQ.html#toc10.4 give
> the basic overview
>
> /usr/local/squid/etc/errors (or where the errors directory under
> squid/etc)
>  ERR_NO_abcd <- File name should contain html. A simple 
> as the example in the faq has.
>
> squid.conf additions
>  acl porn url_regex "/usr/local/squid/etc/abcd.txt"
>  deny_info ERR_NO_abcd porn
>
> Bill
>
> On 5/26/06, Dominique Bagnato <[EMAIL PROTECTED]> wrote:
>
>> Merci Bill,
>> But how do I trigger Squid to answer those forbidden requests?
>> How will Squid tell the difference between a legal request and a
>> forbidden one?
>>
>> In the example:
>>
>> acl porn url_regex "/usr/local/squid/etc/porno.txt"
>>
>>
>> What should I put in the file abcd in  /usr/local/squid/etc/abcd.txt ?
>>
>> Thank you.
>>
>>
>> Bill Jacqmein wrote:
>>
>> > Dominique,
>> >
>> >  http://www.squid-cache.org/Doc/FAQ/FAQ-10.html#ss10.24, is a
>> > FAQ section for customizing squid error messages.
>> >
>> > Good Luck,
>> >
>> >  Bill
>> >
>> > On 5/26/06, Dominique Bagnato <[EMAIL PROTECTED]> wrote:
>> >
>> >> Hi squid users,
>> >> I have squid running on Solaris 10 with apache2.
>> >> It's working perfectly, but is it possible for a not-allowed
>> >> proxy user to have a message saying: Forbidden to use this proxy.
>> >> Right now they don't have access at all, and they don't get any
>> >> message. They just see "This page cannot be displayed."
>> >>
>> >> I guess is just cosmetic but If it's easy to do thank you.
>> >>
>> >> --
>> >> Dominique Bagnato - Head of the Technology Department.
>> >> French International School - Bethesda, MD. USA
>> >> Tel:301 530 8260 Ext:279 - http://www.rochambeau.org
>> >>
>> >>
>> >>
>> >>
>> >
>> >
>> >
>> >
>> >
>>
>>
>> --
>> Dominique Bagnato - Head of the Technology Department.
>> French International School - Bethesda, MD. USA
>> Tel:301 530 8260 Ext:279 - http://www.rochambeau.org
>>
>>
>>
>>
>
>
>
>
>


--
Dominique Bagnato - Head of the Technology Department.
French International School - Bethesda, MD. USA
Tel:301 530 8260 Ext:279 - http://www.rochambeau.org






Re: [squid-users] Forbiden

2006-05-26 Thread Bill Jacqmein

Salute Dominique,

  abcd.txt will be driven by url_regex, given the definition provided
  lines like .gator.com should work
  http://www.squid-cache.org/Doc/FAQ/FAQ.html#toc10.4 give
the basic overview

/usr/local/squid/etc/errors (or where the errors directory under squid/etc)
 ERR_NO_abcd <- File name should contain html. A simple 
as the example in the faq has.

squid.conf additions
 acl porn url_regex "/usr/local/squid/etc/abcd.txt"
 deny_info ERR_NO_abcd porn

Bill

On 5/26/06, Dominique Bagnato <[EMAIL PROTECTED]> wrote:

Merci Bill,
But how do I trigger Squid to answer those forbidden requests?
How will Squid tell the difference between a legal request and a forbidden one?

In the example:

acl porn url_regex "/usr/local/squid/etc/porno.txt"


What should I put in the file abcd in  /usr/local/squid/etc/abcd.txt ?

Thank you.


Bill Jacqmein wrote:

> Dominique,
>
>  http://www.squid-cache.org/Doc/FAQ/FAQ-10.html#ss10.24, is a
> FAQ section for customizing squid error messages.
>
> Good Luck,
>
>  Bill
>
> On 5/26/06, Dominique Bagnato <[EMAIL PROTECTED]> wrote:
>
>> Hi squid users,
>> I have squid running on Solaris 10 with apache2.
>> It's working perfectly, but is it possible for a not-allowed proxy user
>> to get a message saying: Forbidden to use this proxy.
>> Right now they don't have access at all, and they don't get any
>> message. They just see "This page cannot be displayed."
>>
>> I guess is just cosmetic but If it's easy to do thank you.
>>
>> --
>> Dominique Bagnato - Head of the Technology Department.
>> French International School - Bethesda, MD. USA
>> Tel:301 530 8260 Ext:279 - http://www.rochambeau.org
>>
>>
>>
>>
>
>
>
>
>


--
Dominique Bagnato - Head of the Technology Department.
French International School - Bethesda, MD. USA
Tel:301 530 8260 Ext:279 - http://www.rochambeau.org






Re: [squid-users] HTTPS Web SITE TIMEOUT

2006-04-19 Thread Bill Jacqmein
Any firewall rules in place upstream from the squid proxy?

On 4/19/06, Rodrigo Barros <[EMAIL PROTECTED]> wrote:
> The web site is www.equifax.com.br , but the problem only happens after
> I authenticate in the site and try to access an specific url
> (https://novoequifaxpessoal.equifax.com.br/PessoalPlusWeb/login.jsp).
>
> The result is always the same:
>
> novoequifaxpessoal.equifax.com.br:443
>
> (60) Connection timed out
> Here's what is shown in the access.log file:
>
> 1145466458.378445 XX.XXX.XX.XX TCP_DENIED/407 1901 CONNECT
> novoequifaxpessoal.equifax.com.br:443 - NONE/- text/html
> 1145466459.524591 XX.XXX.XX.XX TCP_DENIED/407 2089 CONNECT
> novoequifaxpessoal.equifax.com.br:443 - NONE/- text/html
> 1145466465.724   6200 XX.XXX.XX.XX TCP_MISS/200 4441 CONNECT
> novoequifaxpessoal.equifax.com.br:443 XXX\barrosr DIRECT/200.142.202.182
> -
> 1145466465.770  2 XX.XXX.XX.XX TCP_DENIED/407 1901 CONNECT
> novoequifaxpessoal.equifax.com.br:443 - NONE/- text/html
> 1145466465.783  9 XX.XXX.XX.XX TCP_DENIED/407 2089 CONNECT
> novoequifaxpessoal.equifax.com.br:443 - NONE/- text/html
> 1145466465.999215 XX.XXX.XX.XX TCP_MISS/200 3576 CONNECT
> novoequifaxpessoal.equifax.com.br:443 XXX\barrosr DIRECT/200.142.202.182
> -
> 1145466466.078 19 XX.XXX.XX.XX TCP_DENIED/407 1901 CONNECT
> novoequifaxpessoal.equifax.com.br:443 - NONE/- text/html
> 1145466466.109 22 XX.XXX.XX.XX TCP_DENIED/407 2089 CONNECT
> novoequifaxpessoal.equifax.com.br:443 - NONE/- text/html
> 1145466466.316202 XX.XXX.XX.XX TCP_MISS/200 3587 CONNECT
> novoequifaxpessoal.equifax.com.br:443 XXX\barrosr DIRECT/200.142.202.182
> -
> 1145466466.323  2 XX.XXX.XX.XX TCP_DENIED/407 1901 CONNECT
> novoequifaxpessoal.equifax.com.br:443 - NONE/- text/html
> 1145466466.334  7 XX.XXX.XX.XX TCP_DENIED/407 2089 CONNECT
> novoequifaxpessoal.equifax.com.br:443 - NONE/- text/html
> 1145466526.011  59676 XX.XXX.XX.XX TCP_MISS/503 0 CONNECT
> novoequifaxpessoal.equifax.com.br:443 XXX\barrosr DIRECT/200.142.202.182
> -
>
> After the last TCP_MISS/503 I got the (60) timeout message.
>
> Here's what it's shown in cache.log:
>
> [2006/04/19 14:06:04, 3] libsmb/ntlmssp.c:ntlmssp_server_auth(606)
>   Got user=[barrosr] domain=[XXX] workstation=[XXX] len1=24 len2=24
> [2006/04/19 14:06:04, 3] libsmb/ntlmssp_sign.c:ntlmssp_sign_init(319)
>   NTLMSSP Sign/Seal - Initialising with flags:
> [2006/04/19 14:06:04, 3] libsmb/ntlmssp.c:debug_ntlmssp_flags(62)
>   Got NTLMSSP neg_flags=0x20088215
>
>
> Is there anything else I can provide?
>
> Thanks,
>
> Rodrigo
>
>
> -Original Message-
> From: Mark Elsen [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, April 19, 2006 1:32 AM
> To: Rodrigo Barros
> Cc: squid-users@squid-cache.org
> Subject: Re: [squid-users] HTTPS Web SITE TIMEOUT
>
> > Hi All,
> >
> > I've been searching google for a while and couldn't find a solution
> > for my problem, so if this has already been posted here sorry.
> >
> > I'm running Squid 2.5.10 with ntlm authentication, and I have this ssl
>
> > web site that does not connect. The only error message I get is (60)
> > Connection timed out .
> >
> > If I bypass the proxy and go straight to the web site, I can
> > successfully access the resource. Any ideas?
> >
>
>  - What's the URL of the site ?
>  - access.log entry when this is tried ?
>
>  - Anything further in cache.log ?
>
>  M.
>
>
>


Re: [squid-users] Multiple Destinations

2006-04-12 Thread Bill Jacqmein
Slightly off-topic, but can the same configuration be done with different
ports on the same IP?



On 4/12/06, Sketch <[EMAIL PROTECTED]> wrote:
> On 4/11/06, Henrik Nordstrom <[EMAIL PROTECTED]> wrote:
> > mån 2006-04-10 klockan 17:59 -0400 skrev Sketch:
> >
> > > Not sure what host header based vhosts are, but it's just a single site 
> > > on each.
>
> Gotcha.  I use IP Based hosts, so from my research thus far the following is 
> true:
>
> * set accel host to virtual, call a redirector which is a separate program, 
> and have it rewrite the URL.
>
> My question regarding this is: will we see higher performance invoking a small 
> perl script for every request, rather than setting up a completely separate 
> squid instance?
>
> Has anyone else treaded on this ground?  Your results?
>
> Thanks!
>


Re: [squid-users] squid wont let wget traffic thru

2006-03-22 Thread Bill Jacqmein
One more assumption: the browser reported as working was coming from
the same IP address that wget is being used from.

On 3/22/06, Bill Jacqmein <[EMAIL PROTECTED]> wrote:
> The client settings sound normal.
>
> Forbidden is normally an acl, or the lack of an acl for access.
> Maybe based on something similar to the following:
> http://gaugusch.at/squid.shtml would be my guess.
>
> pick an IE string or Mozilla string depending on which browser was
> working for you from
> http://www.zytrax.com/tech/web/browser_ids.htm#msie
>
> http://www.gnu.org/software/wget/manual/wget.html
> -U agent-string
> --user-agent=agent-string
> Identify as agent-string to the http server.
>
> The http protocol allows the clients to identify themselves using
> a User-Agent header field. This enables distinguishing the www
> software, usually for statistical purposes or for tracing of protocol
> violations. Wget normally identifies as Wget/version, version being
> the current version number of Wget.
>
> However, some sites have been known to impose the policy of
> tailoring the output according to the User-Agent-supplied information.
> While this is not such a bad idea in theory, it has been abused by
> servers denying information to clients other than (historically)
> Netscape or, more frequently, Microsoft Internet Explorer. This option
> allows you to change the User-Agent line issued by Wget. Use of this
> option is discouraged, unless you really know what you are doing.
>
> Specifying empty user agent with --user-agent="" instructs Wget
> not to send the User-Agent header in http requests.
>
> http://www.gnu.org/software/wget/manual/wget.html
>
> On 3/22/06, Joey S. Eisma <[EMAIL PROTECTED]> wrote:
> > hi!
> >
> > when i run wget it says:
> >
> > connecting to 192.168.0.2:8088... connected.
> > proxy request sent, awaiting response... 403 Forbidden
> > 10:13:53 ERROR 403: Forbidden.
> >
> > i cannot ask the admin yet to see the what the logs say.
> >
> > but as of yet. my client setting seems normal eh?
> >
> > thanks!
> >
> >
> > Henrik Nordstrom wrote:
> > > tor 2006-03-23 klockan 09:30 +0800 skrev Joey S. Eisma:
> > >
> > >
> > >> declare -x ftp_proxy="http://192.168.0.2:8088/";
> > >> declare -x http_proxy="http://192.168.0.2:8088";
> > >>
> > >> which is exactly my proxy settings.
> > >>
> > >
> > > Looks fine..
> > >
> > >
> > >> what's this supposed to mean? i already have the correct setting but
> > >> squid wont still let wget traffic thru?
> > >>
> > >
> > > Which server does wget say it's connecting to? The proxy, or the origin
> > > server?
> > >
> > > Is there anything in the Squid logs?
> > >
> > > Regards
> > > Henrik
> > >
> >
> > --
> >
> > Joey S. Eisma
> > Information Systems
> > P.IMES Corporation
> > Phase IV, CEPZA, Rosario
> > Cavite, Philippines
> > Tel : 63.46.4372401
> > Fax : 63.46.4372425
> > http://www.pimes.com.ph
> >
> >
>


Re: [squid-users] squid wont let wget traffic thru

2006-03-22 Thread Bill Jacqmein
The client settings sound normal.

Forbidden is normally an acl, or the lack of an acl for access.
Maybe based on something similar to the following:
http://gaugusch.at/squid.shtml would be my guess.

pick an IE string or Mozilla string depending on which browser was
working for you from
http://www.zytrax.com/tech/web/browser_ids.htm#msie

http://www.gnu.org/software/wget/manual/wget.html
-U agent-string
--user-agent=agent-string
Identify as agent-string to the http server.

The http protocol allows the clients to identify themselves using
a User-Agent header field. This enables distinguishing the www
software, usually for statistical purposes or for tracing of protocol
violations. Wget normally identifies as Wget/version, version being
the current version number of Wget.

However, some sites have been known to impose the policy of
tailoring the output according to the User-Agent-supplied information.
While this is not such a bad idea in theory, it has been abused by
servers denying information to clients other than (historically)
Netscape or, more frequently, Microsoft Internet Explorer. This option
allows you to change the User-Agent line issued by Wget. Use of this
option is discouraged, unless you really know what you are doing.

Specifying empty user agent with --user-agent="" instructs Wget
not to send the User-Agent header in http requests.

http://www.gnu.org/software/wget/manual/wget.html

On 3/22/06, Joey S. Eisma <[EMAIL PROTECTED]> wrote:
> hi!
>
> when i run wget it says:
>
> connecting to 192.168.0.2:8088... connected.
> proxy request sent, awaiting response... 403 Forbidden
> 10:13:53 ERROR 403: Forbidden.
>
> i cannot ask the admin yet to see the what the logs say.
>
> but as of yet. my client setting seems normal eh?
>
> thanks!
>
>
> Henrik Nordstrom wrote:
> > tor 2006-03-23 klockan 09:30 +0800 skrev Joey S. Eisma:
> >
> >
> >> declare -x ftp_proxy="http://192.168.0.2:8088/";
> >> declare -x http_proxy="http://192.168.0.2:8088";
> >>
> >> which is exactly my proxy settings.
> >>
> >
> > Looks fine..
> >
> >
> >> what's this supposed to mean? i already have the correct setting but
> >> squid wont still let wget traffic thru?
> >>
> >
> > Which server does wget say it's connecting to? The proxy, or the origin
> > server?
> >
> > Is there anything in the Squid logs?
> >
> > Regards
> > Henrik
> >
>
> --
>
> Joey S. Eisma
> Information Systems
> P.IMES Corporation
> Phase IV, CEPZA, Rosario
> Cavite, Philippines
> Tel : 63.46.4372401
> Fax : 63.46.4372425
> http://www.pimes.com.ph
>
>


Re: [squid-users] squid wont let wget traffic thru

2006-03-22 Thread Bill Jacqmein
export http_proxy="http://<proxy-host>:<port>/"
Both should pick it up from the environment.

On 3/22/06, Joey S. Eisma <[EMAIL PROTECTED]> wrote:
> hello!
>
> we have our proxy server running squid (obviously). just wondering why i
> cannot download anything using wget. but if i use a browser and put in
> the download address, download would simply go through.
>
> ive already asked the admin if there is any setting that would
> disallow/block such download. but he said none.
>
> i have configure wgetrc to use proxy but to no avail.
>
> is there anything with squid that would block such (wget) traffic? i
> cant also run apt-get, but i can download sources via browser.
>
>
> thanks!
>
>
>
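For completeness, the same proxy settings can live in a wgetrc file (a sketch only; the address 192.168.0.2:8088 is taken from another message in this thread and may not match your setup):

```
# ~/.wgetrc sketch: send HTTP and FTP through the proxy
use_proxy = on
http_proxy = http://192.168.0.2:8088/
ftp_proxy = http://192.168.0.2:8088/
```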


Re: [squid-users] squid against malware and worms

2006-03-19 Thread Bill Jacqmein
Dave,

Squidguard (http://www.squidguard.org/intro/) should be able
to accomplish what you are looking to do.

Regards,

 Bill


On 3/19/06, Dave <[EMAIL PROTECTED]> wrote:
> Hello,
> Can squid offer any protection against malware such as 180solution's
> zango and other spyware or worms such as the blackworm? I use these two
> because one of my machines got each of those through my antivirus and
> antispyware progs. What i was wondering is if squid could do scanning and if
> needed elimination as the items are coming in?
> Thanks.
> Dave.
>
>


Re: [squid-users] Question

2006-03-18 Thread Bill Jacqmein
Might be easier to set this up as a policy matter instead of a technology application.
Set up the AUP and have HR provide the muscle to get it acknowledged.

On 3/17/06, Richard J Palmer <[EMAIL PROTECTED]> wrote:
> I'm wondering if Squid can help in this situation...
>
> We have a setup where we want to set a range of PCs to use Squid to
> allow access to websites, etc.
>
> However, what we ideally want the users to do is, on their first web request
> to the internet, be greeted with a page where they have to "accept" an AUP
> (in reality all I want is a page to appear, and then once they have
> viewed it they can access any other sites they want, without future
> issues, at least for a set time if that is easier).
>
> Now I guess this could be done as some form of authentication, but I would
> be grateful for any thoughts here (or pointers if it has been discussed;
> I can't see anything obvious).
>
> I'm open to thoughts
> --
> Richard Palmer
>
>


Re: [squid-users] proxy.pac help

2006-03-18 Thread Bill Jacqmein
 Raj,

   The below should work, assuming isInNet is working properly.
   I would leave the if statement out and just return
 the PROXY statements if possible. Eliminate systems by simply not
pointing them at the proxy.pac.

 Regards,

   Bill

 // Assign Proxy based on IP Address of Client
   if (isInNet(myIpAddress(), "172.16.96.0", "255.255.240.0")) {
  return "PROXY proxy03.au.ap.abnamro.com:3128; PROXY proxy04.au.ap.abnamro.com:3128";
 }



> On 3/18/06, Raj <[EMAIL PROTECTED]> wrote:
> >  Hi All,
> >
> > I am running Squid  2.5.STABLE10. All the clients in our company use
> > proxy.pac file in the browser settings. I need some help with the
> > proxy.pac file. At the moment I have the following configuration:
> >
> > // Assign Proxy based on IP Address of Client
> >   if (isInNet(myIpAddress(), "172.16.96.0", "255.255.240.0")) return "PROXY 
> > prox
> > y03.au.ap.abnamro.com:3128; PROXY proxy04.au.ap.abnamro.com:3128";
> >
> > If the source IP address is from that IP range, it should go to
> > proxy03 first and if proxy03 is down it should go to proxy04. But that
> > is not happening. If proxy03 is down, it is not going to proxy04. Is
> > there any syntax error in the above config.
> >
> > What is the correct syntax in  proxy.pac file so that if proxy03 is
> > down it will go to proxy04?
> >
> > Thanks.
> >
>
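A complete PAC file wraps this logic in FindProxyForURL. Here is a self-contained sketch using the hostnames from this thread; isInNet() and myIpAddress() are normally supplied by the browser, so they are stubbed below (the stub implementations and the example client address are assumptions for illustration):

```javascript
// Stub of the browser-provided isInNet(): compare ip against net under mask.
function isInNet(ip, net, mask) {
  // Pack a dotted quad into an unsigned 32-bit integer.
  const toInt = (s) => s.split(".").reduce((a, o) => (a << 8) + Number(o), 0) >>> 0;
  return (toInt(ip) & toInt(mask)) === (toInt(net) & toInt(mask));
}

// Stub of the browser-provided myIpAddress(): an example client
// address inside 172.16.96.0/20.
function myIpAddress() {
  return "172.16.100.7";
}

function FindProxyForURL(url, host) {
  // Assign proxy based on IP address of client. Browsers try the
  // returned proxies left to right, failing over automatically.
  if (isInNet(myIpAddress(), "172.16.96.0", "255.255.240.0")) {
    return "PROXY proxy03.au.ap.abnamro.com:3128; PROXY proxy04.au.ap.abnamro.com:3128";
  }
  return "DIRECT";
}
```

The failover between proxy03 and proxy04 comes from listing both in one return value, separated by a semicolon; it is the browser, not the PAC file, that performs the failover.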


Re: [squid-users] Number Of Users

2006-03-05 Thread Bill Jacqmein
The number of connections is probably the more important from a
systems point of view.

Should be able to parse the log to generate how many times a
particular IP visits, to get a better guesstimate of the user connection
volume from the people-management view.

On 3/4/06, Kinkie <[EMAIL PROTECTED]> wrote:
> On Sat, 2006-03-04 at 05:13 +0530, Jacob, Stanley (GE Consumer Finance,
> consultant) wrote:
> > this will give you rough estimate
> > netstat -an | grep 3128 | wc -l
> >
> Maybe you mean mis-estimate...
> This is the number of TCP connections to squid, also available in
> cachemgr.
> Unfortunately it has no real connection to the number of people
> accessing squid: each person who is currently downloading some webpage
> might have multiple streams open (up to four per window in Internet
> Explorer, on Mozilla Firefox up to 4 per process by default, but the
> number can be raised quite a lot). On the other hand, someone who is
> reading a page she downloaded will have no active connections to squid
> (except those connections which are kept alive, and you see how things
> get messy fast...)
>
> In other words, guesstimating the number of users accessing a proxy is
> even messier than trying to estimate the number of users accessing a
> website.
>
> Kinkie
>


[squid-users] bypass squid for some sites

2005-09-12 Thread Bill Hughey
I am running squid on a LRP box. It is running fine as a transparent
proxy. I have a group of internal machines going through it, with a
range of IPs that bypass the proxy using ipchains. Is there a way to
bypass squid for the machines that normally go through squid to reach
certain sites? I have tried acls to allow the sites and always_direct,
but the sites are still much much slower going through the proxy. These
are the chain rules I am using to start:
~
# Redirect to Squid proxy server:
ipchains -A input -p tcp -s 0/0 -d 0/0 8080 -j DENY -i eth0 -l
# Bypass for 192.168.1.8/29 range
ipchains -A input -p tcp -s ! 192.168.1.8/29 -d 0/0 80 -j REDIRECT 8080
~ 
I want to keep the other machines going through the proxy, except let
192.168.1.3 bypass the proxy only to get to sportsonline.com. I’m not
too good with ipchains, can I make another rule to let only this bypass?
Thanks,
Bill
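One way that could look, as a sketch only: ipchains matches IP addresses rather than hostnames, so the destination below is a placeholder for sportsonline.com's resolved address, and the rule is inserted ahead of the REDIRECT rule:

```
# Let 192.168.1.3 reach this one destination directly on port 80
# (203.0.113.10 is a placeholder for the resolved address)
ipchains -I input 1 -p tcp -s 192.168.1.3 -d 203.0.113.10 80 -j ACCEPT
```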




[squid-users] yet another 104 connection reset problem

2005-07-05 Thread Bill Hunter
I have squid configured and (largely) working
correctly on linux using iptables for a transparent
proxy. Everything works ok for most sites except some
where they occasionally and inconsistently return 104
connection reset by peer errors (amazon and imdb do
this almost reliably when doing searches but, as
already mentioned, it's not 100% consistent and
sometimes works exactly as you'd expect it to).  I've
done some packet sniffing and the server is returning
an ACK followed by RST, which looks like squid is
returning the server's response correctly.  What
doesn't make sense is that when the proxy is switched
out (at the iptables level - no redirect to the
proxy), the request looks identical (except it's
missing the proxy's additional payload in the request)
but the server isn't returning the RST and everything
works exactly as it should.  Is the problem with the
outbound request being mangled at the proxy or beyond,
or the response being mishandled between the server
and the proxy, or is it something completely
different?  Any pointers on where to go next with this
would be greatly appreciated.

Bill.







Re: [squid-users] https, redirector

2005-06-16 Thread Bill Mills-Curran
On Thu, Jun 16, 2005 at 12:32:35AM +0200, Henrik Nordstrom wrote:
> On Wed, 25 May 2005, Bill Mills-Curran wrote:
> 
> >I want to add another "backend" web site that uses https.  I've tried
> >many (too many) different configs, but I can't find the right
> >combination to make it work.
> 

> To make HTTPS connections you need Squid-3.0 (under development) or the 
> SSL update patch to Squid-2.5.

> 
> Regards
> Henrik

Henrik,

Thanks for the response -- the open source community is amazing.

About the patch...  I want to make sure I get the right one.  I see
a couple of SSL mentions at:

http://www.squid-cache.org/Versions/v2/2.5/bugs/

squid-2.5.STABLE2-squid_ldap_auth.patch
http://www.squid-cache.org/Versions/v2/2.5/bugs/squid-2.5.STABLE2-squid_ldap_auth.patch


squid-2.5.STABLE2-redhat9-ssl.patch
http://www.squid-cache.org/Versions/v2/2.5/bugs/squid-2.5.STABLE2-redhat9-ssl.patch


squid-2.5.STABLE1-openssl097.patch
http://www.squid-cache.org/Versions/v2/2.5/bugs/squid-2.5.STABLE1-openssl097.patch

There are a couple of other less likely looking ones...  Which is the
one I want?

Thanks,
Bill


[squid-users] https, redirector

2005-05-25 Thread Bill Mills-Curran
I have a question about using squid as a
redirector/accelerator/reverse proxy.  I've found some references on
the topic, but I'm having a problem that's not addressed.

My basic config:

Platform: Redhat 9
Squid version: squid-2.5.STABLE1-3.9

1.  I'm using a redirector script so that I can access multiple
servers, yet present the look of a single web site.  (It's a
company internal site.)

2.  I've been using this successfully for quite some time, listening
on a specific IP, port 80.

3.  I'm using the same machine as an Apache web server, but with a
different IP.

Problem:

I want to add another "backend" web site that uses https.  I've tried
many (too many) different configs, but I can't find the right
combination to make it work.

1.  With just an entry like:

http_port 10.14.21.32:443

and a url like:

https://esd-bcurran.us.dg.com/eRoom/

the redirector script never even gets called.

Is there something "interesting" in the config I'm missing?

2.  If I use a URL like:

http://esd-bcurran.us.dg.com:443/eRoom/

then the redirector is called, but I never see the target web
page.

Perhaps I'm not rewriting the URL properly in the redirector
script?  Here's some debug output from my redirector.  (I'm
playing with the CONNECT method, but I really don't know what I'm
doing here.)

In: http://esd-bcurran.us.dg.com/eRoom/ 10.14.21.32/esd-bcurran.us.dg.com - 
GET

Out: https://esd-tls02.us.dg.com:443/ 10.14.36.202/esd-tls02.us.dg.com - 
CONNECT

Any help would be very much appreciated.

TIA,
Bill


Re: [squid-users] Performance tuning Squid box for ISP traffic

2004-12-09 Thread Bill Harris
We're using one FBSD 5.2.1 box with squid for our entire district ( 7 
schools),
roughly 1300 computers.

I moved from our FBSD 4.10 baseline to 5.2.1 due to the thread problems,
and it's worked great under 5.2.1.  It's been inserted as a transparent
gw by enabling forwarding for all lan segments, and then hijacking port
80 traffic up to squid on 3128.
Bill
On Dec 9, 2004, at 1:09 PM, Thomas-Martin Seck wrote:
* Chris Robertson ([EMAIL PROTECTED]):
I don't think reiser is available on FreeBSD, but I've been wrong 
before...
You're correct. There is no support for ReiserFS on FreeBSD.



Re: [squid-users] Squid and zlib

2004-11-04 Thread Bill Larson
On Nov 4, 2004, at 12:56 PM, Danny wrote:
My reason for asking is that I'm using MacOS X, which ships from Apple
with zlib 1.1.3.  Building version 1.2.2 is trivial, but is it worth
it?
If you are looking for a reason to upgrade then you may want to
consider security fixes.  I'm not sure, maybe Apple's version was
already patched for the vulnerabilities found over the past couple of
years, but it seems strange it would be 1.1.3.
My mistake, MacOS X ships with zlib 1.1.4, which isn't supposed to have 
the security problems.  But still, it isn't 1.2.

Bill Larson


[squid-users] Squid and zlib

2004-11-04 Thread Bill Larson
I'm curious about the zlib library that Squid uses.
On the zlib web site, the current version of zlib is 1.2.2.  There is a 
note that zlib 1.2.x is faster than (I'm assuming) version 1.1.x.

Has anyone noticed any increased speed with Squid using zlib 1.2 as 
compared to 1.1?  Squid compiles fine with either version.  I'm just 
wondering if this change makes a worthwhile difference in the operation 
of Squid.

My reason for asking is that I'm using MacOS X, which ships from Apple 
with zlib 1.1.3.  Building version 1.2.2 is trivial, but is it worth 
it?

Bill Larson


[squid-users] Strange traffice to port 25?

2004-03-23 Thread Bill Moran
I'm trying to finalize the setup of a new squid cache acting as a transparent
proxy.  As we finish up testing and tweak the config, I'm suddenly finding a
lot of traffic getting TCP_DENIED to (what looks like) port 25.
Unfortunately, I don't have full access to the entire network in question, so
I can't be 100% sure that there isn't a router somewhere else that's
misconfigured, but I wanted to check in to see if maybe there's some
squid-related explanation for this traffic.
Has anyone seen this before?  Am I right in my thinking that SMTP traffic
is (for some reason) trying to go through squid?  Is there some other
explanation than saying a router somewhere is redirecting incorrectly?
The funny thing about the whole situation is that mail is working fine in
spite of it.  You'd think people would be having trouble with email.
Here's an example log entry:
1080054163.917  3  TCP_DENIED/403 1346 CONNECT :25 - NONE/- text/html
The ":25" means that's the destination port, correct?
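Yes. In Squid's native access.log format the fields after the result code are size, method, and URL, and for a CONNECT request the URL field carries host:port (the client address and host were elided in the posted line). A quick sanity check in Python, using the log line exactly as posted:

```python
# Split one native-format Squid access.log line into its fields.
# Note: the client-address field was elided in the posted line, so it is
# absent here; in a real log it would appear between duration and result.
line = "1080054163.917  3  TCP_DENIED/403 1346 CONNECT :25 - NONE/- text/html"

fields = line.split()
ts, duration, result, size, method, url = fields[:6]

print(method, url)  # CONNECT :25

# For CONNECT requests the URL field is host:port, so everything after the
# last colon is the destination port.
host, _, port = url.rpartition(":")
print(port)  # 25
```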

--
Bill Moran
Potential Technologies
http://www.potentialtech.com


[squid-users] NAT/Multiple Domains/Multiple Web Servers

2004-01-17 Thread Bill Skulley
My apologies if this is answered in a FAQ somewhere... I have been searching 
for a couple of hours and must not be phrasing the question properly...

I am trying to set up two separate web servers (separate hardware, one 
Windows system and one Linux) with two different domain names (one for each 
server).  They are both behind a relatively dumb NAT server that does not 
understand http.  It does allow me to redirect ports (like TCP/80), so I can 
redirect http to *one* of the boxes (or alternatively to another box 
altogether, if needed).  I was thinking I could use squid to proxy for both 
systems and have it direct http traffic to the appropriate system, but I 
have been unable to make that work (probably because I have squid on the 
linux server...) so far.

Am I barking up the wrong tree?  Is there a way to have squid "direct" the 
http traffic to the right server?  Is there a better answer (don't tell me
"dump the Windows server" - I would if I could, but I can't)?
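Choosing the backend by requested domain is what Squid's accelerator (reverse-proxy) mode does. A sketch in Squid 2.6+ accelerator syntax (the domain names and backend IPs below are placeholders; Squid 2.5 needed a redirector program to achieve the same routing):

```
# Sketch only: pick the origin server by the requested domain.
http_port 80 accel vhost

cache_peer 192.168.1.10 parent 80 0 no-query originserver name=linuxsrv
cache_peer 192.168.1.11 parent 80 0 no-query originserver name=winsrv

acl site_a dstdomain www.example-a.com
acl site_b dstdomain www.example-b.com

cache_peer_access linuxsrv allow site_a
cache_peer_access winsrv   allow site_b
http_access allow site_a
http_access allow site_b
```

The NAT box then only needs its existing single port-80 redirect, pointed at the Squid host.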

Any help, pointers to docs/FAQs which may help, etc etc greatly appreciated.

Thanks
Bill



RE: [squid-users] setting up a blacklist

2003-09-19 Thread Bill McCormick

> A few problems here:
>
> 1) The first porn acl should be url_regex, not dstdom_regex
> (guessing from the
> file name) - dstdom_regex won't match anything after the hostname
> 2) The 3rd porn acl is missing the acl type (suggest url_regex or
> urlpath_regex)
> 3) Since you're referencing files, you might have to make those 3
> porn acls
> porn1, porn2, and porn3. (You definitely will if they're not the same acl
> type)

Ok ... I can see that.

> 4) The "http_access deny porn" is after you've already allowed your local
> network, so it won't have any effect
>

Oops :-)

> I don't see anything that would give the symptoms you report
> (excessive CPU
> utilization on startup and shutdown). Having too many patterns in

Check my top output ... it was a memory bog, not CPU.


> the files
> can cause high CPU utilization, but I would expect that to be fairly
> constant. Maybe someone else has more insight.
>

I'm now in the process of setting up squidGuard, based on the suggestion
from Gareth.

Thanks for your suggestions too.
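For reference, applying the points above, the corrected blacklist section would look something like this (a sketch; paths kept from the original config):

```
# Distinct acl names with explicit types, and the denies placed
# before the allow rules so they actually take effect.
acl porn1 url_regex "/usr/share/squid/blacklists/porn/urls"
acl porn2 dstdom_regex "/usr/share/squid/blacklists/porn/domains"
acl porn3 url_regex "/usr/share/squid/blacklists/porn/expressions"
deny_info ERR_NO_PORNO porn1
http_access deny porn1
http_access deny porn2
http_access deny porn3
http_access allow homenet
http_access allow localhost
http_access deny all
```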

Bill
---
Outgoing mail is certified Virus Free.
Checked by AVG anti-virus system (http://www.grisoft.com).
Version: 6.0.518 / Virus Database: 316 - Release Date: 9/11/2003



RE: [squid-users] setting up a blacklist

2003-09-19 Thread Bill McCormick
> 
> > Squid brings my dual Xeon Dell to its knees on startup and
> shutdown.
> 
> Can you post your squid.conf (without comments or blank lines)?
> 
> Adam
> 
> 

Here ya go ...

hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
refresh_pattern ^ftp:       1440    20%     10080
refresh_pattern ^gopher:    1440    0%      1440
refresh_pattern .           0       20%     4320
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
acl homenet src 192.168.212.0/24
http_access allow homenet
http_access allow localhost
http_access deny all
acl porn dstdom_regex "/usr/share/squid/blacklists/porn/urls"
acl porn dstdom_regex "/usr/share/squid/blacklists/porn/domains"
acl porn "/usr/share/squid/blacklists/porn/expressions"
deny_info ERR_NO_PORNO porn
http_access deny porn
http_reply_access allow all
icp_access allow all
visible_hostname billinux
coredump_dir /var/spool/squid



[squid-users] setting up a blacklist

2003-09-19 Thread Bill McCormick
Hello all,

I'm a squid newbie trying to use a blacklist and am having some problems:

Squid brings my dual Xeon Dell to its knees on startup and shutdown.
One thing that might help is if my system had more RAM (only 128 meg),
but that'll have to wait. I think I've got some squid.conf configuration
issues, but poring over the on-line docs hasn't revealed the solution.

Here are the relevant squid.conf items:

acl porn dstdom_regex "/usr/share/squid/blacklists/porn/urls"
acl porn dstdom_regex "/usr/share/squid/blacklists/porn/domains"
acl porn "/usr/share/squid/blacklists/porn/expressions"
deny_info ERR_NO_PORNO porn
http_access deny porn

and the files:
-rw-rw-r--1 bill bill   807446 Sep  3 19:16 domains
-rw-rw-r--1 bill bill  802 Jun 17  2002 expressions
-rw-rw-r--1 bill bill   746410 Sep  3 19:28 urls


Here's a top (well .. the top part of it):

 14:25:33  up  1:08,  3 users,  load average: 9.09, 6.38, 2.98
81 processes: 79 sleeping, 2 running, 0 zombie, 0 stopped
CPU0 states:  0.7% user   10.115% system  0.0% nice  0.0% iowait  89.133% idle
CPU1 states:  0.10% user  11.29% system   0.0% nice  0.0% iowait  88.216% idle
CPU2 states:  0.3% user   10.118% system  0.0% nice  0.0% iowait  89.134% idle
CPU3 states:  0.11% user  10.244% system  0.0% nice  0.0% iowait  89.0% idle
Mem:   125396k av,  123252k used,  2144k free,  0k shrd,  1316k buff
       107368k actv,  880k in_d,  1440k in_c
Swap:  257000k av,  256996k used,  4k free,  2576k cached

  PID USER PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMMAND
 1165 root  15   0  146M  99M   144 D13.3 81.4   0:17   3 squid



[squid-users] removing version number from squid

2003-08-21 Thread Bill Wood
Good Day!
Can anyone tell me how I can remove the version number from Squid? I do not
want to broadcast what I am running. The only thing that shows up now, when I
run Ethereal, is squid.
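Later Squid releases grew a directive for exactly this; in 2.5 the version string could only be changed by editing the source before building. A sketch, assuming a 2.6-or-newer squid.conf:

```
# Squid 2.6+ only: stop advertising the version in headers and error pages
httpd_suppress_version_string on
```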

Thank you!

Sincerely,

Bill Wood
[EMAIL PROTECTED]
757.729.7543 

 


[squid-users] squid 2.5 manual

2003-03-07 Thread Bill L
Is there a config manual for 2.5??
I would appreciate that very much.

Bill