Re: [squid-users] After upgrade from 5.7 to 5.9 the whitelists were not listed, we had to re-add them.

2024-07-11 Thread Alan Long
Our whitelists are separate files. The files were still in the /etc/squid 
directory, but the configs were gone.
We actually go old school and use webmin to manage the squid server and it 
showed an upgrade.
I am thinking the squid.conf got overwritten, which caused our issue.
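
For reference, separate whitelist files are typically referenced from
squid.conf with lines like these (file name illustrative), so an overwritten
squid.conf loses only the referencing lines, not the lists themselves:

acl whitelist dstdomain "/etc/squid/whitelist.txt"
http_access allow whitelist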

Alan Long | Senior Network Engineer I


 Web www.vesta.io

-----Original Message-----
From: squid-users  On Behalf Of Alex 
Rousskov
Sent: Thursday, July 11, 2024 9:52 AM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] After upgrade from 5.7 to 5.9 the whitelists were
not listed, we had to re-add them.

On 2024-07-11 10:23, Alan Long wrote:
> We did an upgrade from 5.7 to 5.9 and after the upgrade the whitelists
> we had were gone. We had to recreate them and set them up under the
> access control section.
>
> Anyone seen this? I have another one in queue for upgrade, and will
> get more info once we run the upgrade, but wanted to ask if this is a
> known issue.
>
> Also our delay pool had to be recreated as well.

What do you use to upgrade/install Squid? Do you build Squid from sources and 
then run "make install"? Or do you use some packaging software provided by a 
third party?

Do you put your access rules (a.k.a. whitelists) into squid.conf? Was your
squid.conf overwritten? With some default configuration?


FWIW, native Squid "make install" does not install or update squid.conf file 
AFAICT. It installs squid.cond.documented and squid.conf.default.
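
A quick sanity check for the next upgrade (a sketch, assuming the usual
/etc/squid layout):

# before upgrading, snapshot the live config
cp /etc/squid/squid.conf /etc/squid/squid.conf.pre-upgrade
# after upgrading, see whether anything replaced it
diff -u /etc/squid/squid.conf.pre-upgrade /etc/squid/squid.conf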


HTH,

Alex.



Re: [squid-users] After upgrade from 5.7 to 5.9 the whitelists were not listed, we had to re-add them.

2024-07-11 Thread Alan Long
We did an upgrade from 5.7 to 5.9 and after the upgrade the whitelists we had 
were gone. We had to recreate them and set them up under the access control 
section.
Anyone seen this? I have another one in queue for upgrade, and will get more 
info once we run the upgrade, but wanted to ask if this is a known issue.
Also our delay pool had to be recreated as well.


Alan Long | Senior Network Engineer I

Web www.vesta.io



Re: [squid-users] dynamic messages from acl helper program

2016-01-20 Thread Alan
On Wed, Jan 20, 2016 at 3:34 AM, Sreenath BH  wrote:
>
> We are using acl helper to authenticate users. Squid allows a template
> file that will be used to send a custom error message when the ACL
> sends an "ERR" string back to squid.
>
> In our case the acl helper contacts another web service for authentication.
> Is there a way to send the message we get from the web service (or any
> thing that changes from request to request) back to the client.
>
> Essentially what we are looking for is a way to change the error
> message at run time.

If I understood correctly, what you want is covered by the external
acl keyword "message", whose value is substituted for %o in the error
pages.

Please read the docs:
http://www.squid-cache.org/Doc/config/external_acl_type/
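
A rough sketch of the idea (helper path and acl names are illustrative):

# squid.conf
external_acl_type webauth ttl=60 %LOGIN /usr/local/bin/webauth-helper
acl authorized external webauth
http_access allow authorized
deny_info ERR_AUTH_FAILED authorized

On failure the helper replies with a line such as
"ERR message=Account expired, please contact the helpdesk", and the text
after message= appears wherever the custom ERR_AUTH_FAILED template uses %o.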


[squid-users] Squid 3.5.6 crash

2015-07-29 Thread Alan
This might be the same crash reported in another thread, or a new one.
I noticed it after upgrading from 3.3.11 to 3.5.6.

I'm using negotiate authentication (kerberos).

Auth settings:
auth_param negotiate program /usr/lib/squid/negotiate_kerberos_auth
auth_param negotiate children 80 startup=5 idle=5
auth_param negotiate keep_alive on

Logging setting:
debug_options ALL,1 29,3

Syslog:
Squid Parent: (squid-1) process 24158 exited due to signal 6 with status 0

cache.log:
2015/07/29 19:32:09.728 kid1| User.cc(305) addIp: user 'LinArlene@TUV.GROUP'
has been seen at a new IP address (10.144.138.48:58283)
2015/07/29 19:32:11.677 kid1| User.cc(185) cacheCleanup: Cleaning the user
cache now
2015/07/29 19:32:11.677 kid1| User.cc(186) cacheCleanup: Current time:
1438169531
2015/07/29 19:32:23 kid1| Set Current Directory to /var/cache/squid
2015/07/29 19:32:23 kid1| Starting Squid Cache version 3.5.6 for
i686-pc-linux-gnu...


Backtrace:

Core was generated by `(squid-1)'.
Program terminated with signal 6, Aborted.
#0  0xb7792424 in __kernel_vsyscall ()
(gdb) bt
#0  0xb7792424 in __kernel_vsyscall ()
#1  0xb7372bf1 in raise () from /lib/libc.so.6
#2  0xb73743ce in abort () from /lib/libc.so.6
#3  0xb736b798 in __assert_fail () from /lib/libc.so.6
#4  0x08460627 in hash_remove_link (hid=0x872f098, hl=0xe739580) at
hash.cc:240
#5  0x0831dc4a in Auth::User::cacheCleanup (datanotused=0x0) at User.cc:208
#6  0x08226795 in EventDialer::dial (this=0x14f535dc) at event.cc:41
#7  0x08226a2d in AsyncCallT<EventDialer>::fire (this=0x14f535c0) at
../src/base/AsyncCall.h:145
#8  0x0835fa28 in AsyncCall::make (this=0x14f535c0) at AsyncCall.cc:40
#9  0x08362f9f in AsyncCallQueue::fireNext (this=0x872f100) at
AsyncCallQueue.cc:56
#10 0x08362d32 in AsyncCallQueue::fire (this=0x872f100) at
AsyncCallQueue.cc:42
#11 0x08226f5f in EventLoop::dispatchCalls (this=0xbfaa66c8) at
EventLoop.cc:143
#12 0x08226dad in EventLoop::runOnce (this=0xbfaa66c8) at EventLoop.cc:108
#13 0x08226c96 in EventLoop::run (this=0xbfaa66c8) at EventLoop.cc:82
#14 0x0827f753 in SquidMain (argc=1, argv=0xbfaa6844) at main.cc:1511
#15 0x0827eb9f in SquidMainSafe (argc=1, argv=0xbfaa6844) at main.cc:1243
#16 0x0827eb84 in main (argc=1, argv=0xbfaa6844) at main.cc:1236


[squid-users] Squid 3.5.2 and Avast free anti-virus

2015-03-02 Thread Alan Palmer

Squid 3.5.2 intercept mode and Avast free antivirus 2015 on Windows 7
aren't playing well together.  Chrome returns a CA-invalid error; the
details reveal it's the Avast web/mail shield cert that is not trusted.
Everything works if I turn the web shield off, or, on a very strange note,
it works fine on a Windows XP (I know, old/bad, upgrade blah blah) machine
also running Avast 2015.  The Windows XP version does have a different cert
than the Windows 7 version, however.  Avast seems to be doing an ssl-bump of
its own between the client and the squid proxy.  Does anyone else have a
similar setup working, and if so, what's the magic incantation to make it
play nice?

 squid -v
Squid Cache: Version 3.5.2
Service Name: squid
configure options:  '--disable-strict-error-checking' 
'--disable-arch-native' '--enable-shared' 
'--datadir=/usr/local/share/squid' 
'--libexecdir=/usr/local/libexec/squid' '--disable-loadable-modules' 
'--enable-arp-acl' '--enable-auth' '--enable-delay-pools' 
'--enable-follow-x-forwarded-for' '--enable-forw-via-db' 
'--enable-http-violations' '--enable-icap-client' '--enable-ipv6' 
'--enable-referer-log' '--enable-removal-policies=lru heap' 
'--enable-ssl' '--with-openssl=/usr/local/ssl' '--enable-storeio=aufs 
ufs diskd' '--with-default-user=_squid' '--with-filedescriptors=8192' 
'--with-krb5-config=no' '--with-pidfile=/var/run/squid.pid' 
'--with-pthreads' '--with-swapdir=/var/squid/cache' 
'--disable-pf-transparent' '--enable-ipfw-transparent' 
'--enable-external-acl-helpers=LDAP_group SQL_session file_userip 
time_quota session  unix_group wbinfo_group LDAP_group 
eDirectory_userip' '--prefix=/usr/local' '--sysconfdir=/etc/squid' 
'--mandir=/usr/local/man' '--infodir=/usr/local/info' 
'--localstatedir=/var/squid' '--disable-silent-rules' 'CC=cc' 
'CFLAGS=-O2 -pipe' 'LDFLAGS=-L/usr/local/lib' 
'CPPFLAGS=-I/usr/local/include' 'CXX=c++' 'CXXFLAGS=-O2 -pipe' 
'--enable-ssl-crtd' '--enable-ltdl-convenience'


 uname -a
OpenBSD jarosz-fw 5.6 GENERIC.MP#299 i386

squid.conf
...
https_port [::1]:3127 intercept ssl-bump \

generate-host-certificates=on \
dynamic_cert_mem_cache_size=16MB \
cert=/etc/squid/ssl_cert/Test2.pem
#
#   SSL intercept configuration
#
sslcrtd_program /usr/local/libexec/squid/ssl_crtd -s /data/squid/ssl_db -M 16MB

sslcrtd_children 10
always_direct allow all
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all
sslproxy_cafile /etc/ssl/ca-bundle.crt

https_port [127.0.0.1]:3127 (same config lines as the IPv6 port)



Re: [squid-users] Squid 3.5.2 and Avast free anti-virus

2015-03-02 Thread Alan Palmer

 This is roughly what the inter-tubes usually look like:
 
 browser -> AV === router === NAT -> Squid === Internet
 
In this configuration, chrome gives the error

In:
Browser->AV->router->NAT->Inet
Or:
Browser->router->NAT(redirect)->squid->Inet

Things work just fine; Chrome will trust the AV or squid on their own, but not
the AV when both are inline. That's the oddness. Even odder is WinXP working
with both inline.

 You may be able to get the Avast CA cert to be trusted by Chrome.
 Otherwise a reasonable backup is to drop the WebShield on-machine and
 use ICAP in Squid to pass traffic to an AV scanner instead.

Maybe I'll go that route. Have to find an AV Scanner. 

Alan


[squid-users] Different squid-3.5.2 compile error on OpenBSD 5.6

2015-02-26 Thread Alan Palmer
While waiting with bated breath for --with-libressl support, I
installed OpenSSL 1.0.2 on OpenBSD 5.6 to get squid to compile, but got
this error in the final linking:

MemStore.o(.text+0x4fe0): In function
`MemStore::copyFromShm(StoreEntry&, int, Ipc::StoreMapAnchor const&)':
: undefined reference to `__sync_fetch_and_add_8'
MemStore.o(.text+0x5197): more undefined references to
`__sync_fetch_and_add_8' follow


Now this is a bit odd because, from the config.log:

configure:20105: inlining optimizations enabled: yes
configure:20124: checking for GNU atomic operations support
configure:20151: c++ -o conftest -O2 -pipe -I/usr/local/include -L/usr/local/lib conftest.cpp >&5
configure:20151: $? = 0
configure:20151: ./conftest
configure:20151: $? = 0
configure:20156: result: yes

Now, configure only checks for __sync_fetch_and_add, not
__sync_fetch_and_add_8.


The issue, I believe, is that libc++ hasn't been ported to OpenBSD, so it uses
libstdc++.


the compiler in question:
[apalmer]:/data/src/squid-3.5.2# g++ -v
Reading specs from /usr/lib/gcc-lib/i386-unknown-openbsd5.6/4.2.1/specs
Target: i386-unknown-openbsd5.6
Configured with: OpenBSD/i386 system compiler
Thread model: posix
gcc version 4.2.1 20070719

So, the workaround is manually editing autoconf.h to:
#define HAVE_ATOMIC_OPS 0

Is there a more graceful way to deal with this?

Alan


[squid-users] tlsv1 alert errors

2015-02-23 Thread Alan Palmer
So I got squid to intercept http and https traffic, but I get the
following error on any https access:

2015/02/23 12:50:15 kid1| clientNegotiateSSL: Error negotiating SSL
connection on FD 28: error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1
alert unknown ca (1/0)

This of course leads to all kinds of site untrusted/compromised errors 
in client browsers.


From looking in the archives this usually occurs because of a 
missing/outdated root CA file.

I have the following lines in squid.conf:

https_port 127.0.0.1:3127 intercept ssl-bump \
  generate-host-certificates=on \
  dynamic_cert_mem_cache_size=16MB \
  cert=/etc/squid/ssl_cert/MyCA.pem \
  cafile=/etc/ssl/cert.pem # tried without the cafile directive here as well



https_port [::1]:3127 intercept ssl-bump \
  generate-host-certificates=on \
  dynamic_cert_mem_cache_size=16MB \
  cert=/etc/squid/ssl_cert/MyCA.pem \
  cafile=/etc/ssl/cert.pem # tried without the cafile directive here as well

#
sslcrtd_program /usr/local/libexec/squid/ssl_crtd -s /data/squid/ssl_db -M 16MB

sslcrtd_children 10
always_direct allow all
sslproxy_cert_error allow all
ssl_bump server-first all
sslproxy_cafile /etc/ssl/cert.pem
#sslproxy_cert_error allow all
#sslproxy_flags DONT_VERIFY_PEER

The /etc/ssl/cert.pem file distributed with OpenBSD 5.6 has 44 root CAs
listed (see below).


Is there any way to get squid to tell me which CA is unknown? If so, I can
get that CA file and add it in.  Or is there a place to get a good
rootca.pem file? Or is something else wrong?


Thanks muchly for helping the newbie.

Alan

the openbsd5.6 cert.pem contains the following issuers/certificates:
# grep Issuer /etc/ssl/cert.pem
Issuer: C=US, O=GTE Corporation, OU=GTE CyberTrust Solutions, Inc., CN=GTE CyberTrust Global Root
Issuer: C=US, O=Equifax, OU=Equifax Secure Certificate Authority
Issuer: C=US, O=VeriSign, Inc., OU=Class 3 Public Primary Certification Authority - G2, OU=(c) 1998 VeriSign, Inc. - For authorized use only, OU=VeriSign Trust Network
Issuer: C=BE, O=GlobalSign nv-sa, OU=Root CA, CN=GlobalSign Root CA
Issuer: OU=GlobalSign Root CA - R2, O=GlobalSign, CN=GlobalSign
Issuer: OU=GlobalSign Root CA - R3, O=GlobalSign, CN=GlobalSign
Issuer: C=ZA, ST=Western Cape, L=Cape Town, O=Thawte Consulting cc, OU=Certification Services Division, CN=Thawte Premium Server CA/emailAddress=premium-ser...@thawte.com
Issuer: C=ZA, ST=Western Cape, L=Cape Town, O=Thawte Consulting cc, OU=Certification Services Division, CN=Thawte Server CA/emailAddress=server-certs@thawte.com
Issuer: C=US, O=VeriSign, Inc., OU=Class 3 Public Primary Certification Authority
Issuer: C=US, O=VeriSign, Inc., OU=VeriSign Trust Network, OU=(c) 2006 VeriSign, Inc. - For authorized use only, CN=VeriSign Class 3 Public Primary Certification Authority - G5
Issuer: C=US, O=VeriSign, Inc., OU=VeriSign Trust Network, OU=(c) 1999 VeriSign, Inc. - For authorized use only, CN=VeriSign Class 3 Public Primary Certification Authority - G3
Issuer: C=US, O=VeriSign, Inc., OU=VeriSign Trust Network, OU=(c) 2007 VeriSign, Inc. - For authorized use only, CN=VeriSign Class 3 Public Primary Certification Authority - G4
Issuer: C=US, O=VeriSign, Inc., OU=VeriSign Trust Network, OU=(c) 2008 VeriSign, Inc. - For authorized use only, CN=VeriSign Universal Root Certification Authority
Issuer: C=US, O=VeriSign, Inc., OU=VeriSign Trust Network, OU=(c) 1999 VeriSign, Inc. - For authorized use only, CN=VeriSign Class 4 Public Primary Certification Authority - G3
Issuer: C=IL, O=StartCom Ltd., OU=Secure Digital Certificate Signing, CN=StartCom Certification Authority
Issuer: L=ValiCert Validation Network, O=ValiCert, Inc., OU=ValiCert Class 2 Policy Validation Authority, CN=http://www.valicert.com//emailAddress=i...@valicert.com
Issuer: C=US, O=Entrust.net, OU=www.entrust.net/CPS incorp. by ref. (limits liab.), OU=(c) 1999 Entrust.net Limited, CN=Entrust.net Secure Server Certification Authority
Issuer: C=US, O=DigiCert Inc, OU=www.digicert.com, CN=DigiCert High Assurance EV Root CA
Issuer: C=US, O=DigiCert Inc, OU=www.digicert.com, CN=DigiCert Assured ID Root CA
Issuer: C=US, O=DigiCert Inc, OU=www.digicert.com, CN=DigiCert Global Root CA
Issuer: C=US, O=Equifax Secure Inc., CN=Equifax Secure Global eBusiness CA-1
Issuer: C=US, O=Equifax Secure Inc., CN=Equifax Secure eBusiness CA-1
Issuer: C=US, O=GeoTrust Inc., CN=GeoTrust Global CA
Issuer: C=US, O=GeoTrust Inc., CN=GeoTrust Global CA 2
Issuer: C=US, O=GeoTrust Inc., CN=GeoTrust Primary Certification Authority
Issuer: C=US, O=GeoTrust Inc., OU=(c) 2008 GeoTrust Inc. - For authorized use only, CN=GeoTrust Primary Certification Authority - G3
Issuer: C=US, O=GeoTrust Inc., CN=GeoTrust Universal CA
Issuer: C

Re: [squid-users] squid 3.5.2 compile error on openbsd5.6

2015-02-21 Thread Alan Palmer
Surely


Alan Palmer
DO NOT SPAM

 On Feb 21, 2015, at 20:02, Amos Jeffries squ...@treenet.co.nz wrote:
 
 On 22/02/2015 1:31 p.m., Eliezer Croitoru wrote:
 Hey Alan,
 
 I am unsure but is this SSL library headers files are compatible with
 OpenSSL or it would require some existing OpenSSL APIs calls changes?
 
 Eliezer
 
 On 21/02/2015 17:00, Alan Palmer wrote:
 [apalmer]:/data/src/squid-3.5.2# openssl version
 LibreSSL 2.0
 
 LibreSSL should be compatible, but they have been purging a lot of
 unsafe or obsolete API. Given that SSL/TLS compression is a security
 vulnerability, I can imagine quite easily that it walked into the firing
 line and got dropped now.
 
 This is probably a good sign that it's time for a separate
 --with-libressl option.
 
 Alan, would you be interested in helping to test that?
 
 Amos
 


Re: [squid-users] squid 3.5.2 compile error on openbsd5.6

2015-02-21 Thread Alan Palmer
[apalmer]:/data/src/squid-3.5.2# openssl version
LibreSSL 2.0

Alan Palmer
DO NOT SPAM

 On Feb 21, 2015, at 09:27, Amos Jeffries squ...@treenet.co.nz wrote:
 
 On 22/02/2015 2:03 a.m., Alan Palmer wrote:
 So I get the following error building squid 3.5.2 on openbsd 5.6-release
 
 libtool: compile:  c++ -DHAVE_CONFIG_H -I../.. -I../../include -I../../lib -I../../src -I../../include -I/usr/local/include -I/usr/local/include -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Wshadow -pipe -D_REENTRANT -I/usr/include -I/usr/local/include -I/usr/local/include/p11-kit-1 -I/usr/include -O2 -pipe -MT bio.lo -MD -MP -MF .deps/bio.Tpo -c bio.cc  -fPIC -DPIC -o .libs/bio.o
 bio.cc: In function 'bool adjustSSL(SSL*, Ssl::Bio::sslFeatures&)':
 bio.cc:328: error: 'struct ssl_ctx_st' has no member named 'comp_methods'
 bio.cc: In member function 'bool Ssl::Bio::sslFeatures::get(const SSL*)':
 bio.cc:672: error: 'struct ssl_session_st' has no member named 'compress_meth'
 bio.cc:673: error: 'struct ssl_session_st' has no member named 'compress_meth'
 *** Error 1 in src/ssl (Makefile:894 'bio.lo')
 *** Error 1 in src (Makefile:7126 'all-recursive')
 *** Error 1 in src (Makefile:5989 'all')
 *** Error 1 in /data/src/squid-3.5.2 (Makefile:592 'all-recursive')
 
 details:
 [apalmer]:/data/src/squid-3.5.2# uname -a
 OpenBSD jarosz-fw 5.6 GENERIC.MP#299 i386
 [apalmer]:/data/src/squid-3.5.2# gcc -v
 Reading specs from /usr/lib/gcc-lib/i386-unknown-openbsd5.6/4.2.1/specs
 Target: i386-unknown-openbsd5.6
 Configured with: OpenBSD/i386 system compiler
 Thread model: posix
 gcc version 4.2.1 20070719
 
 
 What version of OpenSSL are you building against?
 
 
 $ ./configure --disable-strict-error-checking --disable-arch-native
 --enable-shared --datadir=/usr/local/share/squid
 --libexecdir=/usr/local/libexec/squid --disable-loadable-modules
 --enable-arp-acl --enable-auth --enable-delay-pools
 --enable-follow-x-forwarded-for --enable-forw-via-db
 --enable-http-violations --enable-icap-client --enable-ipv6
 --enable-referer-log --enable-removal-policies=lru heap --enable-ssl
 --with-openssl --enable-storeio=aufs ufs diskd
 --with-default-user=_squid --with-filedescriptors=8192
 --with-krb5-config=no --with-pidfile=/var/run/squid.pid --with-pthreads
 --with-swapdir=/var/squid/cache --disable-pf-transparent
 --enable-ipfw-transparent --enable-external-acl-helpers=LDAP_group
 SQL_session file_userip time_quota session  unix_group wbinfo_group
 LDAP_group eDirectory_userip --prefix=/usr/local --sysconfdir=/etc/squid
 --mandir=/usr/local/man --infodir=/usr/local/info
 --localstatedir=/var/squid --disable-silent-rules CC=cc CFLAGS=-O2 -pipe
 LDFLAGS=-L/usr/local/lib CPPFLAGS=-I/usr/local/include CXX=c++
 CXXFLAGS=-O2 -pipe --enable-ssl-crtd --enable-ltdl-convenience
 
 I borrowed the configure options from "squid -v" of the squid 3.4 binary
 packages from OpenBSD.  Did I throw in a bad configure option?
 
 The configure options are missing quotation marks around these options'
 parameters:
 
 --enable-removal-policies=lru heap
 --enable-storeio=aufs ufs diskd
 --enable-external-acl-helpers=LDAP_group SQL_session file_userip
 time_quota session  unix_group wbinfo_group LDAP_group eDirectory_userip
 CFLAGS=-O2 -pipe
 CXXFLAGS=-O2 -pipe
 
 These options are useless. All they do is turn ON features which are
 default-ON anyway:
 
 --enable-auth
 --enable-http-violations
 --enable-ipv6
 --enable-icap-client
 
 
 These options are obsolete:
 
 --enable-arp-acl
 --enable-referer-log
 --enable-ssl
 --with-krb5-config=no
 
 Amos
 ___
 squid-users mailing list
 squid-users@lists.squid-cache.org
 http://lists.squid-cache.org/listinfo/squid-users


[squid-users] squid 3.5.2 compile error on openbsd5.6

2015-02-21 Thread Alan Palmer

So I get the following error building squid 3.5.2 on openbsd 5.6-release

libtool: compile:  c++ -DHAVE_CONFIG_H -I../.. -I../../include -I../../lib -I../../src -I../../include -I/usr/local/include -I/usr/local/include -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Wshadow -pipe -D_REENTRANT -I/usr/include -I/usr/local/include -I/usr/local/include/p11-kit-1 -I/usr/include -O2 -pipe -MT bio.lo -MD -MP -MF .deps/bio.Tpo -c bio.cc  -fPIC -DPIC -o .libs/bio.o
bio.cc: In function 'bool adjustSSL(SSL*, Ssl::Bio::sslFeatures&)':
bio.cc:328: error: 'struct ssl_ctx_st' has no member named 'comp_methods'
bio.cc: In member function 'bool Ssl::Bio::sslFeatures::get(const SSL*)':
bio.cc:672: error: 'struct ssl_session_st' has no member named 'compress_meth'
bio.cc:673: error: 'struct ssl_session_st' has no member named 'compress_meth'

*** Error 1 in src/ssl (Makefile:894 'bio.lo')
*** Error 1 in src (Makefile:7126 'all-recursive')
*** Error 1 in src (Makefile:5989 'all')
*** Error 1 in /data/src/squid-3.5.2 (Makefile:592 'all-recursive')

details:
[apalmer]:/data/src/squid-3.5.2# uname -a
OpenBSD jarosz-fw 5.6 GENERIC.MP#299 i386
[apalmer]:/data/src/squid-3.5.2# gcc -v
Reading specs from /usr/lib/gcc-lib/i386-unknown-openbsd5.6/4.2.1/specs
Target: i386-unknown-openbsd5.6
Configured with: OpenBSD/i386 system compiler
Thread model: posix
gcc version 4.2.1 20070719

 $ ./configure --disable-strict-error-checking --disable-arch-native 
--enable-shared --datadir=/usr/local/share/squid 
--libexecdir=/usr/local/libexec/squid --disable-loadable-modules 
--enable-arp-acl --enable-auth --enable-delay-pools 
--enable-follow-x-forwarded-for --enable-forw-via-db 
--enable-http-violations --enable-icap-client --enable-ipv6 
--enable-referer-log --enable-removal-policies=lru heap --enable-ssl 
--with-openssl --enable-storeio=aufs ufs diskd 
--with-default-user=_squid --with-filedescriptors=8192 
--with-krb5-config=no --with-pidfile=/var/run/squid.pid --with-pthreads 
--with-swapdir=/var/squid/cache --disable-pf-transparent 
--enable-ipfw-transparent --enable-external-acl-helpers=LDAP_group 
SQL_session file_userip time_quota session  unix_group wbinfo_group 
LDAP_group eDirectory_userip --prefix=/usr/local --sysconfdir=/etc/squid 
--mandir=/usr/local/man --infodir=/usr/local/info 
--localstatedir=/var/squid --disable-silent-rules CC=cc CFLAGS=-O2 -pipe 
LDFLAGS=-L/usr/local/lib CPPFLAGS=-I/usr/local/include CXX=c++ 
CXXFLAGS=-O2 -pipe --enable-ssl-crtd --enable-ltdl-convenience


I borrowed the configure options from "squid -v" of the squid 3.4 binary
packages from OpenBSD.  Did I throw in a bad configure option?


Alan



Re: [squid-users] ssl proxy error: No valid signing SSL certificate configured for https_port [::]:3127

2015-02-16 Thread Alan Palmer

Tried the two links provided, still no luck.

details:
squid -v
Squid Cache: Version 3.4.11
configure options:  '--disable-strict-error-checking' 
'--disable-arch-native' '--enable-shared' 
'--datadir=/usr/local/share/squid' 
'--libexecdir=/usr/local/libexec/squid' '--disable-loadable-modules' 
'--enable-arp-acl' '--enable-auth' '--enable-delay-pools' 
'--enable-follow-x-forwarded-for' '--enable-forw-via-db' 
'--enable-http-violations' '--enable-icap-client' '--enable-ipv6' 
'--enable-referer-log' '--enable-removal-policies=lru heap' 
'--enable-ssl' '--with-openssl' '--enable-storeio=aufs ufs diskd' 
'--with-default-user=_squid' '--with-filedescriptors=8192' 
'--with-krb5-config=no' '--with-pidfile=/var/run/squid.pid' 
'--with-pthreads' '--with-swapdir=/var/squid/cache' 
'--disable-pf-transparent' '--enable-ipfw-transparent' 
'--enable-external-acl-helpers=LDAP_group SQL_session file_userip 
time_quota session  unix_group wbinfo_group LDAP_group 
eDirectory_userip' '--prefix=/usr/local' '--sysconfdir=/etc/squid' 
'--mandir=/usr/local/man' '--infodir=/usr/local/info' 
'--localstatedir=/var/squid' '--disable-silent-rules' 'CC=cc' 
'CFLAGS=-O2 -pipe' 'LDFLAGS=-L/usr/local/lib' 
'CPPFLAGS=-I/usr/local/include' 'CXX=c++' 'CXXFLAGS=-O2 -pipe' 
'--enable-ssl-crtd' --enable-ltdl-convenience


tail -10 squid.conf
https_port 3127 intercept ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=16MB cert=/etc/squid/ssl_cert/server1.crt
sslcrtd_program /usr/local/libexec/squid/ssl_crtd -s /usr/local/squid/var/lib/ssl_db -M 16MB

sslcrtd_children 10
ssl_bump server-first all

cert generation:
openssl genrsa -des3 -passout pass:x -out server.pass.key 2048
openssl rsa -passin pass:x -in server.pass.key -out server.key
rm server.pass.key
openssl req -new -key server.key -out server.csr
openssl x509 -req -days 730 -in server.csr -signkey server.key -out server.crt

cat server.key server.crt > server1.crt

squid -z
FATAL: No valid signing SSL certificate configured for https_port 0.0.0.0:3127

Squid Cache (Version 3.4.11): Terminated abnormally.
CPU Usage: 0.080 seconds = 0.060 user + 0.020 sys
Maximum Resident Size: 6752 KB
Page faults with physical i/o: 0

cert generation ala 
http://wiki.squid-cache.org/EliezerCroitoru/Drafts/SSLBUMP (squid.conf 
changed to cert=/etc/squid/ssl_cert/myCA.pem)


openssl req -new -newkey rsa:1024 -days 365 -nodes -x509 -keyout 
myCA.pem -out myCA.pem


squid -z
FATAL: No valid signing SSL certificate configured for https_port [::]:3127
Squid Cache (Version 3.4.11): Terminated abnormally.
CPU Usage: 0.040 seconds = 0.010 user + 0.030 sys
Maximum Resident Size: 6288 KB
Page faults with physical i/o: 0

In Reply To:

Hey Alan,

What is the full output of squid -v?

I am unsure about the akadia tutorial.
Please take a look at:
http://wiki.squid-cache.org/EliezerCroitoru/Drafts/SSLBUMP

It contains some hints on how to create the certificate and contains a
snippet of squid configuration to make a basic ssl-bump work (the echo
command code might not be right).


I am pretty sure the certificate you have created is not the right type 
for the task.


Eliezer



On 2/15/2015 4:49 PM, Eliezer Croitoru wrote:

On 15/02/2015 23:36, Alan Palmer wrote:

I'm trying to get squid 3.4.11 on openbsd 5.6 to act as a transparent
ssl proxy.

I've rebuilt squid with --enable-ssl-crtd, generated my own self signed
cert (ala http://www.akadia.com/services/ssh_test_certificate.html) and
have the following config lines:


Hey Alan,

What is the full output of squid -v?

I am unsure about the akadia tutorial.
Please take a look at:
http://wiki.squid-cache.org/EliezerCroitoru/Drafts/SSLBUMP

It contains some hints on how to create the certificate and contains a
snippet of squid configuration to make a basic ssl-bump work (the echo
command code might not be right).


I am pretty sure the certificate you have created is not the right 
type for the task.


Eliezer



[squid-users] ssl proxy error: No valid signing SSL certificate configured for https_port [::]:3127

2015-02-15 Thread Alan Palmer
I'm trying to get squid 3.4.11 on openbsd 5.6 to act as a transparent 
ssl proxy.


I've rebuilt squid with --enable-ssl-crtd, generated my own self signed 
cert (ala http://www.akadia.com/services/ssh_test_certificate.html) and 
have the following config lines:


https_port 3127 transparent ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=4MB cert='/etc/squid/ssl_cert/my-cert.crt'

ssl_bump server-first all
always_direct allow all
sslproxy_flags DONT_VERIFY_PEER
sslcrtd_program /usr/local/libexec/squid/ssl_crtd -s 
/usr/local/squid/var/lib/ssl_db -M 4MB

sslcrtd_children 5

I've read all the notes, hints, and email list archives, to no avail.
No matter what I do I get:

FATAL: No valid signing SSL certificate configured for https_port [::]:3127

I get the same error with the 3.4.6.p1 package from openbsd.org (sans 
ssl_crtd config lines)


ideas? solutions? help?


[squid-users] why isn't Squid listening for tcpv4 connections?

2015-02-10 Thread Alan Boba
Can't reach any web pages when browsers are set to proxy with Squid.
Squid's running but doesn't appear to be listening for TCPv4 connections. This
is a default install on Ubuntu 14.04 server, sudo apt-get install squid3.
The firewall is not blocking access. Here's command output showing that, and
apparently showing Squid not listening for TCPv4 connections:

server:~$ sudo service ufw status
ufw stop/waiting
server:~$ sudo service squid3 status
squid3 start/running, process 4048
server:~$ sudo netstat -plunt | grep squid
tcp6   0  0 :::3128        :::*       LISTEN   4048/squid3
udp    0  0 0.0.0.0:47020  0.0.0.0:*           4048/squid3
udp6   0  0 :::44475       :::*                4048/squid3

Why is there no tcp line? What needs to change so there is one, and if there
is one, would the server be accepting connections?




[squid-users] Compiling Squid without log messages

2015-01-15 Thread Alan
Hello,

I want to have a minimal Squid installation,
so I compiled it disabling everything I don't need.
The resulting /usr/sbin/squid is 3.4 MB.

Since I don't need logging, I decided to remove that as well, but it's
not easy to do with sed since log messages sometimes span multiple
lines.

So I changed the definition for debugs() in Debug.h like this:


 /* Debug stream */
+#ifdef NODEBUG
+#define debugs(SECTION, LEVEL, CONTENT) ((void)0)
+#else
 #define debugs(SECTION, LEVEL, CONTENT) \
     do { \
         if ((Debug::level = (LEVEL)) <= Debug::Levels[SECTION]) { \
@@ -116,6 +119,7 @@
         Debug::finishDebug(); \
     } \
     } while (/*CONSTCOND*/ 0)
+#endif


And compiled with -DNODEBUG.
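
(Passing the define could look like: ./configure CXXFLAGS="-O2 -DNODEBUG" && make;
the exact invocation is an assumption here, not from the original build.)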
The resulting binary is 2.1 MB, about 60% of the original size!

But it doesn't work properly, and since there is no log, it's hard to debug.

A trace shows it accepts requests and forwards them to the HTTP server, but
after that it closes the connection to the HTTP client.

Any ideas?


[squid-users] Web/URL categorisation list

2014-07-25 Thread Alan Dawson
Hi, 

Apologies if this is not completely on topic, but it does concern squid use!

I'm working with a UK academic institution who are researching whether squid
can provide a usable web filtering solution.

Whilst they are pretty confident that squid will be able to perform at the
required level, they are wondering where they can purchase a subscription to
a maintained list of categorised web sites and URLs that could be used to
develop a bunch of allow/deny acls.

Does anyone on this list use squid in this way, and know of such a service?

Please reply off list, thanks


Alan Dawson
-- 
The introduction of a coordinate system to geometry is an act of violence




Re: [squid-users] Squid 3.4.3 is available

2014-03-17 Thread Alan
On Mon, Feb 3, 2014 at 4:18 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 * Fix external_acl_type async loop failures

 This issue shows up as extra failed authentication checks if the
 external ACL uses %LOGIN format code and credentials are not already
 known. This can have nasty side effects when combined with NTLM in older
 3.4 releases.

Will this be backported to the 3.3 series? How bad is it when using
Kerberos instead of NTLM?

I can't update to 3.4 because of the cpu usage problem described in
the thread titled "squid 3.4. uses 100% cpu with ntlm_auth".

Would it be too optimistic to think that this fix solves the cpu usage problem?


Re: [squid-users] A very low level question regarding performance of helpers.

2014-02-12 Thread Alan
On Thu, Feb 13, 2014 at 7:40 AM, Alex Rousskov
rouss...@measurement-factory.com wrote:
 On 02/09/2014 06:48 AM, Eliezer Croitoru wrote:
 I have helpers in all sort of languages and it seems to me that there is
 a limit that do exist on the interface between squid and the helpers by
 the nature of the code as code.

 For sequential helpers, the throughput is a function of response time:

   requests per second = number of helpers / mean response time

 where the response time is the time from the creation of the helper
 request (by Squid) to the end of processing of the helper response (by
 Squid). Thus, it includes overheads of all components participating in
 helper transaction (Squid, helper, OS, etc).
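
(As an illustrative number: 10 sequential helpers with a mean response time
of 20 ms would top out around 10 / 0.020 = 500 requests per second.)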

It would be interesting to compare the time it takes for Squid to
process the helper transaction vs the time it takes to the helper to
give back a result.  Some helpers can be complicated, so this can also
help us decide good values for the helper's ttl and cache limit.

And Amos is right about the code: it doesn't exit on EOF (I thought
Squid sent it a signal to end it).  But feof only works on streams, so
here is another version in case anybody (Eliezer?) is interested in
measuring this:

#include <unistd.h>

int main(int argc, char *argv[]) {
    char in[32768];
    char out[3] = "OK\n";
    /* exits once read() reports EOF (0) or an error (-1) */
    while (0 < read(0, in, 256)) {
        write(1, out, 3);
        fsync(1);
    }
    return 0;
}
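
To benchmark it, wiring along these lines could go into squid.conf (helper
path and acl names illustrative):

external_acl_type fake ttl=0 negative_ttl=0 %SRC /usr/local/bin/fake-helper
acl fake_check external fake
http_access allow fake_check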


Re: [squid-users] A very low level question regarding performance of helpers.

2014-02-11 Thread Alan
Hi Eliezer,

I know you have been testing fake helpers in a variety of languages.
How about this one in C?
Save it to helper-trivial.c and then compile it like this:
gcc -O3 helper-trivial.c -o helper-trivial
strip helper-trivial

#include <unistd.h>

int main(int argc, char *argv[]) {
    char in[256];
    char out[3] = "OK\n";
    /* read a request, always answer OK; note it never exits */
    while (1) {
        read(0, in, 256);
        write(1, out, 3);
        fsync(1);
    }
}

It can't get much faster than this.  If you don't see a significant
difference with interpreted languages, it probably means that the
bottleneck is not the helper, but Squid.

As you can see, I don't disable buffering for stdin / stdout, instead
I flush stdout explicitly.  This prevents reading one byte at a time,
as most (all?) helpers currently do.
I don't have any numbers to quantify the difference, but it has to be
faster than completely disabling buffering.

I posted a message to the mailing list about this on July 31st 2013.
It had a patch for the negotiate-kerberos-auth helper.

Regards,

Alan Mizrahi



On Sun, Feb 9, 2014 at 10:48 PM, Eliezer Croitoru elie...@ngtech.co.il wrote:
 I have tried for a very long time to understand what are the limits of the
 interface between squid and the helpers.
 I have tested it with perl, python, ruby, java and other codes.
 I am not yet sure if the STDIN\OUT is the relevant culprit in couple issues.
 I have helpers in all sort of languages and it seems to me that there is a
 limit that do exist on the interface between squid and the helpers by the
 nature of the code as code.

 I have reached a limit of about 50 requests per second on a very slow Intel
 Atom CPU per helper which is not slowing down the whole squid instance else
 then the code startup.

 If you do have couple statistics please feel free to share them.

 * (I have managed to build a I686 squid on CentOS 6.5)

 Eliezer


Re: AW: [squid-users] squid 3.4. uses 100% cpu with ntlm_auth

2014-01-26 Thread Alan
On Wed, Jan 8, 2014 at 1:05 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 7/01/2014 10:21 p.m., Rietzler, Markus (RZF, SG 324 /
 RIETZLER_SOFTWARE) wrote:
 thanxs,

 our assumption is, that it is related to helper management. with 3.4. there 
 is a new helper protocol, right?

 Right. That is the big user-visible bit in 3.4.

 But there are other background changes involving TCP connection
 management, authentication management, ACL behaviours and some things in
 3.3 series also potentially affecting NTLM.

 The feature changes just give us a direction to look in. We still have
 to diagnose each new bug in detail to be sure. There are others already
 using NTLM in older 3.3/3.4 versions without seeing this problem, for example.

 our environment worked with 3.2 without problems. now with the jump to 3.4. 
 it will not work anymore. so number of requests are somehow important but as 
 it worked in the past...

 if we go without ntlm_auth we can't see any high cpu load. so the first 
 thought ACL and eg. regex problems can be
 discarded. maybe there are some cross influences. but we think it lies 
 somewhere in helpers/auth.

 Did you get any better cache.log trace with the debug_options 29,9 84,9?

 Amos


I have the same problem here, I noticed it when I went from 3.3.8 to 3.4.2.
I assumed the problem was introduced with 3.4.x, so I went back to
3.3.11 and it is working fine.
I'm using aufs, negotiate_kerberos_auth and a custom external acl helper.

Unfortunately these are production servers, so I can't strace or
increase logging as suggested.


[squid-users] Immediate "This page can't be displayed" on HTTPS requests (UNCLASSIFIED)

2014-01-15 Thread Raczek, Alan J CTR USARMY SEC (US)
Classification: UNCLASSIFIED
Caveats: NONE

We are running Squid 2.7 on a Windows Server 2003 machine. With a few
different HTTPS URLs we are getting an instantaneous "This page can't be
displayed" in Internet Explorer, no matter what version of IE. In
Mozilla Firefox we get "the connection has timed out". It doesn't even think
about it. I tried a few registry hacks having to do with bad proxy timeouts
in Windows, but no luck. From the cache log I see that the proxy is
allowing the URL and it gets a hit (we whitelist everything). I am at a
loss as to what is happening. Other HTTPS URLs work, and many other websites
work through the proxy; they DO seem to be taking a little time to bring up
the site, which tells me Squid is doing the handshaking for sure.
 

WHAT'S UP??

 
PS our network setup:
LAN - proxy server - ASA 5510 - Internet


***
* Alan Raczek *

* Principal Network Engineer  *   
* CACI*
* Work: (443) 395-5133*
* Cell: (732) 245-4351*
* alan.racz...@us.army.mil*
***

 


Classification: UNCLASSIFIED
Caveats: NONE






RE: [squid-users] Immediate "This page can't be displayed" on HTTPS requests (UNCLASSIFIED)

2014-01-15 Thread Raczek, Alan J CTR USARMY SEC (US)
Classification: UNCLASSIFIED
Caveats: NONE

Sir,

No, that is not the same issue. Some HTTPS sites work, some don't. The
browser does not even try to think about a response, it just throws
the "This page can't be displayed" message in IE. And our proxy is the only
means for Internet access, so we can't go without.

..ar

-Original Message-
From: Jakob Curdes [mailto:j...@info-systems.de] 
Sent: Wednesday, January 15, 2014 10:46 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Immediate "This page can't be displayed" on HTTPS
requests (UNCLASSIFIED)


 We are running Squid 2.7 on a Windows Server 2003 machine. With a few 
 different HTTPS URL's we are getting an instantaneous This page can't 
 be displayed in Internet Explorer, doesn't matter what version of IE. 
 In Mozilla Firefox we get the connection has timed out . Doesn't 
 even think about it.
I currently observe a similar problem in Firefox and sometimes also in IE
WITHOUT using a proxy: for some sites, e.g. Google, I sometimes get "the
request is redirected in a way that it can never be terminated"
(translation from German). Then again, things work as expected. So, two
questions: a) is that the same problem you are seeing? and b) what's going
on?

JC

Classification: UNCLASSIFIED
Caveats: NONE






[squid-users] Is there a precedence in the allowed sites ACL ? (UNCLASSIFIED)

2014-01-15 Thread Raczek, Alan J CTR USARMY SEC (US)
Classification: UNCLASSIFIED
Caveats: NONE


Just curious whether there is an order that Squid follows to match a site in
the allowed-sites ACL. Top down??

...Alan



***
* Alan Raczek *

* Principal Network Engineer  *   
* CACI*
* Work: (443) 395-5133*
* Cell: (732) 245-4351*
* alan.racz...@us.army.mil*
***

 



Classification: UNCLASSIFIED
Caveats: NONE






[squid-users] HTTP 302 with RST in same packet

2013-12-19 Thread Alan
There's a website with a Flash application that tries to fetch an XML.

Based on the headers, the server seems to be Apache 2.2.15 on CentOS.

Without Squid, the content is displayed normally in IE.
With Squid, the client gets a 502 Bad Gateway (ERR_READ_ERROR).

I think the problem is that the server sends an HTTP 302 redirect with the
RST bit active in the same packet.
Am I missing something here?

The URL is:
http://vfmii.com/exc/aspquery?command=invoke&ipid=81014&pop=N&ripid=galileo&tpl=490&ids=59465


Re: [squid-users] logformat codes

2013-12-09 Thread Alan
On Thu, Dec 5, 2013 at 9:41 AM, Brendan Kearney bpk...@gmail.com wrote:
 i am wondering if there is a logformat code that can be used to log the
 URL (domain.tld or host.domain.tld) independent of the URI
 (/path/to/file.ext?parameter)?  i am using %ru, which gives me the URL
 and URI in one string.  %rp seems to be the URI, but i am not using that
 right now and can only go by what i am reading in the docs.

 i am looking to log the URL in a separate field from the URI so that in
 a database of the log entries, the URL can be indexed for better search
 and reporting performance.  is there an easy way to accomplish this?

Why don't you use a trigger? That is what I do.


Re: [squid-users] squid external_acl_type ip authentication using mysql db

2013-08-28 Thread Alan
On Wed, Aug 28, 2013 at 10:17 PM, markcodes codesm...@gmail.com wrote:
 I recently setup a working squid proxy thru our ubuntu 12 server for testing
 in a possible future setup for our IT system. My boss would like all staff
 using the internet to have their pc's connected to this squid proxy server.
 Since my boss want an IP ONLY authentication, and since I dont have any idea
 about perl or bash scripting, can anyone point me into the right direction?

I think you are confusing authentication with access control.

 I have a mysql server running and I can connect to it using the proxy
 server. The database is sample_db, table name is user_check, column for ip
 is user_ip. I have some poorly written perl script at hand which is I think
 very wrong.

I haven't tested this, but try:

acl allowed_src src /etc/squid/allowed_src.txt
http_access allow allowed_src
http_access deny all
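
The allowed_src.txt file would hold one source address or subnet per line,
for example (values illustrative):

192.168.1.0/24
10.0.0.15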

If you still want to go ahead with the poorly written perl thing, read this:
http://www.squid-cache.org/Doc/config/external_acl_type/

Alan


Re: [squid-users] External acl and authenticator mixup help

2013-08-18 Thread Alan
 I know about external_acl_type but this one assumes the user is logged
 in and it won't work since the user is prompted for password before
 being passed to may external acl program (post-auth).

Squid will only try to authenticate the user if your external_acl_type
format specifies %LOGIN.  In your case all you need is %SRC to provide
the source IP.
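
A minimal sketch of that shape (helper path and names illustrative):

external_acl_type ip_check ttl=60 %SRC /usr/local/bin/ip-check-helper
acl known_ip external ip_check
http_access allow known_ip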

Alan


Re: [squid-users] Auth basic

2013-08-18 Thread Alan
On Sat, Aug 17, 2013 at 3:02 AM, Oliveiros Peixoto (Netinho)
olivei...@gmail.com wrote:
 Hi Jeffries!

 I created my own script auth_basic. This script checks the username and
 password, if correct it inserts the username and date in the table sessions
 and returns OK login = username for squid.
 I also created one helper with ttl = 60. This helper takes the username and
 password and check the sessions table if the field ip is empty. If not empty
 he updates the field.
 The problem is that after 60 seconds pass, a request is sent to the
 helper with %LOGIN empty; as the helper does not identify the username it
 returns ERR to squid, which then opens the popup window again.

Please explain it better, because what you just described looks like a
bug to me.


Fwd: [squid-users] negotiate_kerberos_auth helpers stay busy

2013-07-30 Thread Alan
On Wed, Jul 31, 2013 at 6:59 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 31/07/2013 9:18 a.m., Klaus Walter wrote:
 strace shows me that these helpers are blocked during a read command:
 
 read(0, "r", 1) = 1
 read(0, "r", 1) = 1
 read(0, "7", 1) = 1
 read(0, "+", 1) = 1
 read(0, "a", 1) = 1
 read(0, "G", 1) = 1
 read(0, <unfinished ...>

 After this the process is never continued.


 That does not look blocked to me. The value arriving is changing, just
 eee ssslloowwwlllyyy, one byte per I/O cycle to be exact. Since
 kerberos credentials can be up to 32 KB large its easy to see why they are
 stuck in BUSY state for such long times.

I see the helper is reading one byte at a time.
The code for the kerberos explicitly disables buffering:

setbuf(stdout, NULL);
setbuf(stdin, NULL);

As a test, I removed setbuf(stdin, NULL), and it works well: the
helper no longer reads one byte at a time, but the whole thing at
once.

I tried to take it further and enable buffering for stdout as well,
but it doesn't work: the helpers don't write anything until the buffer
is full, so it just hangs.  I think glibc only flushes stdout on
newline when stdout is a terminal.  So I added explicit calls to
fflush() after each fprintf(stdout, ...), and now it works like a
charm.
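
A rough sketch of the resulting I/O pattern (illustrative only; for the real
change, decode the patch below):

#include <stdio.h>

int main(void) {
    char buf[32768];
    /* stdin stays buffered, so glibc reads in blocks
       instead of one byte per read() */
    while (fgets(buf, sizeof(buf), stdin)) {
        /* ... validate the negotiate token here ... */
        fprintf(stdout, "OK\n");
        fflush(stdout);  /* explicit flush instead of setbuf(stdout, NULL) */
    }
    return 0;
}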

I haven't measured the performance improvement with this, and I'm sure
it doesn't solve Klaus' issue, but I am very interested in improving
performance in every possible way, even if it's a small improvement.

If anybody is interested, please see the attached patch against Squid 3.3.8.

Best regards,

Alan



I'm resending this message because it was rejected due to the attachment.
If you are interested in the patch, please run:

echo 
H4sICJB6+FEAA25lZ290aWF0ZV9rZXJiZXJvc19hdXRoLWJ1ZmZlcmluZy5wYXRjaADtmG1z0zgQgD9ff8XSm7Z246R2nHcuMw3XAJ22KZT05m6g43Ft2TEksrHkQo/jv99KdlLHcQqlMBzDaTqNR7Z2tY92Vyu5gedBNYFqDOxtErhVs2bWOrUwDvy9CZlGJGZ7lPghD2xOLDvhk703JL4kcZjvn3fJD2qOk5d1DzEb1Wr1a07rl7pumFW9XTUNMIxew+wZ3VrXrNe7zY5pQkXv6vpGpVL5StMvqmv1TL1mtHRdb7Zb3Uzd/j5UjW5Xa0FF/LRhf38D5u0jvJsEUwLKjPmWw9+rD2/eueQy8RXFmdgx7KqwucX+gS3Wg+HZ2elZDx/Bs3GsKx5f0U0NjkN/HMyIomrw7Oz0ydngRAMvoQ4PQqrBZeLlpXtRHFDuKYy7YcI12Hz0tERkcXhlMdybJmySjc7LDTxQpqGv3vQUtJE41m6MORw9Pu3BOSMx0JCDAEsoDxzE7RaMWpY4b5mlOAUBum7WNQRfN03kPQfthJRxSCgLfEpcSIGySKzrOHxDKPRhdH58nBnBgr+JxSH3/phQn0/wKx0/2ajKjwhHIAt4YjjOoPAqoDdvUtnZYlcNeNAHJYw4CvUJxwfFjn1HA/x/hXTcIGa9yaaqqvDhxmz2LuDORI5b6ndsRmDH3elJBKYufc3U2wVfEyuD8JeGfoafnV5hcCATF2aEMdsna3wt7wTrHKxMVt6rbvUs0XD+2TrkGy4wD2hCct0fUxZGWzMMhGG0NMNcoYHr9FK/gH4fdl7pO3cFc0iv7GngQkzeJoTxe2BZlXQnKKXmLxnKeDwlVNirwm9Qv6el8HKLXazNOIU8891sfoBGU2cWCatR5fPnON+6DCjpGS1DRkmrXhIlyyP/OpMjYXsbll8cHS1E/mw0MyYLmu2UZtssp3njfH0wv5TXYh/+JuRWpH9lhjRKuMXFdlKbzveTS0zcrYblEid0iZVBqpjZXmZ20kTeLbroOlNi4pDgCpMrv44IbLkwGh+fgNQpKJXvnwqKUWFXUQr7o7o0ZWSUEKiUi5g3kdlDDyifzp7FIQ+dcKreCSGuQAjOlNg0iW5JZiS+ChxiCQJOENnTuRc26h0BrGGWARMjebrbp9v9ihuK9glXHGGJQmiY+BPcwGZhfP2ZSX/dipXJKxL7JLVbyBXoiZb5HKHS5zA2Uypa4YVwxpUSCGswydlMOTeK0T7nPLNfhzHKtnnCYBuevHhhvbB+Px2ND0fnQ2s0HB4MD+6EP60SswizRUGK8UrclRLxTvzH43mVKwl8Y+4+Y1ZM8EtGLIxyj8TK9iygC04abKOORcRlOaDRlIm10SwWc3PUiaicf26PRj1OdC1JaJBnmGatQl+afBd4ZYpttFoleEXbjSAtDm+fwgqJwWNxmMq7lwZigndKh+uW7mD46PxJD5Z0lO2Amd7Yc4y22bEIc+yISE5qUVX5qa1g2tqTG07j1oObRN3WJep254dOGqPBD5A0Oi2Jutv8P2msI3ufpNFtCLxNvfEtk8bmYNDvb36HrDFX/N9IG82GrnUQttwHBWyUugEzO6CicIX01iS1Nb3mETcoLy9wRh++7KZGzl4KFLcDJ4M/rcH5+On49Gg4so6Ho4uHSzc5xlL4CByeTzhLT0lpPZwefKpgYIfQpOYjLzVRbkLNZllmJO8DruhqeTVcFktH2R1lHq1MfRhjLImiME5Jf/IyD/Xg36+EumjW3i48HfwxtDArD54dwu7exr8zzzqnWBYAAA==
| perl -MCompress::Zlib -MMIME::Base64 -e 'print
Compress::Zlib::memGunzip(decode_base64(<>))' >
/tmp/negotiate_kerberos_auth-buffering.patch


Re: Fwd: [squid-users] store-id.pl doesnt cache youtube

2013-07-09 Thread Alan
On Tue, Jul 9, 2013 at 1:32 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 8/07/2013 6:34 p.m., Alan wrote:

 On Mon, Jul 8, 2013 at 6:25 AM, Eliezer Croitoru elie...@ngtech.co.il
 wrote:

 try this if you want to try something new.

 https://github.com/elico/squid-helpers/blob/master/squid_helpers/store-id.pl

 Eliezer

 Hi Eliezer,

 I read your script, and I have a suggestion.
 How about you store the matching urls in a separate file or database?
 That way the script would remain the same even if some website changes
 their url scheme.  When the squid admin wants to update the file/db he
 can just issue squid -k reconfigure and the script would reload the
 file/db.

 I just came up with this simple script, based on yours.  I haven't
 tested it though, since 3.4 head segfaults for me (btw, which revision
 are you using?).

 The invocation for my script should be, for example:
 store_id_program /etc/squid/storeid.pl /etc/squid/storeid.txt

 By the way, thanks for your contributions in this mailing list, they
 are very helpful.

 Best regards,

 Alan

 PS: Had to resend to the mailing list because it doesn't allow
 attachments.
 Here are the attachments:

 storeid.pl script:
 http://pastebin.ca/2420563

 storeid.txt file:
 http://pastebin.ca/2420565


 Nice.
 If you would like to nominate a GPLv2-compatible license for this .pl script
 I would be happy to add it to the Squid package.
 I have been rejecting the store-URL scripts earlier only because they
 hard-coded the patterns.

 Amos

Hi Amos,

Feel free to include it in Squid using the GPLv2 license, that would
be an honor.

But I just noticed it is missing a $|=1 to prevent Perl's output buffering.

And in order to make the DB less Perl-centric, I wrote an alternative
version that you can see here:

http://pastebin.ca/2422099

The other version had Perl code in the DB, which can be quite
powerful, but less language independent. Which one is better? I guess
it depends more or less on what kind of things we will see in the DB.

This new version uses a DB that looks like this:

http://pastebin.ca/2422105

Best regards,

Alan


Fwd: [squid-users] store-id.pl doesnt cache youtube

2013-07-08 Thread Alan
On Mon, Jul 8, 2013 at 6:25 AM, Eliezer Croitoru elie...@ngtech.co.il wrote:
 try this if you want to try something new.
 https://github.com/elico/squid-helpers/blob/master/squid_helpers/store-id.pl

 Eliezer

Hi Eliezer,

I read your script, and I have a suggestion.
How about you store the matching urls in a separate file or database?
That way the script would remain the same even if some website changes
their url scheme.  When the squid admin wants to update the file/db he
can just issue squid -k reconfigure and the script would reload the
file/db.

I just came up with this simple script, based on yours.  I haven't
tested it though, since 3.4 head segfaults for me (btw, which revision
are you using?).

The invocation for my script should be, for example:
store_id_program /etc/squid/storeid.pl /etc/squid/storeid.txt

By the way, thanks for your contributions in this mailing list, they
are very helpful.

Best regards,

Alan

PS: Had to resend to the mailing list because it doesn't allow attachments.
Here are the attachments:

storeid.pl script:
http://pastebin.ca/2420563

storeid.txt file:
http://pastebin.ca/2420565


Re: [squid-users] Regarding url_regex acl

2013-07-04 Thread Alan
This looks wrong:
http_access deny !allowdomain

Try:
http_access deny allowdomain

On Fri, Jul 5, 2013 at 5:16 AM, kannan rbk kannanrb...@gmail.com wrote:
 Dear Team,

 I am using squid proxy 3.1 in centos machine. I want to restrict
 client request from particular domain and web context.

  #
 # Recommended minimum configuration:
 #
 acl manager proto cache_object
 acl localhost src 127.0.0.1/32 ::1
 acl to_localhost dst 127.0.0.0/8 ::1

 acl urlwhitelist url_regex -i ^http(s)://([a-zA-Z]+).zmedia.com/blog/.*$
 acl allowdomain dstdomain .zmedia.com

 acl Safe_ports port 80 8080 8500 7272
 # Example rule allowing access from your local networks.
 # Adapt to list your (internal) IP networks from where browsing
 # should be allowed

 acl SSL_ports port 443
 acl Safe_ports port 80  # http
 acl Safe_ports port 21  # ftp
 acl Safe_ports port 443 # https
 acl Safe_ports port 70  # gopher
 acl Safe_ports port 210 # wais
 acl Safe_ports port 1025-65535  # unregistered ports
 acl Safe_ports port 280 # http-mgmt
 acl Safe_ports port 488 # gss-http
 acl Safe_ports port 591 # filemaker
 acl Safe_ports port 777 # multiling http
 acl SSL_ports  port 7272# multiling http
 acl CONNECT method CONNECT

 #
 # Recommended minimum Access Permission configuration:
 #
 # Only allow cachemgr access from localhost
 http_access allow manager localhost
 http_access deny manager
 http_access deny !allowdomain
 http_access allow urlwhitelist
 http_access allow CONNECT SSL_ports
 http_access deny CONNECT !SSL_ports
 # Deny requests to certain unsafe ports
 http_access deny !Safe_ports

 # Deny CONNECT to other than secure SSL ports
 http_access deny CONNECT !SSL_ports


 # We strongly recommend the following be uncommented to protect innocent
 # web applications running on the proxy server who think the only
 # one who can access services on localhost is a local user
 #http_access deny to_localhost

 #
 # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
 #

 # Example rule allowing access from your local networks.
 # Adapt localnet in the ACL section to list your (internal) IP networks
 # from where browsing should be allowed
 http_access allow localhost

 # And finally deny all other access to this proxy
 http_access deny all

 # Squid normally listens to port 3128
 http_port 3128

 # We recommend you to use at least the following line.
 hierarchy_stoplist cgi-bin ?

 # Uncomment and adjust the following to add a disk cache directory.
 #cache_dir ufs /var/spool/squid 100 16 256

 # Leave coredumps in the first cache dir
 coredump_dir /var/spool/squid
 append_domain .zmedia.com

 # Add any of your own refresh_pattern entries above these.
 refresh_pattern ^ftp:             1440  20%   10080
 refresh_pattern ^gopher:          1440  0%    1440
 refresh_pattern -i (/cgi-bin/|\?) 0     0%    0
 refresh_pattern .                 0     20%   4320

 What I am trying here:
 
 Restrict requests to (.zmedia.com) only, and that part is working fine. But I
 am able to access requests from any context (url_regex not working).
 
 URLs that should be allowed:
 
 https://accounts.zmedia.com/blog/aboutusers.html
 https://contacts.zmedia.com/blog/aboutusers.html
 https://shop.zmedia.com/blog/aboutusers.html
 ...
 
 URLs that should not be allowed:
 
 https://shop.zmedia.com/admin/aboutusers.html
 
 But right now I am able to access
 https://shop.zmedia.com/admin/aboutusers.html.


 Regards,

 Bharathikannan R


Re: [squid-users] low ttl in external_acl_type

2013-05-26 Thread Alan
I experienced something that looks consistent with what you described:

http://www.squid-cache.org/mail-archive/squid-users/201212/0065.html

Please tell us your squid version, config file, and provide some logs
when the error happens.

On Thu, May 23, 2013 at 8:35 PM, James Harper
james.har...@bendigoit.com.au wrote:
 I was testing an external_acl_type and set ttl=3 so my script would be called 
 often enough to see what was happening. This seemed to result in the acl 
 logging as denied fairly regularly, even though it definitely returns OK. 
 Putting ttl up to 30 seconds seems to make all the problems go away. 
 Obviously 3 seconds is a dumb ttl, even for testing, but is this expected?

 Thanks

 James


Re: [squid-users] Re: Kerberos with 2008/2003 DC

2013-05-08 Thread Alan
I didn't see your email with the error and solution.
Can you please post it to the list for future reference?

On Thu, May 9, 2013 at 5:20 AM, SPG spggps...@gmail.com wrote:
 Thanks Markus. I posted my error and the solution. Perhaps you didn't receive
 the mail.

 A lot of thanks.



 --
 View this message in context: 
 http://squid-web-proxy-cache.1019090.n4.nabble.com/Kerberos-with-2008-2003-DC-tp4659198p4659861.html
 Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Squid and kerberos

2013-04-25 Thread Alan
On Thu, Apr 25, 2013 at 10:50 PM, Jürgen Obermeyer sq...@oegym.de wrote:

 My main idea is to try kerberos first, and if it fails, use basic
 authentication. I don't understand why this works fine with Firefox, but not
 with IE.

Based on what you wrote, I think the authentication that is working
for you is not negotiate, but basic.
If I were you I would sniff some traffic to see what is really going on there.
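
For example, on the proxy box something like this (adjust the interface and
port to your setup) captures the authentication exchange for inspection in
Wireshark:

tcpdump -i eth0 -s 0 -w proxy-auth.pcap tcp port 3128

Then compare the Proxy-Authenticate and Proxy-Authorization headers that IE
and Firefox actually send.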


[squid-users] ACL based on auth type

2013-04-16 Thread Alan
Is there any way to construct an ACL that checks the authentication
mechanism used (eg: radius/kerberos)?

I want to allow radius authentication only for FTP users, since there
is no FTP client (that I know of) that works with Squid using
kerberos authentication; I want to enable radius only for FTP, not
for HTTP.

Or even better, does anybody know of a graphical FTP client that can
authenticate to Squid using kerberos?


[squid-users] Order of authentication schemes in Proxy-Authenticate

2013-04-09 Thread Alan
Is there any way to influence the order in which Squid sends the
Proxy-Authenticate headers to the client?  I already tried changing
the order in the config file to no avail.

Background:
I have a squid 3.3.3 proxy using both kerberos and radius.  A capture
shows it offers both Basic and Negotiate authentication schemes, in
separate headers and in that order.
IE seems to try Negotiate first and Basic later, disregarding the
order in which the headers appear.
Firefox seems to be trying in the same order as the headers appear (I
haven't confirmed that changing the order would fix this).

The RFC doesn't seem to mention anything about which one should be
tried first, so both approaches seem reasonable.  I haven't been able
to find any configuration option in Firefox to change the order
either.


Re: [squid-users] Variables and external_acl_types

2013-01-17 Thread Alan Schmidt
Hi Amos,

I'm actually writing it from scratch; I've just taken squid_ldap_group as an
invocation example. I think macros are what I'm missing.
I'll do some research based on your answers.

Thanks a lot for your time.

On Thu, Jan 17, 2013 at 3:39 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 17/01/2013 6:28 a.m., Alan Schmidt wrote:

 Hi list,

 Due to my employer's specific requirement, I'm writing an external_acl
 helper that allows us to query an LDAP server for valid dstdomains.
  It's actually working (not in the cleanest way :S), but I think I'm
  lacking basic squid knowledge to get it done properly.

 I can see from squid_ldap_group helper configuration

 external_acl_type ldap_group ttl=1 negative_ttl=1 %LOGIN
 /usr/sbin/squid_ldap_group -d -D $ADMIN_DN -w $PASS -b $SUFFIX -f
  (&(memberUid=%u)(cn=%g)) -h 127.0.0.1 -v 3

 that it uses %LOGIN format and %u/%g variables.

  I don't fully understand this. Is there any list of squid's available
  variables? Where do they come from (squid environment variables?)


 Formats are listed in the directive documentation:
   http://www.squid-cache.org/Doc/config/external_acl_type/

 The %u/%g variables are macros specific to the helper program. For
 squid_ldap_group they are listed here:
 http://www.squid-cache.org/Versions/v3/3.1/manuals/squid_ldap_group.html



  Using %DST I managed to get the info I need from squid (requested url
  and name of the acl) via standard input. The helper works this way, but
  it's quite awkward.

 The question: is there any variable (like %u or %g from the example
 above) i could use to pass the requested url and acl via helper
 parameter?
 This way i could generate a much more flexible code.


  No, the helper parameters are raw command line characters.
  You could copy-n-paste the squid.conf contents from /usr/sbin... onwards,
  including those %u/%g, into a command line shell, then manually type "user
  group1 group2 group3" or whatever user/group combos you want as stdin input
  to the helper.


  What I want to do would be something like:

 external_acl_type validsites ttl=1 negative_ttl=1 %DST
 /usr/sbin/squid_ldap_checksite -D $ADMIN_DN -w %PASS -b $SUFFIX -h


 %PASS is the password some HTTP client sent to Squid.

 -w in this helper is the LDAP password permitting the proxy access
 permission to do LDAP searches and find some users account details. You DO
 NOT want all your end-user accounts to be given LDAP search privileges.

 NP: In fact use of the lower-case -w option is not very good security
 practice. It is far better and very simple to use the upper case -W option
 which stores the password detail in a secure location and does not display
 it in cache.log and cachemgr config report.


 127.0.0.1 -f urlattribute=%something
 being %something a variable containing the requested url.


 You can replace %something with %u or %g.
  %u is the first token (expected to be %LOGIN) in the helper format string.
   %g is replaced by each of the additional tokens presented on the helper
  stdin. There can be multiple groups passed (as shown in my above example),
  and each is searched for individually until one matches, none are confirmed
  to match, or something fails.
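
  For example (hypothetical names): with

  external_acl_type ldap_group ttl=60 %LOGIN /usr/sbin/squid_ldap_group ...
  acl staff external ldap_group group1 group2

  a request by user jsmith reaches the helper on stdin as the single line
  "jsmith group1 group2", and the helper answers OK or ERR on stdout.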


 I'm sorry if this is not the place to ask, or if the info is available
 somewhere already. I've been searching on manuals, faqs, etc, without
 any luck.
 I'm relatively new to this kind of stuff (both lists and
  external_acl_types :S). If someone could point me at least at the right
  documentation I'll be very grateful.


 The helper you are testing with is written specifically as a helper to
 lookup a users group, with flexibility on where the account details may be
 stored in LDAP.

 FWIW: You may want to take the code for that helper and adjust it to suit
 your needs better than the existing one can. If you want to alter the
 behaviour of %g or add other filter macros you will need to do this.

 Amos



-- 
Alan


[squid-users] Variables and external_acl_types

2013-01-16 Thread Alan Schmidt
Hi list,

Due to my employer's specific requirement, I'm writing an external_acl
helper that allows us to query an LDAP server for valid dstdomains.
It's actually working (not in the cleanest way :S), but I think I'm
lacking basic squid knowledge to get it done properly.

I can see from squid_ldap_group helper configuration

external_acl_type ldap_group ttl=1 negative_ttl=1 %LOGIN
/usr/sbin/squid_ldap_group -d -D $ADMIN_DN -w $PASS -b $SUFFIX -f
(&(memberUid=%u)(cn=%g)) -h 127.0.0.1 -v 3

that it uses %LOGIN format and %u/%g variables.

I don't fully understand this. Is there any list of squid's available
variables? Where do they come from (squid environment variables?)
Using %DST I managed to get the info I need from squid (requested url
and name of the acl) via standard input. The helper works this way, but
it's quite awkward.

The question: is there any variable (like %u or %g from the example
above) i could use to pass the requested url and acl via helper
parameter?
This way i could generate a much more flexible code.

What I want to do would be something like:

external_acl_type validsites ttl=1 negative_ttl=1 %DST
/usr/sbin/squid_ldap_checksite -D $ADMIN_DN -w %PASS -b $SUFFIX -h
127.0.0.1 -f urlattribute=%something
being %something a variable containing the requested url.

I'm sorry if this is not the place to ask, or if the info is available
somewhere already. I've been searching on manuals, faqs, etc, without
any luck.
I'm relatively new to this kind of stuff (both lists and
external_acl_types :S). If someone could point me at least at the right
documentation I'll be very grateful.

Thanks in advance.

--
Alan


Re: [squid-users] RE: Memory leak in 3.2.5

2012-12-21 Thread Alan
On Fri, Dec 21, 2012 at 9:38 PM, Mike Mitchell mike.mitch...@sas.com wrote:
 The ident memory is properly freed only if the ident query succeeds.
 The ident memory is not freed if the ident query times out or if the
 client is no longer around for the result.

 I'm testing a patch and I should have something by the end of the day.

 Mike Mitchell

Hi Mike,

I don't want to dash your hopes, but my squid 3.2.5 installations
also have a memory leak, and I compiled them without ident support.
The leak I have noticed is not as big as the one you have, but it is
still enough to run out of memory every day on a 3 Gb server.

Alan


RE: [squid-users] 3.2.4 build problem

2012-12-13 Thread Alan Lehman
 On 13.12.2012 11:48, Alan Lehman wrote:
  On 8/12/2012 11:02 a.m., Alan Lehman wrote:
   I'm having trouble building 3.2.4 on RHEL5.
  
   I configured with options :
   --enable-ssl --enable-useragent-log --enable-referer-log
   --with-filedescriptors=8192 --enable-delay-pools
  
   make all says:
    ext_file_userip_acl.cc: In function 'int main(int, char**)':
    ext_file_userip_acl.cc:254: error: 'errno' was not declared in this
    scope
   make[3]: *** [ext_file_userip_acl.o] Error 1
  
   Any ideas?
 
  Use the daily update package please. This was fixed a few hours
 after
  release.
 
  When I have time to confirm how that got past testing and that there
  are no others hiding anywhere else there will be a new release.
 
  HTH
  Amos
 
  Still having trouble building. I am trying 3.2.5-2012121-r11739, and
  it gives me the following errors. I've tried removing all the
  configure options, but the results look about the same regardless.
 
  Thanks for any help.
 
 
 
   /home/alehman/squid-3.2.5-20121212-r11739/src/../src/ipc/AtomicWord.h:47:
   undefined reference to `__sync_fetch_and_add_4'

 What version of GCC/G++ are you using?

 Amos

4.1.2
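
FWIW, the __sync_* symbols are GCC's atomic builtins, and on 32-bit x86 GCC
only generates them for -march=i486 or later. If this is a 32-bit build, one
workaround (untested) is to rebuild with something like:

./configure CFLAGS="-march=i686" CXXFLAGS="-march=i686" [your other options]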





RE: [squid-users] 3.2.4 build problem

2012-12-12 Thread Alan Lehman
 On 8/12/2012 11:02 a.m., Alan Lehman wrote:
  I'm having trouble building 3.2.4 on RHEL5.
 
  I configured with options :
  --enable-ssl --enable-useragent-log --enable-referer-log
  --with-filedescriptors=8192 --enable-delay-pools
 
  make all says:
   ext_file_userip_acl.cc: In function 'int main(int, char**)':
   ext_file_userip_acl.cc:254: error: 'errno' was not declared in this
   scope
  make[3]: *** [ext_file_userip_acl.o] Error 1
 
  Any ideas?

 Use the daily update package please. This was fixed a few hours after
 release.

 When I have time to confirm how that got past testing and that there
 are no others hiding anywhere else there will be a new release.

 HTH
 Amos

Still having trouble building. I am trying 3.2.5-20121212-r11739, and it gives 
me the following errors. I've tried removing all the configure options, but the 
results look about the same regardless.

Thanks for any help.


/home/alehman/squid-3.2.5-20121212-r11739/src/../src/ipc/AtomicWord.h:47: undefined reference to `__sync_fetch_and_add_4'
libIpcIo.a(IpcIoFile.o): In function `Ipc::Atomic::WordT<int>::operator+=(int)':
/home/alehman/squid-3.2.5-20121212-r11739/src/../src/ipc/AtomicWord.h:31: undefined reference to `__sync_add_and_fetch_4'
libIpcIo.a(IpcIoFile.o): In function `Ipc::Atomic::WordT<int>::get() const':
/home/alehman/squid-3.2.5-20121212-r11739/src/../src/ipc/AtomicWord.h:47: undefined reference to `__sync_fetch_and_add_4'
/home/alehman/squid-3.2.5-20121212-r11739/src/../src/ipc/AtomicWord.h:47: undefined reference to `__sync_fetch_and_add_4'
/home/alehman/squid-3.2.5-20121212-r11739/src/../src/ipc/AtomicWord.h:47: undefined reference to `__sync_fetch_and_add_4'
/home/alehman/squid-3.2.5-20121212-r11739/src/../src/ipc/AtomicWord.h:47: undefined reference to `__sync_fetch_and_add_4'
/home/alehman/squid-3.2.5-20121212-r11739/src/../src/ipc/AtomicWord.h:47: undefined reference to `__sync_fetch_and_add_4'
libIpcIo.a(IpcIoFile.o):/home/alehman/squid-3.2.5-20121212-r11739/src/../src/ipc/AtomicWord.h:47: more undefined references to `__sync_fetch_and_add_4' follow
libIpcIo.a(IpcIoFile.o): In function `Ipc::Atomic::WordT<int>::operator+=(int)':
/home/alehman/squid-3.2.5-20121212-r11739/src/../src/ipc/AtomicWord.h:31: undefined reference to `__sync_add_and_fetch_4'
libIpcIo.a(IpcIoFile.o): In function `Ipc::Atomic::WordT<int>::get() const':
/home/alehman/squid-3.2.5-20121212-r11739/src/../src/ipc/AtomicWord.h:47: undefined reference to `__sync_fetch_and_add_4'
libIpcIo.a(IpcIoFile.o): In function `Ipc::Atomic::WordT<int>::swap_if(int, int)':
/home/alehman/squid-3.2.5-20121212-r11739/src/../src/ipc/AtomicWord.h:38: undefined reference to `__sync_bool_compare_and_swap_4'
libIpcIo.a(IpcIoFile.o): In function `Ipc::Atomic::WordT<int>::get() const':
/home/alehman/squid-3.2.5-20121212-r11739/src/../src/ipc/AtomicWord.h:47: undefined reference to `__sync_fetch_and_add_4'
/home/alehman/squid-3.2.5-20121212-r11739/src/../src/ipc/AtomicWord.h:47: undefined reference to `__sync_fetch_and_add_4'
libIpcIo.a(IpcIoFile.o): In function `Ipc::Atomic::WordT<int>::swap_if(int, int)':
/home/alehman/squid-3.2.5-20121212-r11739/src/../src/ipc/AtomicWord.h:38: undefined reference to `__sync_bool_compare_and_swap_4'
libIpcIo.a(IpcIoFile.o): In function `Ipc::Atomic::WordT<int>::get() const':
/home/alehman/squid-3.2.5-20121212-r11739/src/../src/ipc/AtomicWord.h:47: undefined reference to `__sync_fetch_and_add_4'
libIpcIo.a(IpcIoFile.o): In function `Ipc::Atomic::WordT<int>::swap_if(int, int)':
/home/alehman/squid-3.2.5-20121212-r11739/src/../src/ipc/AtomicWord.h:38: undefined reference to `__sync_bool_compare_and_swap_4'
libIpcIo.a(IpcIoFile.o): In function `Ipc::Atomic::WordT<int>::operator-=(int)':
/home/alehman/squid-3.2.5-20121212-r11739/src/../src/ipc/AtomicWord.h:32: undefined reference to `__sync_sub_and_fetch_4'
libIpcIo.a(IpcIoFile.o): In function `Ipc::Atomic::WordT<int>::get() const':
/home/alehman/squid-3.2.5-20121212-r11739/src/../src/ipc/AtomicWord.h:47: undefined reference to `__sync_fetch_and_add_4'
/home/alehman/squid-3.2.5-20121212-r11739/src/../src/ipc/AtomicWord.h:47: undefined reference to `__sync_fetch_and_add_4'
/home/alehman/squid-3.2.5-20121212-r11739/src/../src/ipc/AtomicWord.h:47: undefined reference to `__sync_fetch_and_add_4'
ipc/.libs/libipc.a(Queue.o): In function `Ipc::Atomic::WordT<int>::swap_if(int, int)':
/home/alehman/squid-3.2.5-20121212-r11739/src/ipc/../../src/ipc/AtomicWord.h:38: undefined reference to `__sync_bool_compare_and_swap_4'
ipc/.libs/libipc.a(Queue.o): In function `Ipc::Atomic::WordT<int>::swap_if(int, int)':
/home/alehman/squid-3.2.5-20121212-r11739/src/ipc/Queue.cc:256: undefined reference to `__sync_bool_compare_and_swap_4'
ipc/.libs/libipc.a(ReadWriteLock.o): In function `Ipc::Atomic::WordT<int>

Re: [squid-users] Timeout problem in 3.2.3

2012-12-12 Thread Alan
On Thu, Dec 13, 2012 at 11:26 AM, csn233 csn...@gmail.com wrote:

 On Thu, Dec 13, 2012 at 10:01 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 13.12.2012 14:45, csn233 wrote:

 I'm testing Squid 3.2.3 and seem to be encountering some sort of
 timeout problem on browsers. When left idle for long periods
 (hours), Firefox and IE9 seem to lose the connection, and can only
 be resolved by restarting the browser.

 The squid.conf is more or less the standard copy from the
 installation, and I have not added anything that looks like a
 timeout parameter. I've never seen this in previous versions I've
 used, 2.6 and 3.1.

 What might it be?


 * it might be the browser losing a connection object internally
 * it might be HTTP/1.1 handling issues in either browser or Squid
 * it might be TCP timeouts in the network stack of any device between
 browser and Squid
 * it might be NAT timeouts in any device between browser and Squid
 * it might be ARP table timeouts on any device between browser and Squid

 All of these are possibilities in 3.2, where HTTP/1.1 is used between Browser
 and Squid, but not in the older versions where HTTP/1.0 is used, as
 side-effects of HTTP 1.1 vs 1.0 keep-alive behaviour.

 Amos

 Thanks for replying.

 Since I'm testing on the same network/systems where I also have other
 Squid versions running (plus numerous other applications), it is
 probably reasonable to rule out TCP/NAT/ARP, since the problem is not
 seen elsewhere.

 If we narrow it down to the first 2 items (connection object or
 HTTP/1.1), what options are available to prevent the timeout?


Unless your other Squid versions are 3.2, they are not using
persistent connections; that means you can't rule out TCP or NAT
timeouts.

If I were you I would check this by writing a script that opens a
persistent HTTP/1.1 connection to some host on the Internet and see
what happens.
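
Something like this Perl sketch would do as a starting point (untested; the
proxy address and test URL are placeholders):

#!/usr/bin/perl
# Hold one proxy connection open across a long idle period and see
# whether it can still be reused afterwards.
use strict;
use IO::Socket::INET;

my $proxy = IO::Socket::INET->new(PeerAddr => '192.0.2.1', PeerPort => 3128)
    or die "connect: $!";

sub probe {
    print $proxy "GET http://www.example.com/ HTTP/1.1\r\n"
               . "Host: www.example.com\r\n\r\n";
    print scalar <$proxy>;   # status line of the reply
    # A real test should drain the whole body (Content-Length/chunked)
    # before reusing the connection; omitted here for brevity.
}

probe();
sleep 3 * 3600;              # idle like a browser left alone for hours
probe();                     # does the connection still answer?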

Alan

PS:
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?


RE: [squid-users] 3.2.4 build problem

2012-12-10 Thread Alan Lehman



-Original Message-
 From: Amos Jeffries [mailto:squ...@treenet.co.nz]
 Sent: Friday, December 07, 2012 5:36 PM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] 3.2.4 build problem

 On 8/12/2012 11:02 a.m., Alan Lehman wrote:
  I'm having trouble building 3.2.4 on RHEL5.
 
  I configured with options :
  --enable-ssl --enable-useragent-log --enable-referer-log
  --with-filedescriptors=8192 --enable-delay-pools
 
  make all says:
   ext_file_userip_acl.cc: In function 'int main(int, char**)':
   ext_file_userip_acl.cc:254: error: 'errno' was not declared in this
   scope
  make[3]: *** [ext_file_userip_acl.o] Error 1
 
  Any ideas?

 Use the daily update package please. This was fixed a few hours after
 release.

 When I have time to confirm how that got past testing and that there
 are no others hiding anywhere else there will be a new release.

 HTH
 Amos

Thanks. I grabbed the 1207 package but it's giving me a different error...

squid-3.2.4-20121207-r11738/src/ipc/../../src/ipc/AtomicWord.h:47: undefined 
reference to `__sync_fetch_and_add_4'
collect2: ld returned 1 exit status
libtool: link: rm -f .libs/squidS.o
make[3]: *** [squid] Error 1



[squid-users] 3.2.4 build problem

2012-12-07 Thread Alan Lehman
I'm having trouble building 3.2.4 on RHEL5.

I configured with options :
--enable-ssl --enable-useragent-log --enable-referer-log 
--with-filedescriptors=8192 --enable-delay-pools

make all says:
ext_file_userip_acl.cc: In function 'int main(int, char**)':
ext_file_userip_acl.cc:254: error: 'errno' was not declared in this scope
make[3]: *** [ext_file_userip_acl.o] Error 1

Any ideas?

Thanks,
Alan L





[squid-users] Managing user http bandwidth with squid cache

2012-10-16 Thread Alan Dawson
Hi, 

I'm at an educational establishment, with approx 2500 desktops.
We have had a restrictive web access policy implemented with
a web cache/filtering proxy appliance. User browsers are configured 
by a PAC file and web proxy auto discovery.  They authenticate against
the appliance with NTLM

We plan on changing that policy to something much less restrictive, 
but one of the technical issues we are expecting is an increase in 
web traffic usage.

Currently we use 60Mb/s at peak times ( with 97% of that being http 
traffic ), with our network connection being rated at 100Mb/s

We'd like to manage the amount of bandwidth that we use at our site 
connecting to high traffic sites like youtube/vimeo/bbc, so that there is 
always capacity for critical web applications, for example online 
examinations. 

The filter/proxy appliance does not have any options for limiting bandwidth

One of the ways we are investigating would be to use a squid web cache and 
delay_pools. We would try to identify high bandwidth/popular sites, and 
either use a PAC file so clients chose the bandwidth restricting cache, or 
use a cache chaining rule on the filter/proxy appliance, to pass requests 
for particular sites to the bandwidth restricting cache.

If users connect to the squid cache directly we would authenticate using
Kerberos/NTLM for windows clients and Basic for others.

Does this approach seem valid ?

What kind of resource would the squid cache require ( RAM/CPU ... )
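
For illustration, a single class 1 (aggregate) delay pool throttling just
those sites might look like this (numbers invented, untested):

acl bigsites dstdomain .youtube.com .vimeo.com .bbc.co.uk
delay_pools 1
delay_class 1 1
delay_access 1 allow bigsites
delay_access 1 deny all
# one shared bucket: 5 MB/s sustained, 10 MB burst
delay_parameters 1 5000000/10000000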

Regards,

Alan Dawson
-- 
The introduction of a coordinate system to geometry is an act of violence




[squid-users] Improvements for basic_radius_auth

2012-08-16 Thread Alan Mizrahi
I have submitted a bugzilla entry with these improvements to the radius auth 
helper of squid 3.2:

- changed the default timeout from 30 to 10 seconds.  The docs say its 10, and
I think 30 is too long.
- added the possibility to specify the timeout in the config file, before it
was only read from the command line.
- added a new parameter which allows access when no response is received from
the RADIUS server, it is disabled by default to have a backward-compatible
behavior.
- added error messages to be shown in error templates via %m.

Please review it and merge into squid if it looks fine:
http://bugs.squid-cache.org/show_bug.cgi?id=3609

Best regards,

Alan


[squid-users] username in logformat and error template

2012-08-02 Thread Alan
I am having trouble with the username tags in logformat and error templates.

The logformat documentation says that %ul is the username from
authentication, but in my experience, when there is an authentication
failure it is filled with whatever the user tried to authenticate
with instead of being empty. Is this intended?
On the other hand, %ue is only filled when the user has been
authenticated, which is what I want.

However it seems I am out of luck with custom errors substitutions,
there is no equivalent to the %ue from logformat.
%a is replaced by the username even when authentication failed, just
like %ul in logformat.
I need to distinguish quickly (from the error screen as opposed to the
logs) between authentication and authorization (acl) failures.

BTW I am aware of the %m and %o tags; I am still looking for the %ue
tag behavior.

Thanks.


[squid-users] Re: username in logformat and error template

2012-08-02 Thread Alan
 On the other hand, %ue is only filled when the user has been
 authenticated, which is what I want.

I spoke too fast: %ue is not being replaced at all.  The docs describe
it as "username from external acl helper", but there is no information
on how to set it in the external acl helper.
Is it set via the tag= return value? What is the syntax?
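
The closest thing I can find is the user= key-value pair in the helper
reply, e.g.:

OK user=jsmith

Presumably that is what %ue picks up, but I have not confirmed it.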


[squid-users] strange dns cache problem and squid monitoring

2012-07-17 Thread Alan
I have experienced a strange situation in which squid repeatedly
returns dns resolution error messages even though I can resolve the
same names at the command line, and even fetch the same pages via
wget.
Running squid -k reconfigure fixes the problem immediately.

My first theory was that there is some sporadic network outage, but
even once it's fixed squid retains the negative dns cache of failed
requests.
So I tried lowering the negative_dns_ttl setting down to a few
seconds, but it didn't help.

This problem is hard to diagnose because this server is in production:
I can't afford to crank up the logs and wait until it happens again,
and when it does happen I have to fix it quickly instead of spending
more time investigating the cause.

I wrote a perl script that tails the access log to detect when this
error condition happens and then automatically runs squid -k
reconfigure and informs me via email.
This script is useful to warn me about error conditions, even if they
are unrelated to Squid, and I would like to retain it even if I can
solve the root problem.

But at the same time I would like to improve it by using the Cache
Manager interface, or SNMP, and somehow get the number of failed
requests and total requests served in the last x minutes.  Is it
possible to get this information in this way instead of reading the
access log?
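
For example, polling the cache manager 5-minute report (a sketch; the exact
counter names may vary between versions):

squidclient mgr:5min | grep client_http

should show request and error rates averaged over the last 5 minutes, and
SNMP exposes similar counters.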

So I would appreciate any help with both issues:
1. strange dns cache problem
2. improving my monitoring via Cache Manager or SNMP

Thanks.


Re: [squid-users] Only Debug/Log TCP_DENIED/403

2012-07-11 Thread Alan
See this:
http://wiki.squid-cache.org/KnowledgeBase/DebugSections

So you could use, for example:
debug_options ALL,2 28,4 82,4

That would log at level 4 for access control and external acl related
things, and at level 2 for the rest.

On Wed, Jul 11, 2012 at 6:08 PM, ml ml mliebher...@googlemail.com wrote:
 Hello,

 okay, this actually works:

 acl DENY_ACCESS http_status 403
 access_log /tmp/squid.deny.log squid DENY_ACCESS


 Okay, but how can I now debug which acl rule caused TCP_DENIED/403?
 I only want to set my debug_options for the TCP_DENIED/403 requests...

 Thanks,
 Mario


Re: [squid-users] Only Debug/Log TCP_DENIED/403

2012-07-10 Thread Alan
It's written clearly in the manual:
access_log module:place [logformat name [acl acl ...]]

In your case:
acl DENY_ACCESS http_status 403
access_log squid DENY_ACCESS

Here "squid" refers to a predefined logformat; see
http://www.squid-cache.org/Doc/config/logformat/


On Tue, Jul 10, 2012 at 10:23 PM, ml ml mliebher...@googlemail.com wrote:
 Hello Amos,

 thanks. I am using Squid Version 3.1.19 and those rules:

 acl DENY_ACCESS http_status 403
 access_log daemon:/var/log/squid/DENY.log squid DENY_ACCESS

 However i get:

 2012/07/10 15:18:13| aclParseAclList: ACL name 'DENY_ACCESS' not found.
 FATAL: Bungled squid.conf line 695: access_log
 daemon:/var/log/squid/DENY.log squid DENY_ACCESS
 Squid Cache (Version 3.1.19): Terminated abnormally.

 What am i doing wrong here?

 Thanks,
 Mario

 On Tue, Jul 10, 2012 at 2:45 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 11/07/2012 12:25 a.m., ml ml wrote:

 Hello List,

  can I only log/debug TCP_DENIED/403 hits? I have a LOT of traffic and
  I am only interested in TCP_DENIED

 Thanks,
 Mario


 http://www.squid-cache.org/Doc/config/access_log/

 Takes ACLs such as the http_status ACL.

 Amos


Re: [squid-users] external_acl_type helper problems

2012-07-10 Thread Alan
I suggest you try the squid 2.7 or 3.2 series.
I had some strange problems with the 3.1 series; I think external acls
were one of them.
When I tested 2.7 and 3.2, all the strange problems were gone. I know
2.7 sounds old, but it is far faster than the rest.

Regarding your script, keep in mind that Squid is able to cache
results from external acls, so even if the script is not so efficient,
you can take advantage of that caching. Read the docs on external
acls.
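
For example (values invented), raising the TTLs lets the result cache absorb
most of the load:

external_acl_type ldap_surfer ttl=3600 negative_ttl=60 cache=4096 children=20 %DST %SRC /etc/squid/ldap_default_allow.pl

so repeated lookups for the same %DST/%SRC pair are answered from the cache
instead of hitting the helper.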
But anyway, if you post your script someone might be able to help with
that as well.

On Mon, Jul 9, 2012 at 6:32 PM, ml ml mliebher...@googlemail.com wrote:
 Hello List,

 I am using a perl script for ACL like this:

 external_acl_type ldap_surfer negative_ttl=60  ttl=60 children=200
 %DST %SRC /etc/squid/ldap_default_allow.pl
 acl ldap_users external ldap_surfer
 http_access allow ldap_users

 However, after a squid upgrade from squid-3.1.0.14 to squid-3.1.19 I
 am getting DENIED requests. When I turn on ACL debugging I see this:
 ACL::ChecklistMatches: result for 'ldap_users' is -1

 My /etc/squid/ldap_default_allow.pl perl script might not be the best
 (I am doing some ldap and mysql stuff in there), so I modified it to
 a very simple script:


 #!/usr/bin/perl
 use strict;

 $|=1;
 while (defined(my $INPUT = <STDIN>)) {
 print "OK\n";
 next;
 }


 I have about 300 clients and the traffic is quite high. I have the
 feeling that squid or the script is not very efficient.
 Can I use concurrency=X here with this perl script? Am I using the
 syntax right? Or am I doing anything wrong?

 Thanks,
 Mario
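
For the concurrency question: with concurrency=N on the external_acl_type
line, Squid prefixes each request line with a channel ID that the helper
must echo back in its reply. A minimal sketch (untested):

#!/usr/bin/perl
use strict;
$| = 1;
while (my $line = <STDIN>) {
    chomp $line;
    # first token is the channel ID, the rest is the ACL data (%DST %SRC here)
    my ($channel, $data) = split(/\s+/, $line, 2);
    # ... do the real lookup on $data here ...
    print "$channel OK\n";
}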


Re: [squid-users] Squid 3.2.0.14 using 100% cpu and not responding

2012-07-09 Thread Alan
On Mon, Jul 9, 2012 at 12:24 PM, Will Roberts ironwil...@gmail.com wrote:
 On 06/17/2012 08:08 PM, Will Roberts wrote:

 strace is producing no output. Infinite loop without syscalls?

 I also tried attaching with gdb, but even as root I'm getting ptrace:
 Operation not permitted. Any ideas on what that means? Or other ways to
 get some information for you guys?


 I'm still having this issue. Any ideas why strace/gdb won't work for me in
 trying to debug this?

A quick search suggests that you are using some kernel security crap; I
don't know much about it, but try this:
echo 0 > /proc/sys/kernel/yama/ptrace_scope
Or simply start squid from gdb instead of attaching to the existing process.

Alan


[squid-users] Choppy audio stream with squid 3.2.0.17, but no problem with 3.1.19

2012-06-08 Thread Alan
When playing the audio stream from http://www.frequence3.fr/ via squid
3.2.0.17, playback is very choppy.
With 3.1.19 there is no problem at all.

This does a GET of an infinite audio file, not a CONNECT.
If you want to test this please go to the website and click on the
first menu item: ECOUTER.

This test was done with a very basic config file, no strange things:

http_reply_access allow all
http_port x.x.x.x:8080
cache_mem 256 MB
maximum_object_size_in_memory 2 MB
cache_dir aufs /var/cache/squid 512 16 256
max_open_disk_fds 64
maximum_object_size 64 MB
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern (cgi-bin|\?)    0       0%      0
refresh_pattern .               0       20%     4320
quick_abort_pct 80
negative_ttl 1 minutes
negative_dns_ttl 1 minutes
positive_dns_ttl 1 hours
via on
forwarded_for delete
httpd_suppress_version_string on
http_access allow all

I can't find any clue in the log files.
Any ideas?

Alan


Re: [squid-users] Custom error message woes

2012-06-06 Thread Alan
On Tue, May 29, 2012 at 7:39 PM, Amos Jeffries squ...@treenet.co.nz wrote:

 2. The %o tag (message returned by external acl helper) is not
 url-unescaped, so the error message reads: bla+bla+bla.


 Uh-oh bug. Thank you.

I have created a bug report as well as a possible solution here:

http://bugs.squid-cache.org/show_bug.cgi?id=3557

The bug report hasn't even been confirmed, but it would be great if
this could be incorporated in the next release.

Best regards,

Alan


Re: [squid-users] Custom error message woes

2012-06-06 Thread Alan
On Wed, Jun 6, 2012 at 10:47 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 6/06/2012 8:15 p.m., Alan wrote:

 On Tue, May 29, 2012 at 7:39 PM, Amos Jeffriessqu...@treenet.co.nz
  wrote:

 2. The %o tag (message returned by external acl helper) is not
 url-unescaped, so the error message reads: bla+bla+bla.


 Uh-oh bug. Thank you.

 I have created a bug report as well as a possible solution here:

 http://bugs.squid-cache.org/show_bug.cgi?id=3557

 The bug report hasn't even been confirmed, but it would be great if
 this could be incorporated in the next release.


 The patch seems not to have attached to the report.

 Amos

That is true, I just attached it, please check again.

Alan


[squid-users] Custom error message woes

2012-05-29 Thread Alan
Hello,

I am implementing customized error messages based on this information:
http://wiki.squid-cache.org/Features/CustomErrors

I have some problems though:
1. Is there any way to find out the proxy server's ip address that
handled the request?
At the beginning I thought it was %I, but now I think this refers to
the http server, not the proxy server.  I am aware of %L, but I would
rather not use that.
2. The %o tag (message returned by external acl helper) is not
url-unescaped, so the error message reads: bla+bla+bla.

Best regards,

Alan


[squid-users] Authentication bug in 3.1.19 solved in 3.2.0.17

2012-05-25 Thread Alan
Hello,

I'm implementing a proxy server that authenticates users via radius,
and then based on the source ip, login and the destination, grants
access or not to the requested objects.

The relevant section of squid.conf is:

auth_param basic program /usr/lib/squid/squid_radius_auth -f
/etc/squid/radius.conf -t 5
auth_param basic children 5
auth_param basic realm Web Proxy
auth_param basic credentialsttl 1 hour
external_acl_type my_acl_type %SRC %LOGIN %DST /var/www/htdocs/acl.php
acl my_acl external my_acl_type
http_access allow my_acl
http_access deny all

Both IE and Firefox have the same behavior: they popup the
authentication prompt, then they can make requests for a while, and
randomly popup the authentication prompt again.  I type the same
username and password, and it works fine.
In Konqueror there is no popup, I guess it tries again one more time
with the last username and password before prompting the user.

A network capture reveals that the client is always sending the right
Proxy-Authentication header with its requests, but squid randomly
replies with a 407 status code, without even asking the radius server
(the authentication result is presumably still cached).

In squid 3.2.0.17 this problem is gone and I don't get the
authentication prompts anymore, but since it is labeled Beta instead
of Stable, I wonder if this can be solved in the 3.1 series.

Has anybody else been affected by this?

Best regards,

Alan


RE: [squid-users] DNS not resolving for one name

2011-11-27 Thread Alan Lehman
  On Mon, 21 Nov 2011 13:11:11 -0600, Alan Lehman wrote:
  I'm having trouble with Squid not resolving eldo.us or
 www.eldo.us
 
  The browser reports :
  Unable to determine IP address from host name www.eldo.us The DNS
  server returned:
  Server Failure: The name server was unable to process this query.

  In command line tests this displays as SERVFAIL from the DNS server.

 
  /etc/resolv.conf points to the local IP and to a dns server on
 another
  system on our network.
  nslookup on both DNS servers works properly.
 
  I've tried restarting squid and bind, but no change.
 
  squid-3.1.6
  bind-9.3.6
 
  Any ideas would be most appreciated.
 

  Interesting combo there. An IPv4-only domain being serviced by a CDN
 with IPv6 nameservers.

  Maybe 3.1.16 will help. There are a lot of stack changes later in the
  3.1 series.


  For testing, ensure that you have tried AAAA record lookups in your DNS
  servers, which is what Squid will be doing. Somehow one of them is
  presenting Squid with SERVFAIL responses instead of NXDOMAIN (on the
  AAAA lookup) or a usable IP (on the A lookup).

  Amos


Amos,
Thanks for the suggestions. I upgraded to 3.1.16, but no joy. Ran a few
nslookups on my DNS servers. AAAA lookups seem to work.

I'm wondering if the problem is that the browser always adds www. in front of 
eldo.us?  But it works if I bypass the proxy. Weird.


# nslookup eldo.us
Server: 172.16.4.59
Address:172.16.4.59#53

Non-authoritative answer:
Name:   eldo.us
Address: 72.47.224.77


# nslookup www.eldo.us
;; Got SERVFAIL reply from 172.16.4.59, trying next server
;; Got SERVFAIL reply from 172.16.4.59, trying next server
Server: 172.16.4.50
Address:172.16.4.50#53

** server can't find www.eldo.us: NXDOMAIN

# nslookup -q=aaaa eldo.us
Server: 172.16.4.59
Address:172.16.4.59#53

Non-authoritative answer:
*** Can't find eldo.us: No answer

Authoritative answers can be found from:
eldo.us
origin = ns1.mediatemple.net
mail addr = dnsadmin.mediatemple.net
serial = 2011092203
refresh = 10800
retry = 3600
expire = 1209600
minimum = 43200


# nslookup -q=aaaa www.eldo.us
;; Got SERVFAIL reply from 172.16.4.59, trying next server
;; Got SERVFAIL reply from 172.16.4.59, trying next server
Server: 172.16.4.50
Address:172.16.4.50#53

** server can't find www.eldo.us: NXDOMAIN





[squid-users] DNS not resolving for one name

2011-11-21 Thread Alan Lehman
I'm having trouble with Squid not resolving eldo.us or www.eldo.us

The browser reports :
Unable to determine IP address from host name www.eldo.us
The DNS server returned:
Server Failure: The name server was unable to process this query.

/etc/resolv.conf points to the local IP and to a dns server on another
system on our network.
nslookup on both DNS servers works properly.

I've tried restarting squid and bind, but no change.

squid-3.1.6
bind-9.3.6

Any ideas would be most appreciated.

Thanks,
Alan



RE: [squid-users] possible SOAP problem with 3.1.4

2010-08-12 Thread Alan Lehman
 From: Amos Jeffries [mailto:squ...@treenet.co.nz]
 Sent: Tuesday, August 10, 2010 6:48 PM
 To: Alan Lehman
 Cc: squid-users@squid-cache.org
 Subject: RE: [squid-users] possible SOAP problem with 3.1.4

 On Tue, 10 Aug 2010 09:14:05 -0500, Alan Lehman aleh...@gbateam.com
 wrote:
   From: Amos Jeffries [mailto:squ...@treenet.co.nz]
   Sent: Sunday, July 11, 2010 1:55 AM
   To: squid-users@squid-cache.org
   Subject: Re: [squid-users] possible SOAP problem with 3.1.4
  
   Alan Lehman wrote:
   We have particular application software license server for our
  office
   that is located behind a Squid proxy. It stopped working after
   upgrading
   Squid from 3.1.0.17 to 3.1.4. This server periodically goes to
 the
   software company's web site to verify the license is valid and
  upload
   user counts, etc. It appears to be some sort of SOAP
 application.
  The
   license server runs on a Windows server. From access.log:
  
   Running 3.1.0.17 (succeeds) -
   1278609155.802470 172.16.4.43 TCP_MISS/200 725 POST
   http://selectserver.bentley.com/bss/ws/Misc.asmx -
   DIRECT/64.90.235.78
   text/xml
   1278609157.482   1054 172.16.4.43 TCP_MISS/200 117679 POST
   http://selectserver.bentley.com/bss/ws/GatewayWS.asmx -
   DIRECT/64.90.235.78 text/xml
  
   Running 3.1.4 (fails) -
   1278607986.223   1138 172.16.4.43 TCP_MISS/500 838 POST
   http://selectserver.bentley.com/bss/ws/Misc.asmx -
   DIRECT/64.90.235.78
   application/soap+xml
   1278607987.128895 172.16.4.43 TCP_MISS/200 1178 POST
   http://selectserver.bentley.com/bss/ws/Misc.asmx -
   DIRECT/64.90.235.78
   text/xml
  
   I verified the situation by going back to 3.1.0.17 with the same
   config,
   whereupon it started working again. I tried adding cache deny
 for
   this
   domain but it didn't change anything.
  
   Any thoughts would be most appreciated.
   Thanks,
   Alan Lehman
   Don't know the problem.
   You are going to have to dig into the request/reply's a bit
 further
  to
   see what the problems is.
   The biggest difference between 3.1.0.17 and 3.1.4 is that
 HTTP/1.1
  is
   sent to the server by 3.1.4. It may be doing some broken magic,
 as
   evidenced by the different response type given to Squid now.
  
   Amos
   --
   Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.5
  
  
   So far I'm unable to determine a consistent pattern with
 Wireshark.
  Is there a way I can force 3.1.4 to use HTTP/1.0?
  
   Alan
 
  You can reverse the 1.1 enabling patch found here:
  http://www.squid-cache.org/Versions/v3/3.1/changesets/squid-3.1-
  9916.patch
 
  Amos
  --
  Please be using
 Current Stable Squid 2.7.STABLE9 or 3.1.5
 
 
  Using Wireshark, I recorded the following conversation between the
 license
  server and Squid-3.1.6. The capture with the patched version of squid
 is
  very similar. It appears to me that the license server is not
 responding
  correctly to Squid's 417, right?  But why is Squid 3.1.6 (unpatched)
  issuing the 417?

 Um, this is a little strange. The *server* is making these requests
 through Squid?
 The client-server model indicates the machine you are calling a server
 here is in fact a client.

 So, the workaround is to turn on the ignore_expect100 directive in
 Squid, which suppresses the 417 response going to clients.
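
 In squid.conf that is a single directive:

 ignore_expect100 on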

 
  POST http://selectserver.bentley.com/bss/ws/GatewayWS.asmx HTTP/1.1
  User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; MS Web Services Client
  Protocol 2.0.50727.3603)
  Content-Type: text/xml; charset=utf-8
  SOAPAction: "http://bentley.com/selectserver/webservices/GetGatewayLicense"
  Host: selectserver.bentley.com
  Content-Length: 564
  Expect: 100-continue
  Proxy-Connection: Keep-Alive
 
  HTTP/1.0 417 Expectation Failed
  Server: squid/3.1.6
  Mime-Version: 1.0
  Date: Tue, 10 Aug 2010 13:40:31 GMT
  Content-Type: text/html
  Content-Length: 3944
  X-Squid-Error: ERR_INVALID_REQ 0
  Vary: Accept-Language
  Content-Language: en
  X-Cache: MISS from proxy2.gbateam.com
  Via: 1.0 proxy2.gbateam.com (squid/3.1.6)
  Proxy-Connection: close
 
  <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN"
  "http://www.w3.org/TR/html4/strict.dtd">
  <html><head>
  </body></html>

 So far so good.

 
  <?xml version="1.0" encoding="utf-8"?><soap:Envelope
  xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns:xsd="http://www.w3.org/2001/XMLSchema"><soap:Body><GetGatewayLicense
  xmlns="http://bentley.com/selectserver/webservices/"><GatewayKey>062D0432571D1748859E50B1CD98B9DE</GatewayKey><GatewaySiteKeys><string>062D0432571D1748859E50B1CD98B9DE</string></GatewaySiteKeys><ComputerName>SUP1</ComputerName><SSHostName>selectserver.bentley.com</SSHostName></GetGatewayLicense></soap:Body></soap:Envelope>
 

 Um, where is that garbage coming from? The POST?

 Assuming so, that would make the client broken, and maybe a bug in Squid
 letting that brokenness through.

 This type of behaviour is what the ignore_expect100 can help with. Making
 Squid suppress

RE: [squid-users] possible SOAP problem with 3.1.4

2010-08-10 Thread Alan Lehman
  From: Amos Jeffries [mailto:squ...@treenet.co.nz]
  Sent: Sunday, July 11, 2010 1:55 AM
  To: squid-users@squid-cache.org
  Subject: Re: [squid-users] possible SOAP problem with 3.1.4
 
  Alan Lehman wrote:
  We have particular application software license server for our
 office
  that is located behind a Squid proxy. It stopped working after
  upgrading
  Squid from 3.1.0.17 to 3.1.4. This server periodically goes to the
  software company's web site to verify the license is valid and
 upload
  user counts, etc. It appears to be some sort of SOAP application.
 The
  license server runs on a Windows server. From access.log:
 
  Running 3.1.0.17 (succeeds) -
  1278609155.802470 172.16.4.43 TCP_MISS/200 725 POST
  http://selectserver.bentley.com/bss/ws/Misc.asmx -
  DIRECT/64.90.235.78
  text/xml
  1278609157.482   1054 172.16.4.43 TCP_MISS/200 117679 POST
  http://selectserver.bentley.com/bss/ws/GatewayWS.asmx -
  DIRECT/64.90.235.78 text/xml
 
  Running 3.1.4 (fails) -
  1278607986.223   1138 172.16.4.43 TCP_MISS/500 838 POST
  http://selectserver.bentley.com/bss/ws/Misc.asmx -
  DIRECT/64.90.235.78
  application/soap+xml
  1278607987.128895 172.16.4.43 TCP_MISS/200 1178 POST
  http://selectserver.bentley.com/bss/ws/Misc.asmx -
  DIRECT/64.90.235.78
  text/xml
 
  I verified the situation by going back to 3.1.0.17 with the same
  config,
  whereupon it started working again. I tried adding cache deny for
  this
  domain but it didn't change anything.
 
  Any thoughts would be most appreciated.
  Thanks,
  Alan Lehman
  Don't know the problem.
  You are going to have to dig into the request/reply's a bit further
 to
  see what the problems is.
  The biggest difference between 3.1.0.17 and 3.1.4 is that HTTP/1.1
 is
  sent to the server by 3.1.4. It may be doing some broken magic, as
  evidenced by the different response type given to Squid now.
 
  Amos
  --
  Please be using
 Current Stable Squid 2.7.STABLE9 or 3.1.5
 
 
  So far I'm unable to determine a consistent pattern with Wireshark.
 Is there a way I can force 3.1.4 to use HTTP/1.0?
 
  Alan

 You can reverse the 1.1 enabling patch found here:
 http://www.squid-cache.org/Versions/v3/3.1/changesets/squid-3.1-
 9916.patch

 Amos
 --
 Please be using
Current Stable Squid 2.7.STABLE9 or 3.1.5


Using Wireshark, I recorded the following conversation between the license 
server and Squid-3.1.6. The capture with the patched version of squid is very 
similar. It appears to me that the license server is not responding correctly 
to Squid's 417, right?  But why is Squid 3.1.6 (unpatched) issuing the 417?


POST http://selectserver.bentley.com/bss/ws/GatewayWS.asmx HTTP/1.1
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; MS Web Services Client Protocol 
2.0.50727.3603)
Content-Type: text/xml; charset=utf-8
SOAPAction: "http://bentley.com/selectserver/webservices/GetGatewayLicense"
Host: selectserver.bentley.com
Content-Length: 564
Expect: 100-continue
Proxy-Connection: Keep-Alive

HTTP/1.0 417 Expectation Failed
Server: squid/3.1.6
Mime-Version: 1.0
Date: Tue, 10 Aug 2010 13:40:31 GMT
Content-Type: text/html
Content-Length: 3944
X-Squid-Error: ERR_INVALID_REQ 0
Vary: Accept-Language
Content-Language: en
X-Cache: MISS from proxy2.gbateam.com
Via: 1.0 proxy2.gbateam.com (squid/3.1.6)
Proxy-Connection: close

<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN"
"http://www.w3.org/TR/html4/strict.dtd">
<html><head>
</body></html>

<?xml version="1.0" encoding="utf-8"?><soap:Envelope
xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/2001/XMLSchema"><soap:Body><GetGatewayLicense
xmlns="http://bentley.com/selectserver/webservices/"><GatewayKey>062D0432571D1748859E50B1CD98B9DE</GatewayKey><GatewaySiteKeys><string>062D0432571D1748859E50B1CD98B9DE</string></GatewaySiteKeys><ComputerName>SUP1</ComputerName><SSHostName>selectserver.bentley.com</SSHostName></GetGatewayLicense></soap:Body></soap:Envelope>

POST http://selectserver.bentley.com/bss/ws/usagelogging.asmx HTTP/1.0
User-Agent: BSIlm/0.9.0.0
Host: selectserver.bentley.com
Content-Length: 0
Proxy-Connection: Keep-Alive
Pragma: no-cache

HTTP/1.0 500 Internal Server Error
Date: Tue, 10 Aug 2010 13:40:33 GMT
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET
X-AspNet-Version: 2.0.50727
Cache-Control: private
Content-Type: application/soap+xml; charset=utf-8
Content-Length: 481
X-Cache: MISS from proxy2.gbateam.com
Via: 1.0 proxy2.gbateam.com (squid/3.1.6)
Proxy-Connection: keep-alive

<?xml version="1.0" encoding="utf-8"?><soap:Envelope
xmlns:soap="http://www.w3.org/2003/05/soap-envelope"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/2001/XMLSchema"><soap:Body><soap:Fault><soap:Code><soap:Value>soap:Receiver</soap:Value></soap:Code><soap:Reason><soap:Text xml:lang="en">Server was unable to process request. ---> Root element is missing.</soap:Text></soap:Reason><soap:Detail /></soap:Fault></soap:Body></soap:Envelope>

POST http://selectserver.bentley.com/bss

RE: [squid-users] possible SOAP problem with 3.1.4

2010-07-23 Thread Alan Lehman

-Original Message-
 From: Amos Jeffries [mailto:squ...@treenet.co.nz]
 Sent: Sunday, July 11, 2010 1:55 AM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] possible SOAP problem with 3.1.4

 Alan Lehman wrote:
  We have particular application software license server for our office
  that is located behind a Squid proxy. It stopped working after
 upgrading
  Squid from 3.1.0.17 to 3.1.4. This server periodically goes to the
  software company's web site to verify the license is valid and upload
  user counts, etc. It appears to be some sort of SOAP application. The
  license server runs on a Windows server. From access.log:
 
  Running 3.1.0.17 (succeeds) -
  1278609155.802470 172.16.4.43 TCP_MISS/200 725 POST
  http://selectserver.bentley.com/bss/ws/Misc.asmx -
 DIRECT/64.90.235.78
  text/xml
  1278609157.482   1054 172.16.4.43 TCP_MISS/200 117679 POST
  http://selectserver.bentley.com/bss/ws/GatewayWS.asmx -
  DIRECT/64.90.235.78 text/xml
 
  Running 3.1.4 (fails) -
  1278607986.223   1138 172.16.4.43 TCP_MISS/500 838 POST
  http://selectserver.bentley.com/bss/ws/Misc.asmx -
 DIRECT/64.90.235.78
  application/soap+xml
  1278607987.128895 172.16.4.43 TCP_MISS/200 1178 POST
  http://selectserver.bentley.com/bss/ws/Misc.asmx -
 DIRECT/64.90.235.78
  text/xml
 
  I verified the situation by going back to 3.1.0.17 with the same
 config,
  whereupon it started working again. I tried adding cache deny for
 this
  domain but it didn't change anything.
 
  Any thoughts would be most appreciated.
  Thanks,
  Alan Lehman

 Don't know the problem.
 You are going to have to dig into the request/reply's a bit further to
 see what the problems is.
 The biggest difference between 3.1.0.17 and 3.1.4 is that HTTP/1.1 is
 sent to the server by 3.1.4. It may be doing some broken magic, as
 evidenced by the different response type given to Squid now.

 Amos
 --
 Please be using
Current Stable Squid 2.7.STABLE9 or 3.1.5


So far I'm unable to determine a consistent pattern with Wireshark. Is there a 
way I can force 3.1.4 to use HTTP/1.0?

Alan


[squid-users] possible SOAP problem with 3.1.4

2010-07-08 Thread Alan Lehman
We have a particular application software license server for our office
that is located behind a Squid proxy. It stopped working after upgrading
Squid from 3.1.0.17 to 3.1.4. This server periodically goes to the
software company's web site to verify the license is valid and upload
user counts, etc. It appears to be some sort of SOAP application. The
license server runs on a Windows server. From access.log:

Running 3.1.0.17 (succeeds) -
1278609155.802470 172.16.4.43 TCP_MISS/200 725 POST
http://selectserver.bentley.com/bss/ws/Misc.asmx - DIRECT/64.90.235.78
text/xml
1278609157.482   1054 172.16.4.43 TCP_MISS/200 117679 POST
http://selectserver.bentley.com/bss/ws/GatewayWS.asmx -
DIRECT/64.90.235.78 text/xml

Running 3.1.4 (fails) -
1278607986.223   1138 172.16.4.43 TCP_MISS/500 838 POST
http://selectserver.bentley.com/bss/ws/Misc.asmx - DIRECT/64.90.235.78
application/soap+xml
1278607987.128895 172.16.4.43 TCP_MISS/200 1178 POST
http://selectserver.bentley.com/bss/ws/Misc.asmx - DIRECT/64.90.235.78
text/xml

I verified the situation by going back to 3.1.0.17 with the same config,
whereupon it started working again. I tried adding cache deny for this
domain but it didn't change anything.

Any thoughts would be most appreciated.
Thanks,
Alan Lehman



RE: [squid-users] 403 Forbidden from apache server

2010-06-09 Thread Alan Lehman
 sön 2010-06-06 klockan 22:49 -0500 skrev Alan Lehman:
  Trying to access http://www.kswheat.com/ via Squid 3.1.4 as proxy, I get
  403 Forbidden, You don't have permission to access / on this server.
  The web site loads normally if I bypass Squid. It claims to be running
  Apache 1.3.41.

 Try configuring Squid with

 forwarded_for delete

 or alternatively depending on Squid version

 request_header_access X-Forwarded-For deny all

 Regards
 Henrik

Henrik,
Thanks for the reply. forwarded_for delete does not make any difference 
however.

Alan



[squid-users] 403 Forbidden from apache server

2010-06-06 Thread Alan Lehman
Trying to access http://www.kswheat.com/ via Squid 3.1.4 as proxy, I get
403 Forbidden, You don't have permission to access / on this server.
The web site loads normally if I bypass Squid. It claims to be running
Apache 1.3.41.

access.log:
1275882379.014131 x.x.x.x TCP_MISS/403 756 GET
http://www.kswheat.com/ - DIRECT/67.212.164.118 text/html

Nothing in cache.log

Thanks for any advice on this.
Alan



[squid-users] **xs**Javascript problem

2010-03-26 Thread Alan Lehman
My browser is hanging when trying to access a javascript page over https
via Squid proxy (squid-3.1.0.17). When I click a button with the
following code, the page won't load. 

<input name="Begin" type="button" onClick="window.location =
'test.cfm?packetid=_2W90WFKBY&testid=_0QW0X6A3L&userid=_2W90WFCGZ'"
value="Begin">

It works if I bypass the proxy. 
Behavior is the same with Firefox and IE8.

Any thoughts on particular issues with Javascript I might be looking
for, or known issues with Squid and javascript?

Thanks,
Alan


RE: [squid-users] 3.1.0.3 aborting

2009-04-02 Thread Alan Lehman
 Hi Alan,
 
 Alan Lehman wrote:
  squid-3.1.0.3 is periodically aborting with 'signal 6'. This system
 is
  running both regular and reverse proxy functions. Any ideas?  What
is
  lost DNS error info?
 
 I think the DNS lookup failed.
 
 
  Thanks,
  Alan
 
 
  /var/log/messages:
  Mar 31 08:19:00 proxy3 squid[31952]: Squid Parent: child process
8977
  exited due to signal 6 with status 0
 
There are 1-2 related bug fixes in newer squid-3.1 releases
 (squid-3.1.0.5, squid-3.1.0.6). If the problem exists on the latest
 squid-3.1 release, try to file a bug report in squid bugzilla.
 
 Regards,
  Christos
 

Christos,
I upgraded to 3.1.0.6 and the problem seems to be corrected. Thank you.


RE: [squid-users] forward and reverse through one system

2009-02-22 Thread Alan Lehman
 Amos Jeffries wrote:
  Alan Lehman wrote:
  Specific to your loop-back problem:
 
  You need to adjust your reverse-proxy configuration to block the
  CONNECT
  method being used to access the peers.
  Sorry, but can you elaborate on this?
 
   The internal net -> forward proxy step of the chain uses a CONNECT
   request.
  
     cache_peer BLAH deny CONNECT
  
   is needed to force internal net -> forward proxy -> accelerator(self).
   Otherwise requests like CONNECT owa:443 will be optimized as internal
   net -> accelerator -> OWA, even though OWA does not handle CONNECT.
  
   Blocking CONNECT to the peer forces config down to the forward-proxy
   config, which _is_ allowed to do the looping-back bit and de-tunnel
   the CONNECT.
  
   As far as I can see, cache_peer doesn't allow a deny parameter, so I
   tried the following and get "the requested URL cannot be retrieved".
   At least it's not just hanging:
 
  cache_peer blah
 
  acl OWA dstdomain owa.domain.com
  http_access allow OWA
  miss_access allow OWA
  acl CONNECT method CONNECT
  cache_peer_access owa-server deny CONNECT
  cache_peer_access owa-server allow OWA
  never_direct allow OWA
 
  [normal forward proxy config below]
 
  Thanks,
  Alan
 
  With the configuration above, the logs look like this:
  access.log:
  1235235368.181  0 172.16.7.203 TCP_MISS/503 0 CONNECT
  owa.domain.com:443 - NONE/- -
  1235235368.428163 172.16.7.203 TCP_MISS/304 326 GET
  http://www.squid-cache.org/Artwork/SN.png - DIRECT/12.160.37.9 -
 
  cache.log:
  -END SSL SESSION PARAMETERS-
  2009/02/21 10:56:59| Failed to select source for '[null_entry]'
  2009/02/21 10:56:59|   always_direct = 0
  2009/02/21 10:56:59|never_direct = 1
  2009/02/21 10:56:59|timedout = 0
 
  '[null_entry]' is curious. Shouldn't that be URL for OWA?
 
  Playing with this same configuration, if I authenticate to OWA first
  via another proxy, then switch to this one, it will keep working
 until
  I restart the browser.
 
  Is there some other way to accomplish deny CONNECT?
 
  Drop the never_direct entry. It's cutting the loopback from
 happening.
 
 No, forget that. Add !CONNECT to it instead.
 

Perfect.  Thank you!
The apparently-working 3.1.0.5 configuration now looks like this:

#OWA config
https_port blah connection-auth=off
cache_peer blah name=owa-server
acl OWA dstdomain owa.domain.com
http_access allow OWA
miss_access allow OWA
cache_peer_access owa-server allow OWA
cache_peer_access owa-server deny all
acl CONNECT method CONNECT
never_direct allow OWA !CONNECT

#RPC over https config
https_port blah
cache_peer blah name=rpc-server
acl RPC dstdomain rpc.domain.com
http_access allow RPC
miss_access allow RPC
cache_peer_access rpc-server allow RPC
never_direct allow RPC

[normal forward proxy config below]





RE: [squid-users] forward and reverse through one system

2009-02-21 Thread Alan Lehman
   Specific to your loop-back problem:
  
   You need to adjust your reverse-proxy configuration to block the
   CONNECT
   method being used to access the peers.
  
   Sorry, but can you elaborate on this?
 
 
   The internal net -> forward proxy step of the chain uses a CONNECT
   request.
  
     cache_peer BLAH deny CONNECT
  
   is needed to force internal net -> forward proxy -> accelerator(self).
  
   Otherwise requests like CONNECT owa:443 will be optimized as internal
   net -> accelerator -> OWA, even though OWA does not handle CONNECT.
  
   Blocking CONNECT to the peer forces config down to the forward-proxy
   config, which _is_ allowed to do the looping-back bit and de-tunnel
   the CONNECT.
  
  As far as I can see, cache_peer doesn't allow a deny parameter, so I
  tried the following and get "the requested URL cannot be retrieved".
  At least it's not just hanging:
 least it's not just hanging:
 
 cache_peer blah
 
 acl OWA dstdomain owa.domain.com
 http_access allow OWA
 miss_access allow OWA
 acl CONNECT method CONNECT
 cache_peer_access owa-server deny CONNECT
 cache_peer_access owa-server allow OWA
 never_direct allow OWA
 
 [normal forward proxy config below]
 
 Thanks,
 Alan

With the configuration above, the logs look like this:
access.log:
1235235368.181  0 172.16.7.203 TCP_MISS/503 0 CONNECT owa.domain.com:443 - 
NONE/- -
1235235368.428163 172.16.7.203 TCP_MISS/304 326 GET 
http://www.squid-cache.org/Artwork/SN.png - DIRECT/12.160.37.9 -

cache.log:
-END SSL SESSION PARAMETERS-
2009/02/21 10:56:59| Failed to select source for '[null_entry]'
2009/02/21 10:56:59|   always_direct = 0
2009/02/21 10:56:59|never_direct = 1
2009/02/21 10:56:59|timedout = 0

'[null_entry]' is curious. Shouldn't that be URL for OWA?

Playing with this same configuration, if I authenticate to OWA first via 
another proxy, then switch to this one, it will keep working until I restart 
the browser.

Is there some other way to accomplish deny CONNECT? 

Thanks,
Alan





RE: [squid-users] forward and reverse through one system

2009-02-15 Thread Alan Lehman
  Specific to your loop-back problem:
 
  You need to adjust your reverse-proxy configuration to block the
  CONNECT
  method being used to access the peers.
 
  Sorry, but can you elaborate on this?
 
 
  The internal net -> forward proxy step of the chain uses a CONNECT
  request.
  
    cache_peer BLAH deny CONNECT
  
  is needed to force internal net -> forward proxy -> accelerator(self).
  
  Otherwise requests like CONNECT owa:443 will be optimized as internal
  net -> accelerator -> OWA, even though OWA does not handle CONNECT.
  
  Blocking CONNECT to the peer forces config down to the forward-proxy
  config, which _is_ allowed to do the looping-back bit and de-tunnel
  the CONNECT.
  

As far as I can see, cache_peer doesn't allow a deny parameter, so I
tried the following and get "the requested URL cannot be retrieved". At
least it's not just hanging:

cache_peer blah

acl OWA dstdomain owa.domain.com
http_access allow OWA
miss_access allow OWA
acl CONNECT method CONNECT
cache_peer_access owa-server deny CONNECT
cache_peer_access owa-server allow OWA
never_direct allow OWA

[normal forward proxy config below]

Thanks,
Alan

Alan Lehman, PE
Associate
 aleh...@gbateam.com

creating remarkable solutions for a higher quality of life
http://www.gbateam.com

9801 Renner Boulevard | Lenexa, KS 66219-9745
913.577.8829 direct | 816.210.8785 mobile | 913.577.8264 fax




RE: [squid-users] forward and reverse through one system

2009-02-08 Thread Alan Lehman
Amos,
See responses to your questions below.
Thanks.


  I have one instance of squid configured as a forward web proxy and
  accelerator for OWA (per the wiki). In order for users to avoid changing
  their proxy settings, I need the forward proxy to be able to access OWA
  going out and back in as follows:
 
  Host on internal net -> forward proxy -> accelerator -> OWA server on
  internal net
 
  It seems like this should work. When I try to access OWA from an
  internal host, the browser hangs and the following eventually
appears in
  access.log:
 
  1233516965.141  12567 [internal host IP] TCP_MISS/000 0 CONNECT
  owa.domain.com:443 - FIRST_UP_PARENT/[owa server IP] -
 
  Any ideas would be most appreciated.
 
  Thanks,
  Alan
 
 
 (Assuming you have squid-2.6 or later)

3.1.0.3
 
 The basic config:
 
 You can multi-mode squid. Ensure that the reverse-proxy settings are
all
 at the top of the squid.conf and any forward-proxy settings are
following
 at the bottom.
 Also, the "http_access deny all" that finishes the reverse-proxy
 config gets removed, so that on non-reversed requests squid can drop
 through and run the forward-proxy settings.

Yup. That's the way it is. My complete config is posted on bug 2572.
 
 Specific to your loop-back problem:
 
 You need to adjust your reverse-proxy configuration to block the
CONNECT
 method being used to access the peers.

Sorry, but can you elaborate on this?

 Then check that the domain IP Squid resolves owa.domain.com to is its
own
 listening https_port.

It does: a.b.c.96 
 
 Amos
 






[squid-users] forward and reverse through one system

2009-02-01 Thread Alan Lehman
I have one instance of squid configured as a forward web proxy and
accelerator for OWA (per the wiki). In order for users to avoid changing
their proxy settings, I need the forward proxy to be able to access OWA
going out and back in as follows:

Host on internal net -> forward proxy -> accelerator -> OWA server on
internal net

It seems like this should work. When I try to access OWA from an
internal host, the browser hangs and the following eventually appears in
access.log:

1233516965.141  12567 [internal host IP] TCP_MISS/000 0 CONNECT
owa.domain.com:443 - FIRST_UP_PARENT/[owa server IP] -

Any ideas would be most appreciated.

Thanks,
Alan


RE: [squid-users] OWA accelerator authentication weirdness

2009-01-16 Thread Alan Lehman
 Yes. Multiple authentication methods, triggered from multiple sources,

 going via multiple paths can be confusing.
 
 Squid auth_param elided, which leaves:
 
 A user name and password are being requested by ...
 == basic challenge by ISA.
 
 Enter user name and password for ...
 == integrated/NTLM challenge by ISA.
 
 
 I'm now thinking we have two distinct configurations for Squid:
 
 Basic Auth (only) passed back
   cache_peer ... login=PASS connection-auth=off
 
 NTLM Auth (only) passed back:
   cache_peer ... connection-auth=on
 
 
 Which appear to be non-compatible auth methods at present.
 What happens if you re-enable the connection-auth on https_port and 
 remove the login=PASS from cache_peer?
 
 Amos
 

OWA is back to the previous double login with Firefox. Activesync PDA
won't accept login.


RE: [squid-users] OWA accelerator authentication weirdness

2009-01-14 Thread Alan Lehman
  That's terrific that it works, but I'm not sure I understand why.
 Does connection-auth=off disable pass-through of NTLM? My
 understanding of the Activesync devices is that they require NTLM.
 
 
 Yes it disables pass-thru for NTLM.
 
 Which for you blocks that first NTLM challenge (direct from the OWA?),
 and leaves the second (from your Squid auth_* setup?) to go through.
 
 Amos

But I have all of my auth_* commented out. 

Before adding connection-auth=off to my https_port config, Firefox would give 
me two authentication prompts. First: Enter user name and password for ..., 
which would not work. Then only after I hit CANCEL, I would get A user name 
and password are being requested by ..., which does work.

With connection-auth=off, or with Windows integrated authentication disabled 
on the OWA server, Firefox would give me only the 2nd dialog, and it works. But 
Activesync devices don't work with Windows integrated disabled. 

With Basic authentication and Windows integrated authentication enabled on 
the OWA server and connection-auth=off, everything works like it should. 

It's so confusing.

Alan

--
Please note our new email and website address!
Alan Lehman, PE
Associate
 mailto:aleh...@gbateam.com
creating remarkable solutions
for a higher quality of life
http://www.gbateam.com
9801 Renner Boulevard
Lenexa, KS 66219-9745
913.577.8829 direct
816.210.8785 mobile
913.577.8264 fax

 


RE: [squid-users] OWA accelerator authentication weirdness

2009-01-13 Thread Alan Lehman
  Try some of the settings to disable pass-thru on the specific
 ports
  and/or peer:
 
  http://wiki.squid-cache.org/Features/ConnPin
 
  My config pretty much follows the wiki example for OWA accelerator.
  Squid 3.1.0.3. I'm using the same port for OWA and Activesync. I
 just
  added connection-auth=off on https_port and removed all auth_param
  lines, and that took care of my problem.
  Before I go recommending this as a general fix in 3.1, are BOTH of
  those
  changes needed for it to work?
 
  I know there are people using Squid+OWA in multi-mode who may need
 auth
  for other things. Can we get away with just connection-auth=off on
  the
  port?
 
 
  Amos
 
  The auth_param lines don't seem to make any difference. It works for
 me with them in.
 
 
 Great. I'll get the wiki updated.
 Thanks for your help finding this and testing the solution.

That's terrific that it works, but I'm not sure I understand why. Does 
connection-auth=off disable pass-through of NTLM? My understanding of the 
Activesync devices is that they require NTLM.

Alan

--
Please note our new email and website address!
Alan Lehman, PE
Associate
 mailto:aleh...@gbateam.com
creating remarkable solutions
for a higher quality of life
http://www.gbateam.com
9801 Renner Boulevard
Lenexa, KS 66219-9745
913.577.8829 direct
816.210.8785 mobile
913.577.8264 fax

 


RE: [squid-users] OWA accelerator authentication weirdness

2009-01-10 Thread Alan Lehman
  The order in which your auth_param lines are configured can alter the
  first authentication method tried. You will need to look at the
  debugging trace in cache.log to see which is generating which
  question.
 
  Amos
 
  Only basic is enabled:
  auth_param basic children 5
  auth_param basic realm Squid proxy-caching web server
  auth_param basic credentialsttl 2 hours
 
  Do I need to select a program for basic?
 
  found in cache.log:
  2009/01/08 14:38:19.713| CacheManager::registerAction: registering
 legacy basicauthenticator
  2009/01/08 14:38:19.713| CacheManager::findAction: looking for action
 basicauthenticator
  2009/01/08 14:38:19.713| CacheManager::registerAction: registered
 basicauthenticator
  2009/01/08 14:41:22.010| CacheManager::registerAction: registering
 legacy basicauthenticator
  2009/01/08 14:41:22.010| CacheManager::registerAction: registered
 basicauthenticator
 
  The OWA web server has both basic and Windows Integrated
 Authentication enabled. If I disable windows integrated, OWA works
 fine, but I need activesync also, which does not work without windows
 integrated enabled.
 
  Thanks,
  Alan
 
 Um, further on my other email.
 Try some of the settings to disable pass-thru on the specific ports
 and/or peer:
 
 http://wiki.squid-cache.org/Features/ConnPin


My config pretty much follows the wiki example for OWA accelerator. Squid 
3.1.0.3. I'm using the same port for OWA and Activesync. I just added 
connection-auth=off on https_port and removed all auth_param lines, and that 
took care of my problem.
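
For reference, the relevant line looks something like this (a sketch with a
placeholder address and certificate path, not my exact config):

https_port 192.0.2.1:443 cert=/path/to/owa.pem defaultsite=owa.example.com connection-auth=off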

Thanks!




RE: [squid-users] OWA accelerator authentication weirdness

2009-01-10 Thread Alan Lehman
  The order in which your auth_param lines are configured can alter the
  first authentication method tried. You will need to look at the
  debugging trace in cache.log to see which is generating which
  question.
  Amos
  Only basic is enabled:
  auth_param basic children 5
  auth_param basic realm Squid proxy-caching web server
  auth_param basic credentialsttl 2 hours
 
  Do I need to select a program for basic?
 
  found in cache.log:
  2009/01/08 14:38:19.713| CacheManager::registerAction: registering
  legacy basicauthenticator
  2009/01/08 14:38:19.713| CacheManager::findAction: looking for
 action
  basicauthenticator
  2009/01/08 14:38:19.713| CacheManager::registerAction: registered
  basicauthenticator
  2009/01/08 14:41:22.010| CacheManager::registerAction: registering
  legacy basicauthenticator
  2009/01/08 14:41:22.010| CacheManager::registerAction: registered
  basicauthenticator
  The OWA web server has both basic and Windows Integrated
  Authentication enabled. If I disable windows integrated, OWA
 works
  fine, but I need activesync also, which does not work without
 windows
  integrated enabled.
  Thanks,
  Alan
  Um, further on my other email.
  Try some of the settings to disable pass-thru on the specific ports
  and/or peer:
 
  http://wiki.squid-cache.org/Features/ConnPin
 
 
  My config pretty much follows the wiki example for OWA accelerator.
 Squid 3.1.0.3. I'm using the same port for OWA and Activesync. I just
 added connection-auth=off on https_port and removed all auth_param
 lines, and that took care of my problem.
 
 
 Before I go recommending this as a general fix in 3.1, are BOTH of
 those
 changes needed for it to work?
 
 I know there are people using Squid+OWA in multi-mode who may need auth
 for other things. Can we get away with just connection-auth=off on
 the
 port?
 
 
 Amos

The auth_param lines don't seem to make any difference. It works for me with 
them in. 



RE: [squid-users] OWA accelerator authentication weirdness

2009-01-08 Thread Alan Lehman
 
 The order in which your auth_param lines are configured can alter the
 first authentication method tried. You will need to look at the
 debugging trace in cache.log to see which is generating which question.
 
 Amos

Only basic is enabled:
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours

Do I need to select a program for basic?
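
A complete basic setup would also name a helper program; a minimal sketch,
assuming the NCSA helper and a hypothetical password file:

auth_param basic program /usr/lib/squid/ncsa_auth /etc/squid/passwd
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours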

found in cache.log:
2009/01/08 14:38:19.713| CacheManager::registerAction: registering legacy 
basicauthenticator
2009/01/08 14:38:19.713| CacheManager::findAction: looking for action 
basicauthenticator
2009/01/08 14:38:19.713| CacheManager::registerAction: registered 
basicauthenticator
2009/01/08 14:41:22.010| CacheManager::registerAction: registering legacy 
basicauthenticator
2009/01/08 14:41:22.010| CacheManager::registerAction: registered 
basicauthenticator

The OWA web server has both basic and Windows Integrated Authentication 
enabled. If I disable windows integrated, OWA works fine, but I need 
activesync also, which does not work without windows integrated enabled.

Thanks,
Alan


[squid-users] OWA accelerator authentication weirdness

2009-01-07 Thread Alan Lehman
Running 3.0 my accelerator for OWA works fine. 
The browser user authentication prompt is A user name and password are
being requested by ...

The same configuration on 2.6STABLE6 or 3.1.0.3:
The browser authentication prompt is now different: "Enter user name and
password for ..."
When I attempt to log in with the new dialog, the dialog is cleared
and redisplayed with the following in the log:

Tue Jan  6 23:00:03 2009.084  2 75.81.99.177 TCP_MISS/401 448 GET
https://abc.com/exchange/ - FIRST_UP_PARENT/owa-server text/html
Tue Jan  6 23:00:11 2009.349  2 75.81.99.177 TCP_MISS/401 661 GET
https://abc.com/exchange/ - FIRST_UP_PARENT/owa-server text/html

If I select CANCEL in the dialog, it changes back to  A user name and
password are being requested by ... and I can log in normally:

Tue Jan  6 23:00:42 2009.523  5 75.81.99.177 TCP_MISS/200 3578 GET
https://abc.com/exchange/alehman/? - FIRST_UP_PARENT/owa-server
text/html
etc.

Any ideas on what might be different on 3.0 compared to the other two? 

My config is basically as follows:
https_port ip:443 cert=/usr/share/ssl/owa-gbateam/owa-gbateam.pem
defaultsite=abc.com
cache_peer 172.16.4.64 parent 443 0 no-digest no-query originserver
login=PASS ssl sslflags=DONT_VERIFY_PEER
sslcert=/usr/share/ssl/exchange/exch-owa.pem name=owa-server

Thanks,
Alan

--
Please note our new email and website address!
Alan Lehman, PE
Associate
 mailto:aleh...@gbateam.com
creating remarkable solutions
for a higher quality of life
http://www.gbateam.com
9801 Renner Boulevard
Lenexa, KS 66219-9745
913.577.8829 direct
816.210.8785 mobile
913.577.8264 fax

 


RE: [squid-users] Extra Squid process?

2009-01-02 Thread Alan Lehman
On Wed, 29 Mar 2006 04:58:10 -0800, Henrik Nordstrom said:

With Squid you will see the following ports in use:

a) The TCP ports specified by http_port (and/or https_port) in LISTEN
state, and any client connections open to these..

b) UDP icp_port, snmp_port and htcp_port

c) One additional random UDP port for DNS

d) Random TCP connections over the loopback (127.0.0.1) interface to
each helper, all in CONNECTED state.


You should NOT see random TCP ports in LISTEN state. If you do then you
have probably set http_port 0 in your squid.conf..

Regards
Henrik
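
A quick way to verify is to list the TCP sockets in LISTEN state together
with the owning process; a sketch, assuming Linux with net-tools (run as
root):

netstat -tlnp | grep squid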


Re: [squid-users] Is it possible to have squid as do Proxy and OWA/RPCoHTTPS accelerator?

2009-01-01 Thread Alan Lehman
So I have OWA and RPCoHTTPS accelerator working on 3.0, with forward
proxy on a separate instance of 2.6. Now I'm building a new Redhat box
and I would like to handle both my normal LAN proxy and reverse proxy
for OWA, RPCoHTTPS and Activesync on one instance of Squid. It sounded
like 2.6 should be able to handle the chunked encoding and NTLM auth
required for Activesync. Can I/should I do all this on one instance of
Squid? Am I asking too much?

The latest Redhat comes with 2.6STABLE6, which I realize is rather
old. But I decided to forge ahead and try it. 

I am directing two different public domains to the same Exchange server.
This basic configuration works on 3.0. Now trying to add it to the 2.6
forward proxy config, sometimes Squid seems to be redirecting forward
proxy requests to my OWA server, and I get:

The following error was encountered:
* Socket Failure 
The system returned:
(99) Cannot assign requested address
Squid is unable to create a TCP socket, presumably due to excessive
load. Please retry your request.


Config follows...

#OWA
https_port domain1-owa:443 cert=/usr/share/ssl/combined.crt
key=/usr/share/ssl/owa.key defaultsite=owa.domain1.com
https_port domain2-owa:443 cert=/usr/share/ssl/domain2/domain2-owa.pem
defaultsite=owa.domain2.com
cache_peer ip_of_exchange parent 443 0 no-query originserver login=PASS
ssl sslflags=DONT_VERIFY_PEER
sslcert=/usr/share/ssl/exchange/exch-owa.pem name=owa-server
acl OWA dstdomain owa.domain1.com
acl OWA dstdomain owa.domain2.com
cache_peer_access owa-server allow OWA
never_direct allow OWA
http_access allow OWA

#rpc_http
https_port domain1-rpc:443 cert=/usr/share/ssl/rpc/rpc.pem
defaultsite=rpc.domain1.com
https_port domain2-rpc:443 cert=/usr/share/ssl/domain2/domain2-rpc.pem
defaultsite=rpc.domain2.com
cache_peer ip_of_exchange parent 443 0 no-query originserver login=PASS
ssl sslflags=DONT_VERIFY_PEER
sslcert=/usr/share/ssl/exchange/exch-owa.pem name=rpc-server
acl RPC dstdomain rpc.domain1.com
acl RPC dstdomain rpc.domain2.com
cache_peer_access rpc-server allow RPC
never_direct allow RPC
http_access allow RPC

[typical stand-alone forward http proxy configuration follows]

Any thoughts would be most appreciated.

Thanks
Alan Lehman



[squid-users] Header Stripping of Header type other

2008-10-16 Thread WRIGHT Alan [UK]
Hi Folks,
I have had a look at the wiki and the docs and need a bit more help.

I am trying to look for and strip a request header X-MSISDN:

I could use an ACL with "request_header_access Other deny", but this will
strip some other headers too, which is not acceptable.

Is there a way to name custom header fields for stripping?

Regards

Alan




RE: [squid-users] Is it possible to have squid as do Proxy and OWA/RPCoHTTPS accelerator?

2008-06-15 Thread Alan Lehman
I am trying to do the same thing. OWA works, but so far no joy with RPCoHTTP. 
Do I have to do something in OL to make it accept the certificate? The certs 
are purchased from godaddy.com. For each, I appended the bundled 
gd_intermediate to the domain cert.

Also, in the example config for OWA, I am confused by the following:

acl OWA dstdomain owa_hostname
cache_peer_access owa_hostname allow OWA

Doesn't the 2nd line just grant access from owa_hostname to owa_hostname ??
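
The first argument to cache_peer_access is the peer's name= label from the
cache_peer line, not a destination host; the acl supplies the destination.
A clearer sketch, with hypothetical names:

cache_peer 192.0.2.10 parent 80 0 originserver name=owa-peer
acl OWA dstdomain owa.example.com
cache_peer_access owa-peer allow OWA
cache_peer_access owa-peer deny all

The wiki example happens to reuse owa_hostname for both roles, which is what
makes it read oddly.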


My current config (which works for OWA, but not RPCoHTTP):

extension_methods RPC_IN_DATA RPC_OUT_DATA

https_port public_ip_for_owa:443 cert=/usr/share/ssl/owa/combined.crt 
key=/usr/share/ssl/owa/owa.key defaultsite=owa.tld.com

https_port public_ip_for_rpc:443 cert=/usr/share/ssl/rpc/combined.crt 
key=/usr/share/ssl/rpc/rpc.key defaultsite=rpc.tld.com

cache_peer ip_of_exchange parent 80 0 no-query originserver 
front-end-https=auto login=PASS

acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563
acl CONNECT method CONNECT

acl OWA dstdomain   owa.tld.com
acl RPC dstdomain   rpc.tld.com

http_access allow manager localhost
http_access allow OWA
http_access allow RPC
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny to_localhost

http_access allow localhost
http_access deny all

http_reply_access allow all
icp_access deny all

miss_access allow OWA
miss_access allow RPC
miss_access deny all

cache_peer_access ip_of_exchange allow OWA
cache_peer_access ip_of_exchange allow RPC
cache_peer_access ip_of_exchange deny all

never_direct allow OWA
never_direct allow RPC


Thanks again,
Alan Lehman


 -Original Message-
 From: Odhiambo Washington [mailto:[EMAIL PROTECTED]
 Sent: Monday, June 02, 2008 11:41 AM
 To: Squid users
 Subject: Re: [squid-users] Is it possible to have squid as do Proxy and
 OWA/RPCoHTTPS accelerator?
 
 On Mon, Jun 2, 2008 at 7:27 PM, Henrik Nordstrom
 [EMAIL PROTECTED] wrote:
  On mån, 2008-06-02 at 13:41 +0300, Odhiambo Washington wrote:
  (actually, this is supposed to be the only entry for cache_peer I am
  going to have?)
 
  If you only have one server, and that server is only talking http
 then
  yes there is only a single cache_peer..
 
 Understood.
 
  That has worked. It also required a PEM passphrase. I hope this is not
  supposed to be another problem. This ssl stuff!
 
  You can configure the password in squid.conf if the PEM key is
  encrypted, or easily decrypt it with the openssl rsa command.
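 
  A sketch of that decryption step, with hypothetical file names; it
  prompts once for the passphrase:
 
  openssl rsa -in encrypted.key -out decrypted.key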
 
 Understood as well.
 
  In my case, I don't have a certificate for the external hostname,
  which brings me back to the confusing issue regarding the
 certificate:
  I can make a self-signed certificate for the external hostname. Not
 a
  problem. However, does this mean I really don't need the internal
  certifcate Exchange is using?
 
  Correct.
 
 Pooh! That was so confusing:-)
 
  Suppose:
 
  My Squid host is publicly known as mail.odhiambo.COM (IP of 1.2.3.4)
  My Exchange server is named msexch.msexch.odhiambo.BIZ (IP of
 192.168.0.26)
 
  Given that both OWA and RPCoHTTPS are directed at these...
 
  What values should I use for the following variables (from the
 wiki):
 
  (a) owa_hostname?
 
  In https_port defaultsite you should use mail.odhiambo.COM as this is
  what the clients are expected to connect to.
 
  (b) ip_of_owa_server?
 
  The ip of your exchange/owa server.
 
  (c) rpcohttp.url.com?
 
  Ignore. That example uses a setup with more Exchange servers, where OWA
  is running on a separate server from Exchange.
 
  (d) the_exchange_server?
 
  Ignore as above.
 
  From there, I believe I will only get stuck at the ssl certificates
  step, which is where I am still a bit confused.
 
  Since you are not going to use a real certificate then issue yourself
 a
  self-signed one using OpenSSL.
 
   openssl req -new -x509 -days 1 -nodes -out
 mail.odhiambo.COM_selfsigned.pem -keyout mail.odhiambo.COM_key.pem
 
 Everything is all clear now.
 
 Will find good time to test this out and see how well it goes.
 
 Thank you very much, Amos and Henrik! That was quite some
 hand-holding. I really appreciate it.
 
 --
 Best regards,
 Odhiambo WASHINGTON,
 Nairobi,KE
 +254733744121/+254722743223


RE: [squid-users] rpc over http problems

2008-06-08 Thread Alan Lehman
Finally getting back to this. Thanks for the earlier responses. 
I changed cache_peer to use front-end-https=auto, but no change in behavior.

This may be a stupid question. I'm wondering if my problem is due to the fact 
that I'm using the same squid as an accelerator for OWA to the same Exchange 
box:

https_port a.b.c.d:443 cert=/usr/share/ssl/combined.crt 
key=/usr/share/ssl/owa.key defaultsite=owa.xx.com
https_port a.b.c.e:443 cert=/usr/share/ssl/rpc.pem defaultsite=rpc.xx.com

cache_peer ip_of_exchange parent 80 0 no-query originserver front-end-https=on 
login=PASS
cache_peer ip_of_exchange parent 80 0 no-query originserver login=PASS 
front-end-https=auto name=exchange_rpc


The OWA config works and I'm trying to add rpc over http. OL will not connect 
and nothing shows up in access.log. Running tcpdump on the external port on the 
squid box, I see incoming connection attempts from the client, but squid seems 
to be ignoring them. Nothing is passed to the exchange server.

Thanks,
Alan


-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: Monday, May 26, 2008 1:04 PM
To: Alan Lehman
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] rpc over http problems

On mån, 2008-05-26 at 12:48 -0500, Alan Lehman wrote:

 cache_peer ip_of_exchange parent 80 0 no-query originserver login=PASS 
 ssl sslcert=/usr/share/ssl/rpc.pem name=exchange_rpc

This tells Squid that it should use SSL encryption to connect to the peer on 
port 80. Looks wrong to me.

Remove the ssl and sslcert options, and replace them with front-end-https=auto 
instead. Should match your requirements better..

Regards
Henrik


RE: [squid-users] rpc over http problems

2008-06-08 Thread Alan Lehman
1) are they going to one of the IP:port squid is listening on?
Yes, so far as I know. I'll verify again with the exchange admin.

2) is the firewall blocking/altering them at TCP-level?
Firewall passes all traffic to port 443 straight through to squid.
Blocks everything else.

3) what does cache.log have to say about the attempts?
Nothing.

Thanks,
Alan 
George Butler Associates, Inc. 
Creating Remarkable Solutions
for a Higher Quality of Life 

Alan Lehman, P.E. 
Electrical/Critical Facilities Group
One Renner Ridge
9801 Renner Boulevard
Lenexa, KS 66219-9745
T. 913.577.8829
M. 816.210.8785
F. 913.577.8264
[EMAIL PROTECTED]
www.gbutler.com




[squid-users] mystery traffic

2008-05-15 Thread Alan Lehman
While diagnosing an unrelated network problem, I ran tcpdump on my Squid
(2.5-STABLE3) box. I found the following pattern repeating several times
per second. I don't know how long this has been going on, but at least
several days. If I kill Squid, it stops.

x.x.x.99 = DMZ network port on Squid system 
x.x.x.20 = Web server (IIS) on my DMZ

08:02:14.092144 x.x.x.20.https > x.x.x.99.42362: P 1759:1805(46) ack
1797 win 64233 <nop,nop,timestamp 663266 2095770651> (DF)
08:02:14.092186 x.x.x.99.42362 > x.x.x.20.https: . ack 1805 win 63712
<nop,nop,timestamp 2095770651 663266> (DF)
08:02:14.092351 x.x.x.20.https > x.x.x.99.42359: P 850:896(46) ack 795
win 64233 <nop,nop,timestamp 663266 2095770651> (DF)
08:02:14.092376 x.x.x.99.42359 > x.x.x.20.https: . ack 896 win 63712
<nop,nop,timestamp 2095770651 663266> (DF)
08:02:14.259571 x.x.x.99.42362 > x.x.x.20.https: P 1797:2005(208) ack
1805 win 63712 <nop,nop,timestamp 2095770668 663266> (DF)
08:02:14.259862 x.x.x.99.42359 > x.x.x.20.https: P 795:1017(222) ack 896
win 63712 <nop,nop,timestamp 2095770668 663266> (DF)
08:02:14.260994 x.x.x.20.https > x.x.x.99.42362: P 1805:2220(415) ack
2005 win 65535 <nop,nop,timestamp 663269 2095770668> (DF)
08:02:14.261031 x.x.x.99.42362 > x.x.x.20.https: . ack 2220 win 63712
<nop,nop,timestamp 2095770668 663269> (DF)
08:02:14.450432 x.x.x.20.https > x.x.x.99.42359: . ack 1017 win 65535
<nop,nop,timestamp 663271 2095770668> (DF)
08:02:14.450868 x.x.x.20.https > x.x.x.99.42359: P 896:1298(402) ack
1017 win 65535 <nop,nop,timestamp 663271 2095770668> (DF)
08:02:14.450890 x.x.x.99.42359 > x.x.x.20.https: . ack 1298 win 63712
<nop,nop,timestamp 2095770687 663271> (DF)
08:02:14.581353 x.x.x.99.42362 > x.x.x.20.https: P 2005:2291(286) ack
2220 win 63712 <nop,nop,timestamp 2095770700 663269> (DF)
08:02:14.581737 x.x.x.20.https > x.x.x.99.42362: P 2220:2266(46) ack
2291 win 65249 <nop,nop,timestamp 663272 2095770700> (DF)
08:02:14.581778 x.x.x.99.42362 > x.x.x.20.https: . ack 2266 win 63712
<nop,nop,timestamp 2095770700 663272> (DF)
08:02:14.755502 x.x.x.99.42362 > x.x.x.20.https: P 2291:2513(222) ack
2266 win 63712 <nop,nop,timestamp 2095770717 663272> (DF)
08:02:14.755917 x.x.x.99.42359 > x.x.x.20.https: P 1017:1303(286) ack
1298 win 63712 <nop,nop,timestamp 2095770718 663271> (DF)
08:02:14.756272 x.x.x.20.https > x.x.x.99.42359: P 1298:1344(46) ack
1303 win 65249 <nop,nop,timestamp 663273 2095770718> (DF)
08:02:14.756315 x.x.x.99.42359 > x.x.x.20.https: . ack 1344 win 63712
<nop,nop,timestamp 2095770718 663273> (DF)
08:02:14.887740 x.x.x.20.https > x.x.x.99.42362: . ack 2513 win 65027
<nop,nop,timestamp 663275 2095770717> (DF)

I have the following in squid.conf:
acl Local dst x.x.x.0/24
no_cache deny Local

It appears Squid is trying to access something on the web server, but I
don't know why. There is only very occasional traffic in access.log for
x.x.x.20. Any ideas would be most appreciated.

Alan Lehman
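
One way to tie those sockets back to the process holding them is lsof; a
sketch, run as root, substituting the real web-server address for the
placeholder:

lsof -nP -i TCP@x.x.x.20:443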



[squid-users] Re: cannot auth win 2003 users with squid ldap_auth

2008-02-21 Thread Alan Walker

Hi Sheldon,

When you run squid_ldap_auth by itself, it should sit there with no prompt. At
this point you would type a username and password (separated by a space, such
as "administrator password") and if it exists (or at least if the search is
successful), you should see "OK". If the search did not find that
username/password you see "ERR", so you may have it already there.

Your details look basically OK. I found that when I had the -D details wrong I
would get messages such as "credentials invalid".

Alan.
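
A manual test run looks something like this (hypothetical server and DNs;
type the credentials on stdin and expect OK or ERR back):

/usr/lib/squid/squid_ldap_auth -b "dc=example,dc=com" -D "cn=proxy,dc=example,dc=com" -w secret -f "(sAMAccountName=%s)" ldap.example.com
administrator password
OK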







RE: [squid-users] Transparent Proxy not working in 3.0 STable1

2008-02-20 Thread WRIGHT Alan
Totally correct Amos

I rebuilt with netfilter only and works great, thanks

Alan


-Original Message-
From: Amos Jeffries [mailto:[EMAIL PROTECTED] 
Sent: 14 February 2008 22:04
To: WRIGHT Alan
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Transparent Proxy not working in 3.0 STable1

 Hi Folks,

 I have installed squid 3.0 stable 1 and have configured it for
 transparent mode.

 Somehow it doesn't seem to work correctly.

 When it runs, it shows that it is running in transparent mode, but
then
 when HTTP requests hit the box it gives the WARNING: Transparent
 proxying not supported. The web browser shows an error page but from
the
 squid itself (Error: HTTP 400 Bad Request - Invalid URL.).

 When I configured the build, I used the tproxy and the netfilter
options
 for transparent proxying as I wasn't sure what one I needed.

At present only one transparency option will work and build. The tproxy
configure option is for kernels patched with the TPROXY patch from
Balabit.
The netfilter option is for standard kernels using iptables NAT
REDIRECT.

You will need to pick the one that applies to you and re-build squid.
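
For the plain NAT interception case the rebuild looks roughly like this (a
sketch; add back whatever other configure options your build already uses):

make distclean
./configure --enable-linux-netfilter
make
make install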


 Does anyone have a clue why it will not run in transparent mode.

 I am pretty sure my iptables is OK

It probably is, but when configured with multiple transparency
options squid prefers the most transparent one (TPROXY is the only
completely transparent option).

It sounds like you need to drop the tproxy.

Amos


 Here is what the trace shows:

 No.  Time       Source            Destination       Protocol  Info
  20  12.102354  192.168.26.128    192.168.130.250   HTTP      GET / HTTP/1.1

 Frame 20 (493 bytes on wire, 493 bytes captured)
 Ethernet II, Src: 00:0c:29:e8:3d:07, Dst: 00:0c:29:01:ce:bc
 Internet Protocol, Src Addr: 192.168.26.128 (192.168.26.128), Dst
Addr:
 192.168.130.250 (192.168.130.250)
 Transmission Control Protocol, Src Port: 44418 (44418), Dst Port: http
 (80), Seq: 1, Ack: 1, Len: 427
 Hypertext Transfer Protocol
 GET / HTTP/1.1\r\n
 Host: 192.168.130.250\r\n
 User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.0.1)
 Gecko/20060313 Fedora/1.5.0.1-9 Firefox/1.5.0.1 pango-text\r\n
 Accept:

text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plai
 n;q=0.8,image/png,*/*;q=0.5\r\n
 Accept-Language: en-us,en;q=0.5\r\n
 Accept-Encoding: gzip,deflate\r\n
 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\n
 Keep-Alive: 300\r\n
 Connection: keep-alive\r\n
 \r\n

 No.  Time       Source            Destination       Protocol  Info
  22  12.157274  192.168.130.250   192.168.26.128    HTTP      HTTP/1.0 400 Bad Request (text/html)[Short Frame]

 Frame 22 (1514 bytes on wire, 500 bytes captured)
 Ethernet II, Src: 00:0c:29:01:ce:bc, Dst: 00:0c:29:e8:3d:07
 Internet Protocol, Src Addr: 192.168.130.250 (192.168.130.250), Dst
 Addr: 192.168.26.128 (192.168.26.128)
 Transmission Control Protocol, Src Port: http (80), Dst Port: 44418
 (44418), Seq: 1, Ack: 428, Len: 1448
 Hypertext Transfer Protocol
 HTTP/1.0 400 Bad Request\r\n
 Server: squid/3.0.STABLE1\r\n
 Mime-Version: 1.0\r\n
 Date: Thu, 14 Feb 2008 04:44:37 GMT\r\n
 Content-Type: text/html\r\n
 Content-Length: 1447\r\n
 Expires: Thu, 14 Feb 2008 04:44:37 GMT\r\n
 X-Squid-Error: ERR_INVALID_URL 0\r\n
 X-Cache: MISS from localhost.localdomain\r\n
 Via: 1.0 localhost.localdomain (squid/3.0.STABLE1)\r\n
 Proxy-Connection: close\r\n
 \r\n

 TIA

 Alan










RE: [squid-users] Transparent Proxy not working in 3.0 Stable1

2008-02-15 Thread WRIGHT Alan
Problem solved; clearly I used the wrong option for the build.

I did a make clean and rebuilt with --enable-linux-netfilter only, and it
works fine.

-Original Message-
From: WRIGHT Alan 
Sent: 14 February 2008 13:47
To: squid-users@squid-cache.org
Subject: [squid-users] Transparent Proxy not working in 3.0 Stable1


Hi Folks,

I have installed squid 3.0 stable 1 and have configured it for
transparent mode.

Somehow it doesn't seem to work correctly.

When it runs, it shows that it is running in transparent mode, but then
when HTTP requests hit the box it gives the WARNING: Transparent
proxying not supported. The web browser shows an error page but from the
squid itself (Error: HTTP 400 Bad Request - Invalid URL.). 

When I configured the build, I used the tproxy and the netfilter options
for transparent proxying as I wasn't sure what one I needed.

Does anyone have a clue why it will not run in transparent mode.

I am pretty sure my iptables is OK

Here is what the trace shows:

No.  Time       Source            Destination       Protocol  Info
 20  12.102354  192.168.26.128    192.168.130.250   HTTP      GET / HTTP/1.1

Frame 20 (493 bytes on wire, 493 bytes captured)
Ethernet II, Src: 00:0c:29:e8:3d:07, Dst: 00:0c:29:01:ce:bc
Internet Protocol, Src Addr: 192.168.26.128 (192.168.26.128), Dst Addr:
192.168.130.250 (192.168.130.250)
Transmission Control Protocol, Src Port: 44418 (44418), Dst Port: http
(80), Seq: 1, Ack: 1, Len: 427
Hypertext Transfer Protocol
GET / HTTP/1.1\r\n
Host: 192.168.130.250\r\n
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.0.1)
Gecko/20060313 Fedora/1.5.0.1-9 Firefox/1.5.0.1 pango-text\r\n
Accept:
text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plai
n;q=0.8,image/png,*/*;q=0.5\r\n
Accept-Language: en-us,en;q=0.5\r\n
Accept-Encoding: gzip,deflate\r\n
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\n
Keep-Alive: 300\r\n
Connection: keep-alive\r\n
\r\n

No.  Time       Source            Destination       Protocol  Info
 22  12.157274  192.168.130.250   192.168.26.128    HTTP      HTTP/1.0 400 Bad Request (text/html)[Short Frame]

Frame 22 (1514 bytes on wire, 500 bytes captured)
Ethernet II, Src: 00:0c:29:01:ce:bc, Dst: 00:0c:29:e8:3d:07
Internet Protocol, Src Addr: 192.168.130.250 (192.168.130.250), Dst
Addr: 192.168.26.128 (192.168.26.128)
Transmission Control Protocol, Src Port: http (80), Dst Port: 44418
(44418), Seq: 1, Ack: 428, Len: 1448
Hypertext Transfer Protocol
HTTP/1.0 400 Bad Request\r\n
Server: squid/3.0.STABLE1\r\n
Mime-Version: 1.0\r\n
Date: Thu, 14 Feb 2008 04:44:37 GMT\r\n
Content-Type: text/html\r\n
Content-Length: 1447\r\n
Expires: Thu, 14 Feb 2008 04:44:37 GMT\r\n
X-Squid-Error: ERR_INVALID_URL 0\r\n
X-Cache: MISS from localhost.localdomain\r\n
Via: 1.0 localhost.localdomain (squid/3.0.STABLE1)\r\n
Proxy-Connection: close\r\n
\r\n

TIA

Alan

 





[squid-users] Transparent Proxy not working in 3.0 STable1

2008-02-14 Thread WRIGHT Alan
Hi Folks,

I have installed squid 3.0 stable 1 and have configured it for
transparent mode.

Somehow it doesn't seem to work correctly.

When it runs, it shows that it is running in transparent mode, but then
when HTTP requests hit the box it gives the "WARNING: Transparent
proxying not supported" message. The web browser shows an error page, but
it comes from squid itself (Error: HTTP 400 Bad Request - Invalid URL).

When I configured the build, I used both the tproxy and the netfilter
options for transparent proxying, as I wasn't sure which one I needed.

Does anyone have a clue why it will not run in transparent mode?

I am pretty sure my iptables is OK.
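
For reference, the usual NAT interception rule looks like this; a sketch
assuming the LAN interface is eth0 and squid listens on port 3128:

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128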

Here is what the trace shows:

No.  Time       Source            Destination       Protocol  Info
 20  12.102354  192.168.26.128    192.168.130.250   HTTP      GET / HTTP/1.1

Frame 20 (493 bytes on wire, 493 bytes captured)
Ethernet II, Src: 00:0c:29:e8:3d:07, Dst: 00:0c:29:01:ce:bc
Internet Protocol, Src Addr: 192.168.26.128 (192.168.26.128), Dst Addr:
192.168.130.250 (192.168.130.250)
Transmission Control Protocol, Src Port: 44418 (44418), Dst Port: http
(80), Seq: 1, Ack: 1, Len: 427
Hypertext Transfer Protocol
GET / HTTP/1.1\r\n
Host: 192.168.130.250\r\n
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.0.1)
Gecko/20060313 Fedora/1.5.0.1-9 Firefox/1.5.0.1 pango-text\r\n
Accept:
text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plai
n;q=0.8,image/png,*/*;q=0.5\r\n
Accept-Language: en-us,en;q=0.5\r\n
Accept-Encoding: gzip,deflate\r\n
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\n
Keep-Alive: 300\r\n
Connection: keep-alive\r\n
\r\n

No.  Time       Source            Destination       Protocol  Info
 22  12.157274  192.168.130.250   192.168.26.128    HTTP      HTTP/1.0 400 Bad Request (text/html)[Short Frame]

Frame 22 (1514 bytes on wire, 500 bytes captured)
Ethernet II, Src: 00:0c:29:01:ce:bc, Dst: 00:0c:29:e8:3d:07
Internet Protocol, Src Addr: 192.168.130.250 (192.168.130.250), Dst
Addr: 192.168.26.128 (192.168.26.128)
Transmission Control Protocol, Src Port: http (80), Dst Port: 44418
(44418), Seq: 1, Ack: 428, Len: 1448
Hypertext Transfer Protocol
HTTP/1.0 400 Bad Request\r\n
Server: squid/3.0.STABLE1\r\n
Mime-Version: 1.0\r\n
Date: Thu, 14 Feb 2008 04:44:37 GMT\r\n
Content-Type: text/html\r\n
Content-Length: 1447\r\n
Expires: Thu, 14 Feb 2008 04:44:37 GMT\r\n
X-Squid-Error: ERR_INVALID_URL 0\r\n
X-Cache: MISS from localhost.localdomain\r\n
Via: 1.0 localhost.localdomain (squid/3.0.STABLE1)\r\n
Proxy-Connection: close\r\n
\r\n

TIA

Alan

 








[squid-users] Cannot open HTTP Port on 3.0.STABLE1

2008-02-04 Thread Alan Strassberg
Squid 3.0R1 fails in daemon mode when binding to a privileged port.
Works fine on ports > 1023.

There is nothing running on the ports as verified with lsof -i and
netstat -a

Debug (squid -X) shows this:
2008/02/04 11:13:24.293| acl_access::containsPURGE: invoked for
'http_access allow manager localhost'
2008/02/04 11:13:24.293| acl_access::containsPURGE: can't create tempAcl
2008/02/04 11:13:24.293| acl_access::containsPURGE:   returning false
2008/02/04 11:13:24.293| leave_suid: PID 8500 called
2008/02/04 11:13:24.293| leave_suid: PID 8500 giving up root, becoming 'squid'
2008/02/04 11:13:24.293| command-line -X overrides: ALL,1

What is "acl_access::containsPURGE: can't create tempAcl" trying to
tell me to fix?

Incidentally it runs fine on ports > 1024 in non-daemon mode (squid -N)

This is FreeBSD 6.3-STABLE


RE: [squid-users] Question on 302 problem

2007-03-14 Thread WRIGHT Alan
Thanks Henrik,
Changed the helper to concurrency 0 and the 302 is now working.

Only issue now is that the Location value is empty.

However, when I test my script from the DOS prompt in squid/sbin I see the 
following response from my script on the cmd line:

302:http://www.fancygear.co.uk

Any ideas?

Thanks again

Alan 

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: 12 March 2007 15:08
To: WRIGHT Alan
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Question on 302 problem

mån 2007-03-12 klockan 14:59 +0100 skrev WRIGHT Alan:

 2007/03/09 12:38:34| helperHandleRead: unexpected reply on channel 302 
 from url_rewriter #5 '302:www.rachalan.f2s.com'
  
 In the cache.log file.
 
 Any ideas what the problem is?

Looks like you have set url_rewrite_concurrency with a helper not supporting 
this modified helper protocol.

Regards
Henrik
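
The wire-format difference, with a hypothetical URL: without
url_rewrite_concurrency the helper replies with just the result, while with
concurrency enabled every reply must start by echoing the channel number
squid sent with the request.

# url_rewrite_concurrency 0 (or unset)
302:http://www.example.com/

# url_rewrite_concurrency > 0 (channel id first)
0 302:http://www.example.com/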



RE: [squid-users] Question on 302 problem

2007-03-14 Thread WRIGHT Alan
 Exactly; however, I have checked my script and it seems to run slightly
differently than when running the script at the cmd prompt.

I think it may have something to do with the way the helper script is
called by squid, which seems to run with c:\squid\sbin\ as the working
directory.

Here is the script (below).

The file foundurl.txt is never populated when it runs via squid, which is
why it seems to send the 302 with an empty URL. But if I run the same
script from the cmd line then it populates foundurl.txt.

Unfortunately, when it comes to Perl and scripting generally I am a
total Noob.

BEGIN {$|=1};
($sec,$min,$hour,$mday,$mon,$year,$wday,$yday) = gmtime;


# NOTE: the field indices below appear to assume a leading channel-ID on
# each input line (the concurrent helper protocol); with
# url_rewrite_concurrency 0 squid sends the URL first, so these offsets
# shift by one.
while (defined($line = <STDIN>)) {
  @hdr = split (/\s+/, $line);
  $hdr[2] =~ s/\/-//;
  $url = $hdr[1];
  $user_ip = $hdr[2];

  # look up this client's profile category; a relative path resolves
  # against squid's working directory, not the script's
  open (prof, "user_profile.txt");
  print prof $user_ip;   # no effect: prof is open for reading only
  while (<prof>) {
    @user_prof = split(/\s+/, $_) if /$user_ip/;
  }
  close (prof);

  $ran_num = rand (5);
  $ran_num = int ($ran_num);
  $cat = $user_prof[1];

  # pick a URL from the profile file for that category
  open (selurl, "profile_$cat.txt");
  while (<selurl>) {
    $sel_url = $_ if /$ran_num/;
  }
  close (selurl);

  substr($sel_url,0,2) = "";

  ($sec1,$min1,$hour1,$mday1,$mon1,$year1,$wday1,$yday1) = gmtime;

  # debug trace of the chosen URL (same relative-path caveat as above)
  open (info, ">foundurl.txt");
  print info $sel_url;
  close (info);

  if ($min1 >= $min+1 && $sec1 > $sec) {
    print "302:$sel_url\n";
    ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday) = gmtime;
  }
  else {
    print "$url\n";
  }
}

-Original Message-
From: Henrik Nordstrom [mailto:[EMAIL PROTECTED] 
Sent: 14 March 2007 12:06
To: WRIGHT Alan
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] Question on 302 problem

ons 2007-03-14 klockan 11:01 +0100 skrev WRIGHT Alan:
 Thanks Henrik,
 Changed the helper to concurrency 0 and the 302 is now working.
 
 Only issue now is that the Location value is empty.

You mean that Squid is sending a 302 redirect with an empty Location:
header? Should not happen, and does not happen in my tests..

Regards
Henrik



[squid-users] Question on 302 problem

2007-03-12 Thread WRIGHT Alan
Folks,
I have used the procedure for 302 redirecting that the FAQ defines.
 
However, I get this:
 
2007/03/09 12:38:24| helperHandleRead: unexpected reply on channel 302
from url_rewriter #2 '302:www.rachalan.f2s.com'
2007/03/09 12:38:32| helperHandleRead: unexpected reply on channel 302
from url_rewriter #3 '302:www.rachalan.f2s.com'
2007/03/09 12:38:33| helperHandleRead: unexpected reply on channel 302
from url_rewriter #4 '302:www.rachalan.f2s.com'
2007/03/09 12:38:34| helperHandleRead: unexpected reply on channel 302
from url_rewriter #5 '302:www.rachalan.f2s.com'
 
In the cache.log file.

Any ideas what the problem is?

Have I forgotten to change something in squid.conf?

TIA

Alan



[squid-users] Squid - Dans - squid

2007-03-06 Thread Alan Araujo

Hi,

I installed a Squid -> DansGuardian -> Squid chain and the access log is
not showing the user logon name and source IP.

When we enable NTLM and ACLs we receive access denied.

Squid conf 1:

cache_effective_user squid
cache_effective_group squid
http_port 3128
cache_dir aufs /cache 28000 16 256
pid_filename /var/run/squid/squid.pid
cache_peer 127.0.0.1 parent 8080 0 no-query login=*:nopassword
forwarded_for on # pass client IPs on to DansGuardian
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
#access_log /var/log/squid/access_barcelona.log squid
#useragent_log /var/log/squid/useragent.log # enable only to audit browser User-Agents
log_mime_hdrs on
cache_store_log none
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern .   0   20% 4320
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow all
http_reply_access allow all
icp_access allow all
coredump_dir /usr/local/squid/var/cache
dns_nameservers 10.12.0.33 10.12.0.37


Squid NTLM conf 2 :
cache_effective_user squid
cache_effective_group squid
http_port 127.0.0.1:3030
cache_dir null /dev/null
pid_filename /var/run/squid/squid-ntlm.pid
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache

# FAKE AUTH

auth_param ntlm program /usr/local/squid/libexec/fakeauth_auth
auth_param ntlm children 30

# NTLM

#auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
#auth_param ntlm children 30

access_log /var/log/squid/access_barcelona.log squid
cache_store_log none

refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern .   0   20% 4320


# look up the user's groups in AD
external_acl_type nt_group children=30 %LOGIN
/usr/local/squid/libexec/wbinfo_group.pl

# ACL for authenticated users
acl usuarios_autenticados proxy_auth REQUIRED

# access ACLs

acl webbloqueada external nt_group WEBBLOQUEADA
acl webliberada external nt_group WEBLIBERADA

# test ACL for SESRE

acl sesre external nt_group SESRE




acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563
acl Safe_ports port 80  # http
acl Safe_ports port 21  # 

RE: [squid-users] Errors when Starting Squid

2007-03-02 Thread WRIGHT Alan
 Hi Amos,
I gave up on the windows port, installed again on FC5 and then ran it
again.

Once I had set up permissions on the scripts and modified squid.conf to
suit the script name etc., Squid started and is now running perfectly with
a basic rewrite script; all it does is copy STDIN to STDOUT.

Now I need to understand what's coming in on STDIN and manipulate it :-)

If I get some time I will try to work out why the windows port won't run
the .plx file.

If I open a cmd line in windows and type the script name, it runs
perfectly, so I am not entirely sure why it's not running when called by
squid.

Thanks

Alan

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: 02 March 2007 14:54
To: WRIGHT Alan
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Errors when Starting Squid

 Folks,
 When I start squid, i get the following errors:

 helperOpenServers: Starting 10 'sqred.plx' processes
 2007/03/01 21:45:50| ipcCreate: CHILD: c:/sqred/sqred.plx: (8) Exec 
 format error

snip many duplicates

 I have noticed that it is due to this command in the .conf file

 url_rewrite_program c:/sqred/sqred.plx

 When this line is commented, the proxy works fine.

 Does anyone have an idea as to what the Exec Format error is?

 Thanks

 Alan


This is squid attempting to start its child processes. As it does so,
Windows returns the "Exec format error" and squid abandons the startup
procedure.

The Windows Server documentation indicates this error is given out by
windows when a binary file cannot be executed, usually on corrupt
binaries.

Check that the c:/sqred/sqred.plx file is actually in a win32-acceptable
executable format. The windows command line should be able to execute it
or give you a better description of the problem.

Amos





RE: [squid-users] Errors when Starting Squid

2007-03-02 Thread WRIGHT Alan
Yes Guido, you're right, I missed that in the release notes :-O

Thanks for the pointer

Regards
Alan 

-Original Message-
From: Guido Serassio [mailto:[EMAIL PROTECTED] 
Sent: 02 March 2007 16:27
To: WRIGHT Alan; squid-users@squid-cache.org
Subject: Re: [squid-users] Errors when Starting Squid

Hi,

At 22.52 01/03/2007, WRIGHT Alan wrote:
Folks,
When I start squid, i get the following errors:

helperOpenServers: Starting 10 'sqred.plx' processes
2007/03/01 21:45:50| ipcCreate: CHILD: c:/sqred/sqred.plx: (8) Exec 
format error

You should read the Windows Compatibility Notes in the release notes of
Squid 2.6:
On Windows you must also specify the command interpreter needed for the
execution of scripts.
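
In squid.conf that means naming the interpreter ahead of the script; a
sketch, assuming a hypothetical Perl install path:

url_rewrite_program c:/perl/bin/perl.exe c:/sqred/sqred.plx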

Regards

Guido



-

Guido Serassio
Acme Consulting S.r.l. - Microsoft Certified Partner
Via Lucia Savarino, 1   10098 - Rivoli (TO) - ITALY
Tel. : +39.011.9530135  Fax. : +39.011.9781115
Email: [EMAIL PROTECTED]
WWW: http://www.acmeconsulting.it/



