RE: [squid-users] Squid Stops Responding Sporadically

2008-11-14 Thread Marcel Grandemange
 Hi,
   could you please give us a few more details? Squid version (squid -v),
 operating system, whether you got a binary package or rolled your own,
 and some info about the setup (forward proxy? Reverse? Transparent?).
 It's hard to tell from what I read so far, unless I missed something.


He said 3.0.STABLE10, with aufs problems as well, so he's probably running
on some form of BSD.

The rest of the questions still need answering though. I'm particularly
interested in the configure options used to build, confirmation of the OS,
the rest of the backtrace from the core, and whether it's a vanilla Squid
or patched.

Amos

Okay, here's more info.

The system I'm running on:
FreeBSD thavinci.za.net 7.0-RELEASE FreeBSD 7.0-RELEASE #0: Mon Aug 25
16:03:40 SAST 2008
[EMAIL PROTECTED]:/usr/src/sys/amd64/compile/thavinci  amd64

Build options
Squid Cache: Version 3.0.STABLE10
configure options:  '--with-default-user=squid' '--bindir=/usr/local/sbin'
'--sbindir=/usr/local/sbin' '--datadir=/usr/local/etc/squid'
'--libexecdir=/usr/local/libexec/squid' '--localstatedir=/usr/local/squid'
'--sysconfdir=/usr/local/etc/squid' '--enable-removal-policies=lru heap'
'--disable-linux-netfilter' '--disable-linux-tproxy' '--disable-epoll'
'--enable-auth=basic ntlm digest' '--enable-basic-auth-helpers=DB NCSA PAM
MSNT SMB squid_radius_auth YP' '--enable-digest-auth-helpers=password'
'--enable-external-acl-helpers=ip_user session unix_group wbinfo_group'
'--enable-ntlm-auth-helpers=SMB' '--with-pthreads' '--enable-storeio=ufs
diskd null aufs' '--enable-delay-pools' '--enable-ssl' '--with-openssl=/usr'
'--enable-wccpv2' '--disable-ident-lookups' '--enable-arp-acl'
'--enable-ipfw-transparent' '--enable-kqueue' '--with-large-files'
'--enable-stacktraces' '--enable-err-languages=Armenian Azerbaijani
Bulgarian Catalan Czech Danish  Dutch English Estonian Finnish French German
Greek  Hebrew Hungarian Italian Japanese Korean Lithuanian  Polish
Portuguese Romanian Russian-1251 Russian-koi8-r  Serbian Simplify_Chinese
Slovak Spanish Swedish  Traditional_Chinese Turkish Ukrainian-1251
Ukrainian-koi8-u Ukrainian-utf8' '--enable-default-err-language=templates'
'--prefix=/usr/local' '--mandir=/usr/local/man' '--infodir=/usr/local/info/'
'--build=amd64-portbld-freebsd7.0' 'build_alias=amd64-portbld-freebsd7.0'
'CC=cc' 'CFLAGS=-O2 -fno-strict-aliasing -pipe  -I/usr/include -g' 'LDFLAGS=
-rpath=/usr/lib:/usr/local/lib -L/usr/lib' 'CPPFLAGS=' 'CXX=c++'
'CXXFLAGS=-O2 -fno-strict-aliasing -pipe -I/usr/include -g'

Built it from FreeBSD ports.
It is a pretty simple setup...
Config available from...

http://www.thavinci.za.net/Downloads/squid-new.conf




 On 11/13/08, Marcel Grandemange [EMAIL PROTECTED] wrote:
Good day.


I'm wondering if anybody else has experienced this.
Since I've upgraded to squid 3.0.STABLE10 the proxy continually stops
responding.
Firefox will say something along the lines of "the proxy isn't set up to
accept connections".

I hit refresh and it loads the page perfectly, then on the next page it
loads only half the pictures, and so on.


This only appeared in STABLE10 and it isn't the link, as I tested with a
neighbouring cache running STABLE9 with no issues.

On further investigation, the system log showed the following:

Nov 13 19:37:21 thavinci kernel: pid 66367 (squid), uid 100: exited on
signal 6 (core dumped)
Nov 13 19:37:21 thavinci squid[66118]: Squid Parent: child process 66367
exited due to signal 6
Nov 13 19:37:24 thavinci squid[66118]: Squid Parent: child process 66370
started

 Digging further, cache.log revealed the following. Just before Squid
 crashes, each time there is this entry:

 2008/11/14 00:03:55| assertion failed: client_side_reply.cc:1843:
 reqofs <= HTTP_REQBUF_SZ || flags.headersSent


Also, for those interested: a while back I had issues with the performance
of Squid. Objects retrieved out of the cache never went faster than 400K;
it turned out that when I changed my cache_dir from aufs to ufs this was
resolved, and now objects come down at full speed on the LAN.

 I am pretty desperate to get this problem solved as it is affecting us
 big time.





 --
 /kinkie





Re: [squid-users] large memory squid

2008-11-14 Thread Matus UHLAR - fantomas
  john Moylan wrote:
  I am about to take ownership of a new 2CPU, 4 core server with 32GB of
  RAM - I intend to add the server to my squid reverse proxy farm. My
  site is approximately 300GB including archives and I think 32GB of
  memory alone will suffice as cache for small, hot objects without
  necessitating any additional disk cache.
 
  Are there any potential bottlenecks if I set the disk cache to
  something like 500MB and cache_mem to something like 22GB? I'm using
  CentOS 5's Squid 2.6.
 
  I have a full set of monitoring scripts as per
  http://www.squid-cache.org/~wessels/squid-rrd/ (thanks again) and of
  course I will be able to benchmark this myself once I have the box -
  but any tips in advance would be appreciated.

 2008/11/13 Amos Jeffries [EMAIL PROTECTED]:
  Should run sweet. Just make sure it's a 64-bit OS and Squid build or all that
  RAM goes to waste.

On 13.11.08 16:44, john Moylan wrote:
 Should I still leave 30% of my RAM for the OS's cache etc?

If you only have 500MB of disk cache, it's not needed.
If that's 500 GB, you should leave it...
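
For reference, a minimal squid.conf sketch of the split being discussed
(sizes from the thread; the store type, path and in-memory object cap are
assumptions to adjust for your box):

  cache_mem 22 GB
  maximum_object_size_in_memory 64 KB
  cache_dir aufs /var/spool/squid 500 16 256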

-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Your mouse has moved. Windows NT will now restart for changes to take
effect. [OK]


Re: [squid-users] Not able to cache Streaming media

2008-11-14 Thread bijayant kumar
I have configured Squid as explained in this article
(http://wiki.squid-cache.org/ConfigExamples/DynamicContent/YouTube/Discussion),
but I am not sure whether streaming media is being cached or not. My
suspicion is based on the log excerpts below. I am testing with this file:
http://www.youtube.com/watch?v=WFOjvZaXeQ8

/var/log/squid/access.log

1552 192.168.99.23 TCP_MISS/200 19289 GET http://www.youtube.com/watch? - DIRECT/208.117.236.72 text/html
http://i4.ytimg.com/vi/wseBgcexR_E/default.jpg - DIRECT/209.85.171.118 image/jpeg
1196 192.168.99.23 TCP_MISS/200 5708 GET http://i2.ytimg.com/vi/QrSgaZyal-0/default.jpg - DIRECT/209.85.171.118 image/jpeg
439 192.168.99.23 TCP_MISS/303 607 GET http://www.youtube.com/get_video? - DIRECT/208.117.236.72 text/html
91 192.168.99.23 TCP_HIT/200 1134908 GET http://v8.cache.googlevideo.com/get_video? - NONE/- video/flv
665 192.168.99.23 TCP_MISS/204 337 GET http://video-stats.video.google.com/s? - DIRECT/66.249.89.127 text/html
791 192.168.99.23 TCP_MISS/200 5225 GET http://www.youtube.com/set_awesome? - DIRECT/208.117.236.72 text/xml

/var/log/squid/store.log

RELEASE -1  27A4BF0FD30BB5AD58C059B14EB572B2 200 1226651030 -1 41629446 text/xml 4564/4564 GET http://www.youtube.com/set_awesome?video_id=spIYQosBkGIm=l=28t=nullw=0.826178571428571

RELEASE -1  682D8400CD1B9FAC0063B1DEAC521825 302 1226651104 -1 41629446 text/html -1/0 GET http://www.youtube.com/watch?v=spIYQosBkGI

RELEASE -1  AFBB384E90F0A636BA732AC842100B3A 302 1226651157 -1 41629446 text/html -1/0 GET http://www.youtube.com/watch?v=spIYQosBkGI

RELEASE -1  12082C1BF8BD1BB579B4283F1AA39D72 200 1226651254 -1 41629446 text/html 18587/18587 GET http://www.youtube.com/watch?v=spIYQosBkGI

SWAPOUT 00 053C 287CFE14B5B1B819D045FDF541050BDE 200 1226651255 -1 1226651855 text/html 9335/9335 GET http://sb.google.com/safebrowsing/update?client=navclient-auto-ffoxappver=2.0.0.16version=goog-white-domain:1:481,goog-white-url:1:371,goog-black-url:1:25401,goog-black-enchash:1:63897

SWAPOUT 00 053D 57616EE8629371E6EDBB0ED7B7661FD4 200 1226651256 1226643292 1226752056 image/jpeg 4695/4695 GET http://i4.ytimg.com/vi/wseBgcexR_E/default.jpg

SWAPOUT 00 053E ADEA3535CEAB72C1DC7F13FAEC39B0FA 200 1226651256 1226646955 1226752056 image/jpeg 5264/5264 GET http://i2.ytimg.com/vi/QrSgaZyal-0/default.jpg

RELEASE -1  8F439F6C4512276F842949987B1225AB 303 1226651256 -1 41629446 text/html -1/0 GET http://www.youtube.com/get_video?video_id=spIYQosBkGIt=OEgsToPDskJR7PfAC8Z2saIPuZmRVg6Uel=detailpageps=

RELEASE -1  4F7C119B4A258E45961A01E96DE0BA67 200 1226651280 -1 41629446 text/xml 4564/4564 GET http://www.youtube.com/set_awesome?video_id=spIYQosBkGIm=l=28t=nullw=0.826178571428571

Based on the above logs, I suspect they are not being cached. From the
squidclient command I am getting:

[EMAIL PROTECTED] ~ $ squidclient http://www.youtube.com/watch?v=WFOjvZaXeQ8
HTTP/1.0 302 Moved Temporarily
Date: Fri, 14 Nov 2008 08:42:32 GMT
Server: Apache
Set-Cookie: use_hitbox=72c46ff6cbcdb7c5585c36411b6b334edAEw; path=/; 
domain=.youtube.com
Set-Cookie: PREF=f1=4100gl=INhl=en; path=/; domain=.youtube.com; 
expires=Mon, 12-Nov-2018 08:42:32 GMT
Set-Cookie: GEO=f08ceb957972861b3045fa1cd42e3d57cwwySU47XJikAPg5HUk=; 
path=/; domain=.youtube.com;
 expires=Sun, 16-Nov-2008 08:42:32 GMT
Expires: Tue, 27 Apr 1971 19:44:06 EST
Cache-Control: no-cache
Location: http://in.youtube.com/watch?v=WFOjvZaXeQ8
Content-Type: text/html; charset=utf-8
X-Cache: MISS from bijayant.kavach.blr
X-Cache-Lookup: MISS from bijayant.kavach.blr:3128
Via: 1.1 bijayant.kavach.blr:3128 (squid/2.7.STABLE4)
Connection: close

Please guide me or point me in a direction so that I am able to do this.
I have made exactly the same settings as explained in the document
mentioned, that is:

http://wiki.squid-cache.org/ConfigExamples/DynamicContent/YouTube/Discussion

Please help me out.

Thanks & Regards,
Bijayant Kumar






Re: [squid-users] About squid ICAP implementation

2008-11-14 Thread Henrik Nordstrom
On fre, 2008-11-14 at 15:49 +0900, Mikio Kishi wrote:
 Hi, Henrik
 
  Allow: 204 is sent if it's known the whole message can be buffered
  within the buffer limits (SQUID_TCP_SO_RCVBUF). It's not related to
  previews.
 
 Why is there such a limitation (SQUID_TCP_SO_RCVBUF) ?
 I hope that squid always send Allow: 204 to icap servers as much
 as possible...

Because to send Allow: 204 Squid must buffer the whole message. This
buffering is done in memory. Imagine what would happen if the message is
a dual layer DVD ISO image.. (6-8GB in size).

Regards
Henrik




Re: [squid-users] Squid stops suddenly

2008-11-14 Thread Amos Jeffries

Luis Daniel Lucio Quiroz wrote:

After debugging at level 3

I realized this error happens when analyzing http_reply_access with a user acl.


Luis Daniel Lucio Quiroz wrote:

Using Squid 3.0.STABLE9, with digest LDAP auth, I randomly get this:

assertion failed: ACLProxyAuth.cc:146:
authenticateValidateUser(auth_user_request)

later, squid dies

Any comment?

Looks similar to one of the open bugs, but not the same one.

Can you report as a new bug with full stack trace of the assertion and a
detailed cache.log trace leading up to it please?


Are you able to get a stack trace when it occurs?

And what is your full config (without comments) please? If you need to 
you can send it privately.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.2


RE: [squid-users] Squid stops suddenly

2008-11-14 Thread Marcel Grandemange
 After debugging at level 3

 I realized this error happens when analyzing http_reply_access with a user
acl.
 
 Luis Daniel Lucio Quiroz wrote:
 Using Squid 3.0.STABLE9, with digest LDAP auth, I randomly get this:

 assertion failed: ACLProxyAuth.cc:146:
 authenticateValidateUser(auth_user_request)

 later, squid dies

 Any comment?
 Looks similar to one of the open bugs, but not the same one.

 Can you report as a new bug with full stack trace of the assertion and a
 detailed cache.log trace leading up to it please?

Are you able to get a stack trace when it occurs?

Apologies, I'm not a programmer so I have no idea what this is.

And what is your full config (without comments) please? If you need to 
you can send it privately.

http://www.thavinci.za.net/Downloads/squid-new.conf


Amos
-- 
Please be using
   Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
   Current Beta Squid 3.1.0.2


I think I need to find the 3.0.STABLE9 port for FreeBSD and just downgrade.
That's a quicker solution until STABLE11.

Thanks for the help, though.



Re: [squid-users] Squid stops suddenly

2008-11-14 Thread Amos Jeffries

Marcel Grandemange wrote:

After debugging at level 3

I realized this error happens when analyzing http_reply_access with a user
acl.

Luis Daniel Lucio Quiroz wrote:

Using Squid 3.0.STABLE9, with digest LDAP auth, I randomly get this:

assertion failed: ACLProxyAuth.cc:146:
authenticateValidateUser(auth_user_request)

later, squid dies

Any comment?

Looks similar to one of the open bugs, but not the same one.

Can you report as a new bug with full stack trace of the assertion and a
detailed cache.log trace leading up to it please?



Are you able to get a stack trace when it occurs?


Apologies, I'm not a programmer so I have no idea what this is.



It's what you get from the core dump that shows what's happening inside 
Squid when it breaks.


Here's how to find it:
http://wiki.squid-cache.org/SquidFaq/TroubleShooting#head-7067fc0034ce967e67911becaabb8c95a34d576d
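
(For reference, on FreeBSD that usually boils down to something like the
following; the binary path matches the build options posted earlier in this
thread, and the core file location is an assumption, since the core lands
wherever Squid's coredump_dir points:

  gdb /usr/local/sbin/squid /usr/local/squid/squid.core
  (gdb) backtrace

The backtrace output is what we'd need in the bug report.)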



And what is your full config (without comments) please? If you need to 
you can send it privately.


http://www.thavinci.za.net/Downloads/squid-new.conf



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.2


RE: [squid-users] Squid stops suddenly

2008-11-14 Thread Marcel Grandemange
 After debugging at level 3

 I realized this error happens when analyzing http_reply_access with a user
 acl.
 Luis Daniel Lucio Quiroz wrote:
 Using Squid 3.0.STABLE9, with digest LDAP auth, I randomly get this:

 assertion failed: ACLProxyAuth.cc:146:
 authenticateValidateUser(auth_user_request)

 later, squid dies

 Any comment?
 Looks similar to one of the open bugs, but not the same one.

 Can you report as a new bug with full stack trace of the assertion and a
 detailed cache.log trace leading up to it please?
 
 Are you able to get a stack trace when it occurs?
 
 Apologies, I'm not a programmer so I have no idea what this is.


It's what you get from the core dump that shows what's happening inside 
Squid when it breaks.

Here's how to find it:
http://wiki.squid-cache.org/SquidFaq/TroubleShooting#head-7067fc0034ce967e67911becaabb8c95a34d576d

Thank you, I will see if I can re-create this problem on another machine and
learn a little more.

As for the problem: it went away entirely with the current config after
downgrading to 3.0.STABLE9.

 
 And what is your full config (without comments) please? If you need to 
 you can send it privately.
 
 http://www.thavinci.za.net/Downloads/squid-new.conf
 



[squid-users] Squid in chroot jail reconfigure/rotate FATAL errors: SOLVED

2008-11-14 Thread Rudi Vankemmel
I have seen quite a few postings indicating errors when issuing a
squid -k reconfigure or squid -k rotate from within a chroot jail.

I am running Squid 2.7.STABLE2 in a chroot jail (/chroot/squid) as
user.group = squid.squid; this is configured as such in the config file.
In the chroot jail I have a Squid and a SquidGuard directory (containing
the respective installs) besides the jail's ./etc, ./lib and ./dev dirs.

The first error I encountered was when doing a squid -k rotate, which
should rotate the log files. The following error was seen:

FATAL: Unable to open configuration file:
 /chroot/squid/Squid/etc/squid.conf: (2) No such file or directory

After this Squid exits and, if you are lucky, restarts automagically.
It might also crash your system completely.

The reason in my case was that the config file is read as root.root at
start time from outside the chroot jail, i.e.
/chroot/squid/Squid/etc/squid.conf.
However, when rotating, Squid runs as squid.squid and inside the jail
(/chroot/squid). When it restarts now, it looks for the file using the full
path inside the chroot jail, i.e. it looks for
/chroot/squid/chroot/squid/Squid/etc/squid.conf.
And this file does not exist there!
Note that restarting automagically (from scratch) works fine, as you run as
root.root from outside the jail again.

This is easily solved by recreating the dirs ./chroot/squid within the
/chroot/squid jail and placing there again a link to the Squid directory,
i.e. inside the chroot jail /chroot/squid/:
 -) mkdir -p ./chroot/squid   ;  i.e. we make the directory /chroot/squid/chroot/squid
 -) cd ./chroot/squid/
 -) ln -s ../../Squid ./Squid  ;  i.e. this loops back to the entry point
just after the original chroot jail.
This becomes the new entry point when restarting after a rotation.

Also make sure the permissions are OK for both root and squid; I used
root.squid. Note that this is safe to do, as we are staying within the
chroot jail.

This solved the rotate problem but next the following error was seen:

FATAL: getgrnam failed to find groupid for effective group 'squid'

Now this is an easy one: using strace I found that it is due to the fact
that Squid cannot retrieve group id info within the chroot jail. This is
easily solved by creating a passwd and group file (or their shadow versions)
within the jail, i.e. /chroot/squid/etc/passwd and /chroot/squid/etc/group.
In my case I took a copy of the normal passwd and group files and stripped
everything away, leaving just the squid user and group in them.

Note that you might also need a copy of /etc/services at
/chroot/squid/etc/services.
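
Putting it together, a consolidated sketch of the fix described above (the
paths assume the jail at /chroot/squid with Squid installed under
/chroot/squid/Squid, as in my setup):

  # recreate the chroot path inside the jail and loop it back to Squid
  cd /chroot/squid
  mkdir -p chroot/squid
  ln -s ../../Squid chroot/squid/Squid
  chown -R root:squid chroot

  # minimal account and service data inside the jail
  grep '^squid:' /etc/passwd > /chroot/squid/etc/passwd
  grep '^squid:' /etc/group  > /chroot/squid/etc/group
  cp /etc/services /chroot/squid/etc/services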

After these changes everything works fine.
Hope it is useful for you !

Rudi Vankemmel


RE: [squid-users] NTLM auth popup boxes Solaris 8 tuning for upgrade into 2.7.4

2008-11-14 Thread vincent.blondel


hello all,

I currently have some Sun V210 boxes running Solaris 8 with squid-2.6.12
and Samba 3.0.20b. I will upgrade these proxies to 2.7.4/3.0.32 next
Monday, but before doing this I would like to ask for your advice and/or
experience with tuning these kinds of boxes.

The service is running well today, except that we regularly get
authentication popup boxes. This is really exasperating our users. I have
already spent a lot of time on the net hoping to find a clear explanation,
but I am still searching. I have already configured 128 ntlm_auth processes
to start on each of my servers. This gives better results but the problem
still remains. I have also made some patches in the new package I will
deploy next week, overwriting some Samba values; my little patch is below.



First of all, many thanks for joining this discussion to help me solve my
problems.

Before digging deep into OS settings check your squid.conf auth, acl
and
http_access settings.

Okay. Concerning the auth part of squid.conf, I would say there is nothing
special; the NTLM config part is below:

auth_param ntlm program /usr/local/bin/ntlm_auth
--helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 128
auth_param ntlm keep_alive on
acl ntlmauth proxy_auth REQUIRED
...
http_access allow ntlmauth all
http_reply_access allow all
http_access deny all
deny_info TCP_RESET all

Check the TTL settings on your auth config. If it's not long enough
squid
will re-auth between request and reply.

I'm not really sure I understand which setting you are speaking about?


For the access controls there are a number of ways they can trigger
authentication popups. %LOGIN passed to external helper, proxy_auth
REQUIRED acl, and an auth ACL being last on an http_access line.


If I understand correctly, the config lines you asked about are those above.

Also, interception setups hacked with bad flags to (wrongly) permit auth
can appear to work, but cause popups on every object request and also leak
clients' credentials to all remote sites that use auth.

What kind of interception are you speaking about?


Amos




[squid-users] squid_ldap_auth and passwords in clear text

2008-11-14 Thread Johnson, S
Since this is going to be a public network, people will have the
ability to load wireshark or another sniffer program.  

I just got squid_ldap_auth working OK on my segment, but when watching the
protocol analyzer I see that the auth requests against the AD are going over
the wire with clear-text passwords.  Is there any way we can encrypt the
LDAP domain requests?

 Thanks
 
   Scott


[squid-users] Multiple site example

2008-11-14 Thread Ramon Moreno
Hello,

I want to setup a reverse proxy to accelerate multiple sites using the
same squid instance -

i.e.

apples.mysite.com - origin would be (192.168.1.2)
oranges.mysite.com - origin would be (192.168.1.3)
bananas.mysite.com - origin would be (192.168.1.4)

I know how to accelerate for one site based on the faq, however not
too sure how to do multiple.

I only have one interface on my host, and want everything to listen
off of a single port if possible.

If anyone can give me some config tips that would be much appreciated.

Thanks!


Re: [squid-users] About squid ICAP implementation

2008-11-14 Thread Takashi Tochihara
From: Henrik Nordstrom [EMAIL PROTECTED]
Subject: Re: [squid-users] About squid ICAP implementation
Date: Fri, 14 Nov 2008 12:11:19 +0100

   Allow: 204 is sent if it's known the whole message can be buffered
   within the buffer limits (SQUID_TCP_SO_RCVBUF). It's not related to
   previews.
  
  Why is there such a limitation (SQUID_TCP_SO_RCVBUF) ?
  I hope that squid always send Allow: 204 to icap servers as much
  as possible...
 
 Because to send Allow: 204 Squid must buffer the whole message. This
 buffering is done in memory. Imagine what would happen if the message is
 a dual layer DVD ISO image.. (6-8GB in size).

I think that to send Allow: 204 with Preview:, Squid must buffer not the
whole message, but only the whole *previewed* part of the message.

TCP (or the server) keeps the rest of the message, I think.

e.g.
Squid does not read the message (from the socket), TCP keeps the message
until its buffer becomes full, and when the TCP buffer is full, TCP sends a
window size = 0 packet and then the server keeps the message...

I think it is better to send the Allow: 204 header whenever Squid sends a
Preview: header (of course the Preview: value needs some limit).
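
For illustration, the kind of ICAP request header block I mean might look
like this (a sketch in the style of RFC 3507; the hostname and byte offsets
are placeholders):

  RESPMOD icap://icap.example.net/respmod ICAP/1.0
  Host: icap.example.net
  Allow: 204
  Preview: 1024
  Encapsulated: req-hdr=0, res-hdr=137, res-body=296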

Do I have the wrong idea?

Best regards

-- Takashi Tochihara







 


Re: [squid-users] Multiple site example

2008-11-14 Thread Henrik Nordstrom
On fre, 2008-11-14 at 12:19 -0800, Ramon Moreno wrote:

 I know how to accelerate for one site based on the faq, however not
 too sure how to do multiple.

It's also in the FAQ..

Squid FAQ Reverse Proxy - Sending different requests to different backend web 
servers
http://wiki.squid-cache.org/SquidFaq/ReverseProxy#head-7bd155a1a9919bda8ff10ca7d3831458866b72eb

Regards
Henrik




Re: [squid-users] Multiple site example

2008-11-14 Thread Ramon Moreno
Henrik,

Thanks for the quick reply.

So I think this answers the cache peer question.

The other question is: what do I specify for the http_port section?

Currently I am only doing acceleration for one site:
http_port 80 accel defaultsite=bananas.mysite.com

How do I configure this parameter for 3 sites while using the same
port? I am guessing, but would it be something like this:
http_port 80 accel defaultsite=bananas.mysite.com vhost
http_port 80 accel defaultsite=apples.mysite.com vhost
http_port 80 accel defaultsite=oranges.mysite.com vhost




On Fri, Nov 14, 2008 at 1:12 PM, Henrik Nordstrom
[EMAIL PROTECTED] wrote:
 On fre, 2008-11-14 at 12:19 -0800, Ramon Moreno wrote:

 I know how to accelerate for one site based on the faq, however not
 too sure how to do multiple.

 It's also in the FAQ..

 Squid FAQ Reverse Proxy - Sending different requests to different backend web 
 servers
 http://wiki.squid-cache.org/SquidFaq/ReverseProxy#head-7bd155a1a9919bda8ff10ca7d3831458866b72eb

 Regards
 Henrik



RE: [squid-users] Multiple site example

2008-11-14 Thread Gregori Parker
You only need one http_port statement with one defaultsite... define
multiple cache_peer parents, like so, and make sure your ACLs are straight
(this is the tricky aspect of reverse proxy IMO, getting the security
right):


http_port 80 accel defaultsite=bananas.mysite.com vhost
cache_peer 10.10.10.1 parent 80 0 no-query no-digest originserver name=mysite1
cache_peer 10.10.10.2 parent 80 0 no-query no-digest originserver name=mysite2
cache_peer 10.10.10.3 parent 80 0 no-query no-digest originserver name=mysite3
cache_peer_domain mysite1 apples.mysite.com
cache_peer_domain mysite2 oranges.mysite.com
cache_peer_domain mysite3 bananas.mysite.com

acl my_site1 dstdomain apples.mysite.com
acl my_site2 dstdomain oranges.mysite.com
acl my_site3 dstdomain bananas.mysite.com
acl myaccelport port 80

cache allow my_site1
cache allow my_site2
cache allow my_site3

http_access allow my_site1 myaccelport
http_access allow my_site2 myaccelport
http_access allow my_site3 myaccelport


Personally, I use a load balancer to direct traffic to Squid, and have
the hostnames redefined in /etc/hosts to get traffic to the backend
servers

Hope that helps, YMMV

- Gregori

-Original Message-
From: Ramon Moreno [mailto:[EMAIL PROTECTED] 
Sent: Friday, November 14, 2008 1:24 PM
To: Henrik Nordstrom
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Multiple site example

Henrik,

Thanks for the quick reply.

So I think this answers the cache peer question.

The other is what do I specify for the http_port section.

Currently I only am doing acceleration for one site:
http_port 80 accel defaultsite=bananas.mysite.com

How do I configure this parameter for 3 sites while using the same
port? I am guessing, but would it be something like this:
http_port 80 accel defaultsite=bananas.mysite.com vhost
http_port 80 accel defaultsite=apples.mysite.com vhost
http_port 80 accel defaultsite=oranges.mysite.com vhost




On Fri, Nov 14, 2008 at 1:12 PM, Henrik Nordstrom
[EMAIL PROTECTED] wrote:
 On fre, 2008-11-14 at 12:19 -0800, Ramon Moreno wrote:

 I know how to accelerate for one site based on the faq, however not
 too sure how to do multiple.

 It's also in the FAQ..

 Squid FAQ Reverse Proxy - Sending different requests to different
backend web servers

http://wiki.squid-cache.org/SquidFaq/ReverseProxy#head-7bd155a1a9919bda8ff10ca7d3831458866b72eb

 Regards
 Henrik



Re: [squid-users] NTLM auth popup boxes Solaris 8 tuning for upgrade into 2.7.4

2008-11-14 Thread Amos Jeffries

[EMAIL PROTECTED] wrote:

hello all,

I currently have some Sun V210 boxes running Solaris 8 with squid-2.6.12
and Samba 3.0.20b. I will upgrade these proxies to 2.7.4/3.0.32 next
Monday, but before doing this I would like to ask for your advice and/or
experience with tuning these kinds of boxes.

The service is running well today, except that we regularly get
authentication popup boxes. This is really exasperating our users. I have
already spent a lot of time on the net hoping to find a clear explanation,
but I am still searching. I have already configured 128 ntlm_auth processes
to start on each of my servers. This gives better results but the problem
still remains. I have also made some patches in the new package I will
deploy next week, overwriting some Samba values; my little patch is below.


First of all, many thanks for joining this discussion to help me solve my
problems.


Before digging deep into OS settings check your squid.conf auth, acl and
http_access settings.


Okay. Concerning the auth part of squid.conf, I would say there is nothing
special; the NTLM config part is below:

auth_param ntlm program /usr/local/bin/ntlm_auth
--helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 128
auth_param ntlm keep_alive on
acl ntlmauth proxy_auth REQUIRED
...
http_access allow ntlmauth all
http_reply_access allow all
http_access deny all
deny_info TCP_RESET all



Hmm, what those lines do is:
 - test the request for auth details (allow ntlmauth),
 - if correct details are found, allow the request (allow ntlmauth all),
 - if none are found, or the details are bad, ignore that (allow ntlmauth all),
 - but send a RESET on the TCP link (deny all + TCP_RESET).

The clients will never get any indication when their auth details are invalid.
They will just get a completely new session; the browser will try to
resend the same broken details until it gives up and re-asks the user.



The 'all' silencing hack is intended for situations where auth may be the
preferred method of access, but an alternative exists and can be taken
easily when it fails. It prevents the browser being notified when
credentials are wrong.


Does it work if you make that line just: http_access allow ntlmauth
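
Concretely, a minimal sketch of what I mean (your existing helper lines,
with the trailing 'all' removed from the allow line and the TCP_RESET hack
dropped):

  auth_param ntlm program /usr/local/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
  auth_param ntlm children 128
  auth_param ntlm keep_alive on
  acl ntlmauth proxy_auth REQUIRED
  http_access allow ntlmauth
  http_access deny all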


Check the TTL settings on your auth config. If it's not long enough,
Squid will re-auth between request and reply.


I'm not really sure I understand which setting you are speaking about?



auth_param ntlm ttl

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.2


Re: [squid-users] squid_ldap_auth and passwords in clear text

2008-11-14 Thread Amos Jeffries

Johnson, S wrote:

Since this is going to be a public network, people will have the
ability to load wireshark or another sniffer program.


Ah, okay.



I just got the squid_ldap_auth working ok on my segment but when
watching the protocol analyzer I see that the auth requests against the
AD are coming in as clear text passwords.  Is there anyway we can
encrypt the ldap domain requests?


digest auth is the best available for password encryption.
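
A minimal sketch of what that looks like with the bundled password-file
digest helper (helper and file paths are assumptions; adjust to your
install):

  auth_param digest program /usr/local/libexec/squid/digest_pw_auth /usr/local/etc/squid/digest_passwd
  auth_param digest realm Squid proxy
  acl authed proxy_auth REQUIRED
  http_access allow authed
  http_access deny all

With digest the browser sends a hash rather than the plain password, so a
sniffer on the client segment no longer sees the password itself.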

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.2


Re: [squid-users] Multiple site example

2008-11-14 Thread Amos Jeffries

Gregori Parker wrote:

You only need one http_port statement with one defaultsite... define
multiple cache_peer parents, like so, and make sure your ACLs are straight
(this is the tricky aspect of reverse proxy IMO, getting the security
right):


http_port 80 accel defaultsite=bananas.mysite.com vhost
cache_peer 10.10.10.1 parent 80 0 no-query no-digest originserver name=mysite1
cache_peer 10.10.10.2 parent 80 0 no-query no-digest originserver name=mysite2
cache_peer 10.10.10.3 parent 80 0 no-query no-digest originserver name=mysite3
cache_peer_domain mysite1 apples.mysite.com
cache_peer_domain mysite2 oranges.mysite.com
cache_peer_domain mysite3 bananas.mysite.com

acl my_site1 dstdomain apples.mysite.com
acl my_site2 dstdomain oranges.mysite.com
acl my_site3 dstdomain bananas.mysite.com
acl myaccelport port 80

cache allow my_site1
cache allow my_site2
cache allow my_site3

http_access allow my_site1 myaccelport
http_access allow my_site2 myaccelport
http_access allow my_site3 myaccelport



To keep the security straight and easy I prefer setting the ACL earlier 
and re-using the exact same condition like so:


 cache_peer ... name=peerN
 acl aclname dstdomain fubar.example.com
 http_access allow aclname
 cache_peer_access peerN allow aclname
 cache_peer_access peerN deny aclname

That keeps each domain's handling config separate and easily checked.
No fiddling around with ports or multiple lists of domains in simple setups.
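
Applied to the hostnames in this thread, that pattern would look roughly
like this (IPs taken from Ramon's original message; shown here with a final
'deny all' per peer):

  http_port 80 accel defaultsite=apples.mysite.com vhost

  cache_peer 192.168.1.2 parent 80 0 no-query no-digest originserver name=apples
  acl site_apples dstdomain apples.mysite.com
  http_access allow site_apples
  cache_peer_access apples allow site_apples
  cache_peer_access apples deny all

  cache_peer 192.168.1.3 parent 80 0 no-query no-digest originserver name=oranges
  acl site_oranges dstdomain oranges.mysite.com
  http_access allow site_oranges
  cache_peer_access oranges allow site_oranges
  cache_peer_access oranges deny all

  # ...and the same lines again for bananas.mysite.com (192.168.1.4)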

Amos



Personally, I use a load balancer to direct traffic to Squid, and have
the hostnames redefined in /etc/hosts to get traffic to the backend
servers

Hope that helps, YMMV

- Gregori

-Original Message-
From: Ramon Moreno [mailto:[EMAIL PROTECTED] 
Sent: Friday, November 14, 2008 1:24 PM

To: Henrik Nordstrom
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Multiple site example

Henrik,

Thanks for the quick reply.

So I think this answers the cache peer question.

The other is what do I specify for the http_port section.

Currently I only am doing acceleration for one site:
http_port 80 accel defaultsite=bananas.mysite.com

How do I configure this parameter for 3 sites while using the same
port? I am guessing, but would it be something like this:
http_port 80 accel defaultsite=bananas.mysite.com vhost
http_port 80 accel defaultsite=apples.mysite.com vhost
http_port 80 accel defaultsite=oranges.mysite.com vhost




On Fri, Nov 14, 2008 at 1:12 PM, Henrik Nordstrom
[EMAIL PROTECTED] wrote:

On fre, 2008-11-14 at 12:19 -0800, Ramon Moreno wrote:


I know how to accelerate for one site based on the faq, however not
too sure how to do multiple.

It's also in the FAQ..

Squid FAQ Reverse Proxy - Sending different requests to different

backend web servers
http://wiki.squid-cache.org/SquidFaq/ReverseProxy#head-7bd155a1a9919bda8ff10ca7d3831458866b72eb

Regards
Henrik




--
Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.2


[squid-users] Regex Problem - Squid 3.0STABLE10

2008-11-14 Thread Jeff Gerard
I have an acl that I use to block ad sites.  One of the regexes that I use is
_ad_.  I have discovered that I need to tweak this regex so that it does not
match the following pattern: http://something.com/path/some_ad_test.js

I modified my regex as follows:
_ad_(?!test), which should work as far as I can tell. However, when I reload
Squid, I get the following error:
squid.conf line 644: acl Deny_ADs url_regex -i /etc/squid/deny_ADs
aclParseRegexList: Invalid regular expression '_ad_(?!test)': Invalid 
preceding regular expression

In fact, any time I try to use round brackets, I get that error.  As a
workaround I used _ad_[^t], which is vague but does keep the pattern from
matching that URL.

Any suggestions out there?

Thanks in advance...


[squid-users] very basic question on enforcing use of proxy

2008-11-14 Thread qqq1one @yahoo.com
Hi,

I have a very basic question.  I don't even know what to search on for this 
question.  I have squid installed and running, but my browser can freely get 
out to the internet without going through the proxy.  I know about specifying 
the proxy in the browser, but what prevents an unconfigured browser from going 
straight out to the internet?  Is a firewall the only way to prevent this?

Thanks in advance.



  


Re: [squid-users] very basic question on enforcing use of proxy

2008-11-14 Thread James Byrne
You can use a firewall, or you can put Squid in transparent mode and
set up a transparent proxy.
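
A minimal sketch of what that can look like, assuming a Linux gateway
between the clients and the internet, iptables, and Squid listening on the
same box (interface, port and platform are assumptions; on BSD the
equivalent would be a pf or ipfw redirect rule):

  # squid.conf: accept intercepted traffic
  http_port 3128 transparent

  # on the gateway: push outgoing port-80 traffic from the LAN into Squid
  iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128

Either way, a firewall rule blocking direct outbound port 80 from the
clients is what actually enforces use of the proxy.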



On Nov 14, 2008, at 9:58 PM, qqq1one @yahoo.com wrote:


Hi,

I have a very basic question.  I don't even know what to search on  
for this question.  I have squid installed and running, but my  
browser can freely get out to the internet without going through  
the proxy.  I know about specifying the proxy in the browser, but  
what prevents an unconfigured browser from going straight out to  
the internet?  Is a firewall the only way to prevent this?


Thanks in advance.








[squid-users] squid running problem with aufs

2008-11-14 Thread samk
See Thread at: http://www.techienuggets.com/Detail?tx=60849 Posted on behalf of 
a User

FATAL: Bungled squid.conf line 2: cache_dir aufs 
Squid Cache (Version 3.0.STABLE10): Terminated abnormally.
CPU Usage: 0.005 seconds = 0.001 user + 0.004 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 0


When I change aufs to ufs, the problem is solved!
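
(For reference, a complete aufs cache_dir line generally looks something
like the following; the path and sizes here are only placeholders:

  cache_dir aufs /usr/local/squid/cache 1000 16 256

and the aufs store type also requires a Squid built with aufs in
--enable-storeio and with --with-pthreads.)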





Re: [squid-users] Regex Problem - Squid 3.0STABLE10

2008-11-14 Thread Henrik K
On Fri, Nov 14, 2008 at 10:00:24PM -0600, Jeff Gerard wrote:
 I have an acl that I use to block ad sites.  One of the regex's that I use 
 is: 
 _ad_.  I have discovered that I need to tweak this regex to fail when it 
 finds the following pattern: http://something.com/path/some_ad_test.js
 
 I modified my regex as follows:
 _ad_(?!test) which should work as far as I can tell, however, when I reload 
 squid, I get the following error:
 squid.conf line 644: acl Deny_ADs url_regex -i /etc/squid/deny_ADs
 aclParseRegexList: Invalid regular expression '_ad_(?!test)': Invalid 
 preceding regular expression

You are trying a Perl-compatible regexp. Squid doesn't support that by
default; it uses the basic system regex library.

The easiest workaround is to compile Squid with the PCRE library:

LDFLAGS="-lpcreposix -lpcre"

It may also reduce memory leaks and speed things up if you have an old/bad
system regex.
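
A sketch of how that might look when rebuilding from source (configure
options are whatever you normally use; library names and paths can differ
per distribution):

  LDFLAGS="-lpcreposix -lpcre" ./configure [your usual options]
  make
  make install

With PCRE's POSIX wrapper linked in, patterns such as _ad_(?!test) should
then compile.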