[squid-users] Complex acl process - Many IPs, many different places, many logins, and many websites ...

2009-06-02 Thread Julien P.
Hi everyone,
I'm having some trouble understanding how the ACL process works.

I'm trying to link a MySQL database to my Squid so that I can set up
specific access rights for specific users, from different places, to
different websites.

What I did is an ACL that checks the domain and the source IP:
external_acl_type ExternalisBad ttl=20 %SRC %DST /etc/squid3/external_bad
acl isBad external ExternalisBad

And I also created my own auth_param block

auth_param basic program /etc/squid3/sql_auth
auth_param basic children 20
auth_param basic realm Username and password
auth_param basic credentialsttl 1 minute

Now, when someone tries to access a website, this is what I do:
http_access allow sql_auth isBad

It is working, but the thing is: it doesn't check whether the username
is linked to the %SRC IP or not... So basically, if you are
registered with full access rights in another place, you will be able
to access all the content even if your access is supposed to be
denied. Does that make sense?

I added %IDENT to the external_acl_type rule. Since the sql_auth
process is called first, I was thinking that maybe the %IDENT would be
stored somewhere and be accessible in the isBad ACL right
away...

external_acl_type ExternalisBad ttl=20 %SRC %IDENT %DST /etc/squid3/external_bad

Apparently this is not working.

Does anyone have an idea how to do what I want to do?

If you want me to be more specific, let me know!

Thank you so much Guys,
Julien

PS:
debian:/squid3 -v
Squid Cache: Version 3.0.STABLE8
configure options:  '--build=i486-linux-gnu' '--prefix=/usr'
'--includedir=${prefix}/include' '--mandir=${prefix}/share/man'
'--infodir=${prefix}/share/info' '--sysconfdir=/etc'
'--localstatedir=/var' '--libexecdir=${prefix}/lib/squid3'
'--disable-maintainer-mode' '--disable-dependency-tracking'
'--srcdir=.' '--datadir=/usr/share/squid3' '--sysconfdir=/etc/squid3'
'--mandir=/usr/share/man' '--with-cppunit-basedir=/usr'
'--enable-inline' '--enable-async-io=8'
'--enable-storeio=ufs,aufs,coss,diskd,null'
'--enable-removal-policies=lru,heap' '--enable-delay-pools'
'--enable-cache-digests' '--enable-underscores' '--enable-icap-client'
'--enable-follow-x-forwarded-for' '--enable-auth=basic,digest,ntlm'
'--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SASL,SMB,YP,getpwnam,multi-domain-NTLM'
'--enable-ntlm-auth-helpers=SMB'
'--enable-digest-auth-helpers=ldap,password'
'--enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group'
'--with-filedescriptors=65536' '--with-default-user=proxy'
'--enable-epoll' '--enable-linux-netfilter'
'build_alias=i486-linux-gnu' 'CC=cc' 'CFLAGS=-g -O2 -g -Wall -O2'
'LDFLAGS=' 'CPPFLAGS=' 'CXX=g++' 'CXXFLAGS=-g -O2 -g -Wall -O2'
'FFLAGS=-g -O2'


[squid-users] Exception for src client PC

2009-06-02 Thread Boniforti Flavio
Hello list,

following is my setup (in relation to ACLs):

acl localnet src 10.0.0.0/24
acl domini_bloccati dstdomain /etc/squid3/domini_bloccati.acl
http_access deny localnet domini_bloccati

How do I add an exception for one client of that network?
I thought of writing it like:

acl localnet src 10.0.0.0/24
acl domini_bloccati dstdomain /etc/squid3/domini_bloccati.acl
acl super_users src myhostname
http_access allow super_users
http_access deny localnet domini_bloccati

Would this setup make the rules be evaluated only up to the http_access
allow super_users line if the client connecting through Squid is
myhostname?

Thanks,
Flavio Boniforti

PIRAMIDE INFORMATICA SAGL
Via Ballerini 21
6600 Locarno
Switzerland
Phone: +41 91 751 68 81
Fax: +41 91 751 69 14
URL: http://www.piramide.ch
E-mail: fla...@piramide.ch 


[squid-users] always MISS when using storeurl

2009-06-02 Thread Chudy Fernandez

The following URLs always MISS, even when changing the CDN server (i157, i200, i236):
http://i200.photobucket.com/albums/aa73/fs-layouts/friendster-layouts.com/2009/05/thinking-of-you/thinking-of-you.gif
http://i200.photobucket.com/albums/t72/gbbp/Images/jewel-heart-notes/images/today-tomorrow-forever.gif
http://i157.photobucket.com/albums/t72/gbbp/Images/Funny_Pics/images/25.gif
http://i236.photobucket.com/albums/ff302/volcom_queen_photos/Seniors%20class/Seniors09.jpg
http://i200.photobucket.com/albums/aa73/fs-layouts/fslybg/2007/09/l24.gif
http://i157.photobucket.com/albums/t72/gbbp/Images/Thanks_For_Add/images/thxs4ad_5.gif

while the following work as they should, even when changing the CDN
server (i157, i200, i236):
http://i157.photobucket.com/albums/aa73/fs-layouts/friendster-layouts.com/2009/02/skull/skull.jpg
http://i200.photobucket.com/albums/aa73/fs-layouts/friendster-layouts.com/2009/05/butterfly/butterfly.jpg
http://i200.photobucket.com/albums/aa73/fs-layouts/friendster-layouts.com/2008/08/im-sick-of-getting-hurt/im-sick-of-getting-hurt.gif
http://i200.photobucket.com/albums/aa73/fs-layouts/fslybg/2007/09/l22.jpg
http://i200.photobucket.com/albums/aa73/fs-layouts/friendster-layouts.com/2008/09/cultskulls/cultskulls.jpg
http://i200.photobucket.com/albums/aa73/fs-layouts/friendster-layouts.com/2009/05/nf-song-woo-bin/nf-song-woo-bin.jpg
http://i200.photobucket.com/albums/aa73/fs-layouts/friendster-layouts.com/2009/04/beautiful-nature-15/beautiful-nature-15.jpg

cachemgr shows
KEY 0366A737DD8A68784A99C86469749813
GET 
http://i200.photobucket.com/albums/aa73/fs-layouts/friendster-layouts.com/2009/05/thinking-of-you/thinking-of-you.gif
Store lookup URL: 
http://cdn.photobucket.com/albums/aa73/fs-layouts/friendster-layouts.com/2009/05/thinking-of-you/thinking-of-you.gif
STORE_PENDING NOT_IN_MEMORY SWAPOUT_WRITING PING_DONE  
CACHABLE,DISPATCHED,VALIDATED
LV:1243927888 LU:1243927889 LM:1243755819 EX:1243949488
5 locks, 1 clients, 1 refs
Swap Dir 0, File 0X03784D
inmem_lo: 0
inmem_hi: 36816
swapout: 32768 bytes queued
swapout: 33162 bytes written
Client #0, 0x283f8010
copy_offset: 36816
seen_offset: 36816
copy_size: 4096
flags:


  


Re: [squid-users] Complex acl process - Many IPs, many different places, many logins, and many websites ...

2009-06-02 Thread Amos Jeffries

Julien P. wrote:

Hi everyone,
I'm having some troubles to understand how the acl process is working.

I'm trying to link a mySQL database to my squid in order to allow me
to setup some specific access rights according to some specific users
from different places to different websites.

What I did is an acl that will check the domain and the source_ip
external_acl_type ExternalisBad ttl=20 %SRC %DST /etc/squid3/external_bad
acl isBad external ExternalisBad

And I also created my own auth_param block

auth_param basic program /etc/squid3/sql_auth
auth_param basic children 20
auth_param basic realm Username and password
auth_param basic credentialsttl 1 minute



You forgot to mention this bit of the config:
  acl sql_auth proxy_auth REQUIRED


Now, when someone's trying to to access a website, this is what I do
http_access allow sql_auth isBad

It is working, but the thing is: it doesn't care about if the username
is linked to the %SRC Ip or not... So basically, if you have are
registered with full access rights in another place, you will be able
to access to all the content even if you're access is supposed to be
denied. Does that make sense ?


Yes, it makes sense. The ACL rules do not (yet) state the full conditions, 
though.


The above rule only tests whether the user can log in, and whether the IP + 
destination domain are paired. There is no specific three-way link.




I added the %IDENT to the externcal_acl_type rule. Since the sql_auth
process is called before I was thinking that maybe the %IDENT would be
stored somewhere somehow and be accessible in the isBad acl right
away...

external_acl_type ExternalisBad ttl=20 %SRC %IDENT %DST /etc/squid3/external_bad

Apparently this is not working.


Yes, that will not work. %IDENT is the result of the IDENT protocol lookup.

You want %LOGIN, which is the result of proxy authentication 
(aka login).




Does any one have any idea on how to do what I want to do ?


You have the approach right. Just not the right tag. Make the above 
change and it should work just fine.
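In squid.conf terms, the corrected setup would look like this (same helper path and rules as above; only the %IDENT tag changes to %LOGIN):

```
external_acl_type ExternalisBad ttl=20 %SRC %LOGIN %DST /etc/squid3/external_bad
acl isBad external ExternalisBad
http_access allow sql_auth isBad
```

The external helper then receives the authenticated username together with the source IP and destination domain, so it can verify the three-way link in the MySQL database.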




If you want me to be more specific, let me know!

Thank you so much Guys,
Julien

PS:
debian:/squid3 -v
Squid Cache: Version 3.0.STABLE8


Um, please use STABLE13+ as soon as possible. Major security risks in 
earlier releases.



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE15
  Current Beta Squid 3.1.0.8 or 3.0.STABLE16-RC1


Re: [squid-users] Exception for src client PC

2009-06-02 Thread Amos Jeffries

Boniforti Flavio wrote:

Hello list,

following is my setup (in relation to ACLs):

acl localnet src 10.0.0.0/24
acl domini_bloccati dstdomain /etc/squid3/domini_bloccati.acl
http_access deny localnet domini_bloccati

How do I add an exception for one client of that network?
I thought to write it like:

acl localnet src 10.0.0.0/24
acl domini_bloccati dstdomain /etc/squid3/domini_bloccati.acl
acl super_users src myhostname
http_access allow super_users
http_access deny localnet domini_bloccati

Would this setup allow the rules to be read only until the http_access
allow super_users line, if the client connecting through squid would be
myhostname?


Yes. Assuming Squid can resolve 'myhostname' to an IP address during 
squid.conf loading, which can then be used by the 'src' type ACL.


Otherwise your guess is correct.
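If startup-time name resolution is a concern, the client can be named by address instead; a sketch, where 10.0.0.50 is a hypothetical host in the 10.0.0.0/24 network:

```
acl super_users src 10.0.0.50/32
http_access allow super_users
http_access deny localnet domini_bloccati
```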

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE15
  Current Beta Squid 3.1.0.8 or 3.0.STABLE16-RC1


Re: [squid-users] always MISS when using storeurl

2009-06-02 Thread Amos Jeffries

Chudy Fernandez wrote:

the following url always miss even changing cdn server(i157,i200,i236)
http://i200.photobucket.com/albums/aa73/fs-layouts/friendster-layouts.com/2009/05/thinking-of-you/thinking-of-you.gif
http://i200.photobucket.com/albums/t72/gbbp/Images/jewel-heart-notes/images/today-tomorrow-forever.gif
http://i157.photobucket.com/albums/t72/gbbp/Images/Funny_Pics/images/25.gif
http://i236.photobucket.com/albums/ff302/volcom_queen_photos/Seniors%20class/Seniors09.jpg
http://i200.photobucket.com/albums/aa73/fs-layouts/fslybg/2007/09/l24.gif
http://i157.photobucket.com/albums/t72/gbbp/Images/Thanks_For_Add/images/thxs4ad_5.gif

while the following are working as it should even changing cdn 
server(i157,i200,i236)
http://i157.photobucket.com/albums/aa73/fs-layouts/friendster-layouts.com/2009/02/skull/skull.jpg
http://i200.photobucket.com/albums/aa73/fs-layouts/friendster-layouts.com/2009/05/butterfly/butterfly.jpg
http://i200.photobucket.com/albums/aa73/fs-layouts/friendster-layouts.com/2008/08/im-sick-of-getting-hurt/im-sick-of-getting-hurt.gif
http://i200.photobucket.com/albums/aa73/fs-layouts/fslybg/2007/09/l22.jpg
http://i200.photobucket.com/albums/aa73/fs-layouts/friendster-layouts.com/2008/09/cultskulls/cultskulls.jpg
http://i200.photobucket.com/albums/aa73/fs-layouts/friendster-layouts.com/2009/05/nf-song-woo-bin/nf-song-woo-bin.jpg
http://i200.photobucket.com/albums/aa73/fs-layouts/friendster-layouts.com/2009/04/beautiful-nature-15/beautiful-nature-15.jpg

cachemgr shows
KEY 0366A737DD8A68784A99C86469749813
GET 
http://i200.photobucket.com/albums/aa73/fs-layouts/friendster-layouts.com/2009/05/thinking-of-you/thinking-of-you.gif
Store lookup URL: 
http://cdn.photobucket.com/albums/aa73/fs-layouts/friendster-layouts.com/2009/05/thinking-of-you/thinking-of-you.gif
STORE_PENDING NOT_IN_MEMORY SWAPOUT_WRITING PING_DONE  
CACHABLE,DISPATCHED,VALIDATED

LV:1243927888 LU:1243927889 LM:1243755819 EX:1243949488
5 locks, 1 clients, 1 refs
Swap Dir 0, File 0X03784D
inmem_lo: 0
inmem_hi: 36816
swapout: 32768 bytes queued
swapout: 33162 bytes written
Client #0, 0x283f8010
copy_offset: 36816
seen_offset: 36816
copy_size: 4096
flags:



As far as I can tell they are fully cacheable for at least 6 hours 
except the one which is a 404 page.


Might be time to delve into higher debug levels with the cache_store_log 
enabled to see what's going on.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE15
  Current Beta Squid 3.1.0.8 or 3.0.STABLE16-RC1


Re: [squid-users] position of Proxy-Authorization header (ICAP)

2009-06-02 Thread Amos Jeffries
kdsaeoi...@yahoo.co.jp wrote:
 Dear Amos,
 
 Thank you for your advice,
 
 So, Proxy-Authorization is not in the encapsulated message in my 2nd 
 mail (squid 3.0 STABLE13).
 I think this conforms to the RFC and this is not a bug; 
 is your opinion also the same?
 
 I looked through the bug reports. 
 I guess the bug was corrected between STABLE2 and STABLE4, 
 because Proxy-Authorization was in the encapsulated message at STABLE1 
 and not at STABLE4.
 

Right, you are correct. The current behavior is RFC compliant.
I don't know what I was thinking. I can only plead too much work and
no sleep :(

Amos
-- 
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE15
  Current Beta Squid 3.1.0.8 or 3.0.STABLE16-RC1


[squid-users] Squid + Kerberos + Active Directory

2009-06-02 Thread Truth Seeker

Dear Pro's

I am trying to configure a Squid proxy in a Windows 2003 Active Directory 
environment. I need to migrate from MS ISA Proxy to Squid 3.0 
STABLE13 on CentOS 5.2.

My primary goals are:
1. Authenticate users without asking for a username/password (I mean the way a 
normal Windows client behaves when it connects to the internet through MS ISA 
Proxy in an Active Directory environment, which does not prompt for a 
username/password because of Kerberos), by using Kerberos to communicate 
with the Win2k3 Domain Controller.

2. No downtime.


Am I dreaming, or is this a workable target? Are there any issues 
in this environment?

Awaiting your quick feedbacks ...

Always try to find truth!!!


  



Re: [squid-users] Squid + Kerberos + Active Directory

2009-06-02 Thread Amos Jeffries

Truth Seeker wrote:

Dear Pro's

I am trying to configure a squid proxy in Windows 2003 Active
Directory Environment. I need to make the migration from MS ISA Proxy
to Squid 3.0 Stable13 on CentOS 5.2

My primary goal is; 1. authenticate users without asking
username/password (i mean like how a normal windows client will
behave when he connects to internet through MS ISA Proxy in a Active
Directory environment - which will not prompt username/password
because of the Kerberos) by using the kerberos to communicate with
the Win 2k3 Domain Controller.

2. Without any downtime.


Am i dreaming about this... ??? is this a workable target??? Is there
any issue in this environment???

Awaiting your quick feedbacks ...



Possible.
http://wiki.squid-cache.org/ConfigExamples/Authenticate/Kerberos

maybe even easy if you know what you are doing regarding Kerberos.
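The squid.conf side of that wiki example boils down to roughly the following (the helper path varies by distribution, so treat it as an assumption; the keytab setup on the AD side is the harder part and is covered on the wiki page):

```
auth_param negotiate program /usr/lib/squid/squid_kerb_auth
auth_param negotiate children 10
auth_param negotiate keep_alive on
acl auth proxy_auth REQUIRED
http_access allow auth
http_access deny all
```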

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE15
  Current Beta Squid 3.1.0.8 or 3.0.STABLE16-RC1


[squid-users] Logging bandwidth for streams?

2009-06-02 Thread chrisw23

Hi,
Is it possible to get Squid to log the bandwidth used for a stream? In my
access.log I only see the actual initial file downloads, and I need to limit
users' total bandwidth, including any streaming content they've downloaded. 

Thanks,
Chris
-- 
View this message in context: 
http://www.nabble.com/Logging-bandwidth-for-streams--tp23831376p23831376.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] Logging bandwidth for streams?

2009-06-02 Thread Amos Jeffries

chrisw23 wrote:

Hi,
Is it possible to get squid to log the bandwidth used for a stream? In my
access.log I only get actual  intial file downloads and I need to limit
users' total bandwidth - including any streaming content they've downloaded. 


Thanks,
Chris


You mean the size of the CONNECT request transfer?

Yes, but you currently need to use custom log formats.

Copy the demo format (squid or common etc) from 
squid.conf.default/squid.conf.documented or 
http://www.squid-cache.org/Doc/config/logformat/


Replace the %<st tag with just %st.

That will log the full transfer size, including outgoing traffic, rather than 
just the reply. You will notice a large increase in sizes for 
POST/PUT/CONNECT and a smaller increase for all others.
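A sketch of such a custom format, modeled on the default 'squid' logformat with the size tag swapped; copy the exact base format from your own squid.conf.documented, since the tags vary slightly between versions:

```
logformat totalsize %ts.%03tu %6tr %>a %Ss/%03Hs %st %rm %ru %un %Sh/%<A %mt
access_log /var/log/squid/access.log totalsize
```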


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE15
  Current Beta Squid 3.1.0.8 or 3.0.STABLE16-RC1


Re: [squid-users] Logging bandwidth for streams?

2009-06-02 Thread chrisw23

That's perfect, thanks a lot.

Chris

Amos Jeffries-2 wrote:
 
 chrisw23 wrote:
 Hi,
 Is it possible to get squid to log the bandwidth used for a stream? In my
 access.log I only get actual  intial file downloads and I need to limit
 users' total bandwidth - including any streaming content they've
 downloaded. 
 
 Thanks,
 Chris
 
 You mean the size of the CONNECT request transfer?
 
 Yes, but you currently need to use custom log formats.
 
 Copy the demo format (squid or common etc) from 
 squid.conf.default/squid.conf.documented or 
 http://www.squid-cache.org/Doc/config/logformat/
 
 Replace the %<st tag with just %st.
 
 That will log the full transfer size including outgoing rather than just 
 the reply. You will notice a large increase in sizes for 
 POST/PUT/CONNECT and smaller increase for all others.
 
 Amos
 -- 
 Please be using
Current Stable Squid 2.7.STABLE6 or 3.0.STABLE15
Current Beta Squid 3.1.0.8 or 3.0.STABLE16-RC1
 
 

-- 
View this message in context: 
http://www.nabble.com/Logging-bandwidth-for-streams--tp23831376p23831981.html
Sent from the Squid - Users mailing list archive at Nabble.com.



[squid-users] reverse proxy with SSL offloader issue

2009-06-02 Thread Mario Remy Almeida
Hi All,

I downloaded an SSL certificate from VeriSign and exported the private key
from a Windows 2003 server.

In squid.conf I have this:

https_port 10.200.22.49:443 accel \
cert=/etc/squid/keys/mail.airarabia.ae_cert.pem \
key=/etc/squid/keys/pvtkey.pem defaultsite=mail.airarabia.ae

When I access https://mail.airarabia.ae, 
the browser gives this error: 

Secure Connection Failed
mail.airarabia.ae uses an invalid security certificate.

The certificate is not trusted because the issuer certificate is
unknown.

(Error code: sec_error_unknown_issuer)
* This could be a problem with the server's configuration, or it
could be someone trying to impersonate the server.

* If you have connected to this server successfully in the past, the
error may be temporary, and you can try again later.

and in cache.log I get this

clientNegotiateSSL: Error negotiating SSL connection on FD 23:
error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca (1/0)


What could be the problem? Please help.

//Remy
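For reference, sec_error_unknown_issuer usually indicates that the server is not presenting the intermediate CA certificate. One commonly suggested approach (a sketch; chain.pem is a hypothetical file made by concatenating the site certificate and the VeriSign intermediate certificate, in that order) is to point cert= at the combined file:

```
# chain.pem = site certificate followed by the intermediate CA certificate
https_port 10.200.22.49:443 accel \
    cert=/etc/squid/keys/chain.pem \
    key=/etc/squid/keys/pvtkey.pem defaultsite=mail.airarabia.ae
```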




[squid-users] http burst

2009-06-02 Thread RoLaNd RoLaNd



Hello,

i just configured two delay pools.

acl limitedto8 src 192.168.75.0/255.255.255.0
acl mySubnet url_regex -i 192.168.75
delay_pools 2
delay_class 1 2
delay_parameters 1 -1/-1 -1/-1

#magic_words1: 192.168 we have set before
delay_access 1 allow mySubnet
delay_class 2 3
delay_access 2 allow limitedto8
delay_access 2 deny all
delay_parameters 2 64000/64000 -1/-1 8000/64000


Such a config works great for downloads, as they're limited and so on,
though browsing is understandably slow.
Is there a way I could give new HTTP sessions a burst so a page could open 
faster?



_
Show them the way! Add maps and directions to your party invites. 
http://www.microsoft.com/windows/windowslive/products/events.aspx

[squid-users] client_side_request.cc

2009-06-02 Thread Gontzal
Hi Wong,

Which version of squidGuard are you running? I had the same problem, and
I resolved it by updating from squidGuard 1.3 to 1.4. Never saw that
error again...

Gontzal


2009/6/2 Wong wongb...@telkom.net

 Wong wrote:

 Dear All,

 I experienced messages below and squid exiting abnormally. Squid version 
 3S15

 Need your advice & help.

 Thx & Rgds,

 Wong

 ---snip---

 2009/06/01 08:29:27| client_side_request.cc(825) redirecting body_pipe 
 0x85fd94c*1 from request 0x8525c90 to 0x886bcd0

 These are normal. Visible only because of the level of debug_options.

 snip

 2009/06/01 10:05:51| Preparing for shutdown after 67188 requests
 2009/06/01 10:05:51| Waiting 5 seconds for active connections to finish
 2009/06/01 10:05:51| FD 25 Closing HTTP connection
 2009/06/01 10:05:51| WARNING: redirector #1 (FD 10) exited

 snip

 2009/06/01 10:05:51| WARNING: redirector #9 (FD 18) exited
 2009/06/01 10:05:51| Too few redirector processes are running
 2009/06/01 10:05:51| Starting new helpers
 2009/06/01 10:05:51| helperOpenServers: Starting 9/15 'squidGuard' processes
 2009/06/01 10:05:52| WARNING: redirector #10 (FD 19) exited

 snip

 I assume the problem you are reporting is the redirectors starting up again 
 during a shutdown. Is this correct?

 Amos
 --

 Yes Amos, you're absolutely correct.

 How can I solve this problem? For now I have increased the number of redirectors and am 
 monitoring progress.

 Thx & Rgds,

 Wong



[squid-users] Reporting tools ?

2009-06-02 Thread Maxime Gaudreault
Hi list

I tried Calamaris to find out how much bandwidth has been saved by using
Squid, but I want to be able to know how much bandwidth was saved during the week
or month.

Example: on June 15th, X MB saved since June 1st.

What can I use ?


[squid-users] squid reduces download speed

2009-06-02 Thread Wilhelm Drossart

Dear All!

At our school we installed an internet router running IPCop. The German 
company Deutsche Telekom is our provider. Our download rate 
is as high as 50,000 kbit/s when we do not use the proxy. When we 
use the proxy (Squid 2.7), however, the download speed drops to approx. 
20,000 kbit/s. We tested the router with different hardware, with no visible 
success so far.


Hopefully you can help us further. I am looking forward to your esteemed 
reply.


Best regards

Wilhelm Drossart
Deputy Headmaster
Berufsbildungszentrum Neuss Weingartstrasse 



Re: [squid-users] http burst

2009-06-02 Thread Chris Robertson

RoLaNd RoLaNd wrote:


Hello,

i just configured two delay pools.

acl limitedto8 src 192.168.75.0/255.255.255.0
acl mySubnet url_regex -i 192.168.75
  


url_regex?  Really?  Why not a dstdom_regex?  Furthermore, why are you 
specifying that the match should be made without regard to character 
case, when there are no alpha characters in the pattern?



delay_pools 2
delay_class 1 2
delay_parameters 1 -1/-1 -1/-1
  


So why do you have a delay pool, if it's not delaying the traffic?  Is 
it in the interest of NOT delaying traffic with 192.168.75 in the URL*?



#magic_words1: 192.168 we have set before
delay_access 1 allow mySubnet
delay_class 2 3
delay_access 2 allow limitedto8
  


delay_access 2 allow limitedto8 !mySubnet

would perform the same function.


delay_access 2 deny all
delay_parameters 2 64000/64000 -1/-1 8000/64000


such config works great for downloads as they're limited and so on..
though browsing is understandably slow..
is there a way i could give a burst for new http sessions so a page could open 
faster?
  


You already are.  Individuals are given a 512kbit initial bucket and a 
64kbit/sec pipe.  The first 512kbit is only limited by the aggregate.


Chris

* Using a regex without escaping the wildcard character (.) means that 
192Q168!75 will also be a match.
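A tightened version of the ACL pair discussed above (a sketch, applying the comments: dstdom_regex instead of url_regex, dots escaped, -i flag dropped):

```
acl limitedto8 src 192.168.75.0/255.255.255.0
acl mySubnet dstdom_regex 192\.168\.75
```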


Re: [squid-users] http burst

2009-06-02 Thread Leonardo Carneiro
Why don't you try giving your users a larger pool? Here I have a 1 Mb link, 
so I created pools of 10 megabytes with a regen rate of 20 kilobytes/s. 
If anyone tries to download anything bigger than 10 MB, it will slow down 
to only 20 KB/s (180 kbps), but web browsing is great.


Chris Robertson escreveu:

RoLaNd RoLaNd wrote:


Hello,

i just configured two delay pools.

acl limitedto8 src 192.168.75.0/255.255.255.0
acl mySubnet url_regex -i 192.168.75
  


url_regex?  Really?  Why not a dstdom_regex?  Furthermore, why are you 
specifying that the match should be made without regards to character 
case, when there are no alpha characters in the pattern?



delay_pools 2
delay_class 1 2
delay_parameters 1 -1/-1 -1/-1
  


So why do you have a delay pool, if it's not delaying the traffic?  Is 
it in the interest of NOT delaying traffic with 192.168.75 in the URL*?



#magic_words1: 192.168 we have set before
delay_access 1 allow mySubnet
delay_class 2 3
delay_access 2 allow limitedto8
  


delay_access 2 allow limitedto8 !mySubnet

would perform the same function.


delay_access 2 deny all
delay_parameters 2 64000/64000 -1/-1 8000/64000


such config works great for downloads as they're limited and so on..
though browsing is understandably slow..
is there a way i could give a burst for new http sessions so a page 
could open faster?
  


You already are.  Individuals are given a 512kbit initial bucket and a 
64kbit/sec pipe.  The first 512kbit is only limited by the aggregate.


Chris

*using a regex, and not escaping the wild character (.) means that 
192Q168!72 will be a match.




--

*Leonardo de Souza Carneiro*
*Veltrac - Tecnologia em Logística.*
lscarne...@veltrac.com.br mailto:lscarne...@veltrac.com.br
http://www.veltrac.com.br http://www.veltrac.com.br/
/Fone Com.: (43)2105-5601/
/Av. Higienópolis 1601 Ed. Eurocenter Sl. 803/
/Londrina- PR/
/Cep: 86015-010/





Re: [squid-users] Squid + Kerberos + Active Directory

2009-06-02 Thread Truth Seeker


Thanks Amos. I followed that link and completed the steps, but it is not 
working for me. Please look into the following details and kindly guide me to 
achieve the goal.

The following information is included:
1. squid.conf
2. debug output from cache.log

Contents of my squid.conf:

 grep -v ^# /etc/squid/squid.conf | grep -v ^$
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
### For ACtive Directory Inegration
auth_param negotiate program  /usr/lib/squid/squid_kerb_auth
auth_param negotiate children 10
auth_param negotiate keep_alive on
acl auth proxy_auth REQUIRED
http_access deny !auth
http_access allow auth
http_access deny all
http_access allow localhost
http_access deny all
icp_access allow localnet
icp_access deny all
htcp_access allow localnet
htcp_access deny all
http_port 8080
hierarchy_stoplist cgi-bin ?
access_log /var/log/squid/access.log squid
debug_options ALL,1 33,2 28,9
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern (cgi-bin|\?)0   0%  0
refresh_pattern .   0   20% 4320
icp_port 3130
coredump_dir /var/spool/squid



Contents of cache.log while accessing from a Windows client that is a member of 
our domain:

2009/06/02 21:38:06.486| aclCheckFast: list: 0x8a8ff60
2009/06/02 21:38:06.486| ACLChecklist::preCheck: 0xbfb8ae94 checking 
'ident_lookup_access deny all'
2009/06/02 21:38:06.486| ACLList::matches: checking all
2009/06/02 21:38:06.486| ACL::checklistMatches: checking 'all'
2009/06/02 21:38:06.486| aclMatchIp: '192.168.4.139' found
2009/06/02 21:38:06.486| ACL::ChecklistMatches: result for 'all' is 1
2009/06/02 21:38:06.486| ACLList::matches: result is true
2009/06/02 21:38:06.486| aclmatchAclList: 0xbfb8ae94 returning true (AND list 
satisfied)
2009/06/02 21:38:06.486| ACLChecklist::markFinished: 0xbfb8ae94 checklist 
processing finished
2009/06/02 21:38:06.486| ACLChecklist::~ACLChecklist: destroyed 0xbfb8ae94
2009/06/02 21:38:06.487| ACLChecklist::preCheck: 0x8d9c188 checking 
'http_access allow manager localhost'
2009/06/02 21:38:06.487| ACLList::matches: checking manager
2009/06/02 21:38:06.487| ACL::checklistMatches: checking 'manager'
2009/06/02 21:38:06.487| ACL::ChecklistMatches: result for 'manager' is 0
2009/06/02 21:38:06.487| ACLList::matches: result is false
2009/06/02 21:38:06.487| aclmatchAclList: 0x8d9c188 returning false (AND list 
entry failed to match)
2009/06/02 21:38:06.487| aclmatchAclList: async=0 nodeMatched=0 
async_in_progress=0 lastACLResult() = 0 finished() = 0
2009/06/02 21:38:06.487| ACLChecklist::preCheck: 0x8d9c188 checking 
'http_access deny manager'
2009/06/02 21:38:06.487| ACLList::matches: checking manager
2009/06/02 21:38:06.487| ACL::checklistMatches: checking 'manager'
2009/06/02 21:38:06.487| ACL::ChecklistMatches: result for 'manager' is 0
2009/06/02 21:38:06.487| ACLList::matches: result is false
2009/06/02 21:38:06.487| aclmatchAclList: 0x8d9c188 returning false (AND list 
entry failed to match)
2009/06/02 21:38:06.487| aclmatchAclList: async=0 nodeMatched=0 
async_in_progress=0 lastACLResult() = 0 finished() = 0
2009/06/02 21:38:06.487| ACLChecklist::preCheck: 0x8d9c188 checking 
'http_access deny !Safe_ports'
2009/06/02 21:38:06.487| ACLList::matches: checking !Safe_ports
2009/06/02 21:38:06.487| ACL::checklistMatches: checking 'Safe_ports'
2009/06/02 21:38:06.487| ACL::ChecklistMatches: result for 'Safe_ports' is 1
2009/06/02 21:38:06.487| ACLList::matches: result is false
2009/06/02 21:38:06.488| aclmatchAclList: 0x8d9c188 returning false (AND list 
entry failed to match)
2009/06/02 21:38:06.488| aclmatchAclList: async=0 nodeMatched=0 
async_in_progress=0 lastACLResult() = 0 finished() = 0
2009/06/02 21:38:06.488| ACLChecklist::preCheck: 0x8d9c188 checking 
'http_access deny CONNECT !SSL_ports'
2009/06/02 21:38:06.488| ACLList::matches: checking CONNECT
2009/06/02 21:38:06.488| ACL::checklistMatches: checking 'CONNECT'
2009/06/02 21:38:06.488| ACL::ChecklistMatches: result for 'CONNECT' is 0
2009/06/02 21:38:06.488| ACLList::matches: result is false
2009/06/02 21:38:06.488| 

Re: [squid-users] is this possible?

2009-06-02 Thread botemout

Thanks for the hint.

I'm using the collapsed_forwarding and quick_abort_* options and seem to 
be having some luck.  There are, however, oddities.  It seems that 
sometimes additional requests are still being forwarded to the 
underlying web server.  Other than squidclient mgr:active_requests, are 
there any other ways of monitoring the status of subsequent requests 
that should be deferred to wait for the original?


thanks

JR

Amos Jeffries wrote:

botem...@gmail.com wrote:

Greetings

We have an unusual problem.

One of our clients uses a program to request map data from one of our 
servers.  We have a reverse proxy squid cache in front of it.  Client 
hits squid, which gets data from web server and serves it back to client.


Here's the problem.  The client uses broken code that times out before the 
map data is finished processing.  It then resubmits the request.  
It'll do this about 10 times before finally dying.  Each new request, 
however, also times out, so nothing is done and lots of time is wasted 
(as a single one of these requests might take 5 minutes to return).


So, I've proposed that we use url_rewrite_program to pass the request 
to  a program which makes the request to the webserver (and DOESN'T 
timeout!), it then returns the same URL but by this time the object is 
in the cache and the original squid process returns the data from the 
cache.


Is this craziness?  Anyone do anything like this before?  Or is there 
some better, easier way to handle this?


There are a couple of things worth trying:

 * quick_abort_min/quick_abort_max. Make sure they are set to allow 
Squid to finish the first request even after the client is gone. This will 
at least give the retries shorter generation times.


 * If you are using Squid-2, collapsed_forwarding is your very best 
friend. Coupled with the above it will ensure only the first request gets 
to the backend and later ones get served whatever data it came up with.



 * Personally I'd also stick a 5-10 minute minimum cache time on all 
results if the backend takes 5 minutes to generate.


Short caching times still let you update relatively quickly, but you 
want results to stick around just long enough to catch followups, so that 
if all clients suddenly got this terrible software they won't kill your 
service with retries.
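
Those three suggestions translate to squid.conf lines roughly like this (a sketch; the /mapdata/ pattern and the 10-minute times are made-up values for illustration):

```
# Finish every started fetch even if the client disconnects (-1 = no limit)
quick_abort_min -1 KB

# Squid-2 only: collapse concurrent requests for the same URL into one backend fetch
collapsed_forwarding on

# Hypothetical pattern: keep generated map results cached for at least ~10 minutes
refresh_pattern -i /mapdata/ 10 20% 10
```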


Amos


[squid-users] Re: Squid + Kerberos + Active Directory

2009-06-02 Thread Markus Moeller

Can you send me the following;

fqdn
hostname
klist -kt squid.keytab  (if you use MIT Kerberos)


Does your startup script set the KRB5_KTNAME environment variable ?

Can you do a successful kinit -k -t squid.keytab HTTP/hostname ?

Can you add a -d to squid_kerb_auth and send me the output ?

Did you use the fqdn in IE  to point to squid ?

Regards
Markus


Truth Seeker truth_seeker_3...@yahoo.com wrote in message 
news:177962.48305...@web43409.mail.sp1.yahoo.com...



Thanks Amos. I followed that link and completed the steps. But it is 
not working for me. Please look into the following details and kindly guide 
me to achieve the goal.


the following information is herewith:
1. squid.conf
2. debugged info from cache.log

contents of my squid.conf

grep -v ^# /etc/squid/squid.conf | grep -v ^$
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
### For Active Directory Integration
auth_param negotiate program  /usr/lib/squid/squid_kerb_auth
auth_param negotiate children 10
auth_param negotiate keep_alive on
acl auth proxy_auth REQUIRED
http_access deny !auth
http_access allow auth
http_access deny all
http_access allow localhost
http_access deny all
icp_access allow localnet
icp_access deny all
htcp_access allow localnet
htcp_access deny all
http_port 8080
hierarchy_stoplist cgi-bin ?
access_log /var/log/squid/access.log squid
debug_options ALL,1 33,2 28,9
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern (cgi-bin|\?) 0 0% 0
refresh_pattern . 0 20% 4320
icp_port 3130
coredump_dir /var/spool/squid



contents of cache.log while accessing from a windows client who is a member 
of our domain.


2009/06/02 21:38:06.486| aclCheckFast: list: 0x8a8ff60
2009/06/02 21:38:06.486| ACLChecklist::preCheck: 0xbfb8ae94 checking 
'ident_lookup_access deny all'

2009/06/02 21:38:06.486| ACLList::matches: checking all
2009/06/02 21:38:06.486| ACL::checklistMatches: checking 'all'
2009/06/02 21:38:06.486| aclMatchIp: '192.168.4.139' found
2009/06/02 21:38:06.486| ACL::ChecklistMatches: result for 'all' is 1
2009/06/02 21:38:06.486| ACLList::matches: result is true
2009/06/02 21:38:06.486| aclmatchAclList: 0xbfb8ae94 returning true (AND 
list satisfied)
2009/06/02 21:38:06.486| ACLChecklist::markFinished: 0xbfb8ae94 checklist 
processing finished

2009/06/02 21:38:06.486| ACLChecklist::~ACLChecklist: destroyed 0xbfb8ae94
2009/06/02 21:38:06.487| ACLChecklist::preCheck: 0x8d9c188 checking 
'http_access allow manager localhost'

2009/06/02 21:38:06.487| ACLList::matches: checking manager
2009/06/02 21:38:06.487| ACL::checklistMatches: checking 'manager'
2009/06/02 21:38:06.487| ACL::ChecklistMatches: result for 'manager' is 0
2009/06/02 21:38:06.487| ACLList::matches: result is false
2009/06/02 21:38:06.487| aclmatchAclList: 0x8d9c188 returning false (AND 
list entry failed to match)
2009/06/02 21:38:06.487| aclmatchAclList: async=0 nodeMatched=0 
async_in_progress=0 lastACLResult() = 0 finished() = 0
2009/06/02 21:38:06.487| ACLChecklist::preCheck: 0x8d9c188 checking 
'http_access deny manager'

2009/06/02 21:38:06.487| ACLList::matches: checking manager
2009/06/02 21:38:06.487| ACL::checklistMatches: checking 'manager'
2009/06/02 21:38:06.487| ACL::ChecklistMatches: result for 'manager' is 0
2009/06/02 21:38:06.487| ACLList::matches: result is false
2009/06/02 21:38:06.487| aclmatchAclList: 0x8d9c188 returning false (AND 
list entry failed to match)
2009/06/02 21:38:06.487| aclmatchAclList: async=0 nodeMatched=0 
async_in_progress=0 lastACLResult() = 0 finished() = 0
2009/06/02 21:38:06.487| ACLChecklist::preCheck: 0x8d9c188 checking 
'http_access deny !Safe_ports'

2009/06/02 21:38:06.487| ACLList::matches: checking !Safe_ports
2009/06/02 21:38:06.487| ACL::checklistMatches: checking 'Safe_ports'
2009/06/02 21:38:06.487| ACL::ChecklistMatches: result for 'Safe_ports' is 1
2009/06/02 21:38:06.487| ACLList::matches: result is false
2009/06/02 21:38:06.488| aclmatchAclList: 0x8d9c188 returning false (AND 
list entry failed to match)
2009/06/02 21:38:06.488| aclmatchAclList: async=0 nodeMatched=0 
async_in_progress=0 lastACLResult() = 0 

[squid-users] Sharepoint/SQUID

2009-06-02 Thread spookrat
 Recently set up Squid, and while testing I discovered that when
using the built-in search for SharePoint I would get a message
from it like this:

The Web application at http://mysharepointsite could not be found.
Verify that you have typed the URL correctly. If the URL should be
serving existing content, the system administrator may need to add a
new request URL mapping to the intended application.

 When I shut off the Squid proxy this functionality does work.  I
receive the following messages in the Squid logs:

1243961498.622 32 mymachinename.mydomainname.com TCP_MISS/404 1044
GET http://mysharepointsite/nwpcadm/KB/_themes/CustomJet/topnavselected_jet.gif
- NONE/- -
1243961498.627 34 mymachinename.mydomainname.com TCP_MISS/401 2239
GET http://mysharepointsite/nwpcadm/KB/_themes/CustomJet/navshape_jet.jpg
- DIRECT/10.0.2.135 text/html
1243961498.653 62 mymachinename.mydomainname.com TCP_MISS/404 1044
GET 
http://mysharepointsite/nwpcadm/KB/_themes/CustomJet/topnavunselected_jet.gif
- NONE/- -
1243961498.656 33 mymachinename.mydomainname.com TCP_MISS/401 2239
GET http://mysharepointsite/nwpcadm/KB/_themes/CustomJet/pagebackgrad_jet.gif
- NONE/- text/html
1243961498.657 62 mymachinename.mydomainname.com TCP_MISS/404 1044
GET 
http://mysharepointsite/nwpcadm/KB/_themes/CustomJet/siteactionsmenugrad_jet.gif
- DIRECT/10.0.2.135 -
1243961498.657 30 mymachinename.mydomainname.com TCP_MISS/401 2239
GET http://mysharepointsite/nwpcadm/KB/_themes/CustomJet/pageTitleBKGD_jet.gif
- DIRECT/10.0.2.135 text/html
1243961498.690 35 mymachinename.mydomainname.com TCP_MISS/404 1044
GET http://mysharepointsite/nwpcadm/KB/_themes/CustomJet/navshape_jet.jpg
- NONE/- -
1243961498.690 32 mymachinename.mydomainname.com TCP_MISS/404 1044
GET http://mysharepointsite/KB/_themes/CustomJet/pagebackgrad_jet.gif
- NONE/- -
1243961498.709 50 mymachinename.mydomainname.com TCP_MISS/404 1044
GET http://mysharepointsite/nwpcadm/KB/_themes/CustomJet/pageTitleBKGD_jet.gif
- NONE/- -

First I thought maybe it was because it was attempting to cache .aspx
pages from SharePoint.  So I threw an always_direct into my squid.conf
file without much luck.  Any thoughts on where I might be a bit on the
misguided side?

Thanks!
Dave


[squid-users] ACL based on child info

2009-06-02 Thread Evelio Vila
Hi everyone!

In my squid, I use 'login=*:password' to pass the username to a parent
proxy. The parent proxy needs the user login information to conform some
acls. 

My question is: which ACL type should I use in the parent proxy to match
the received login info?

I have my users in a plain text file.

I've tried proxy_auth but it doesn't match the received login against the
text file.


thanks in advance!

vila




[squid-users] Security of NTLM authentication

2009-06-02 Thread Leonardo Rodrigues


   Hello Guys,

   a simple question. I know that the basic authentication scheme 
transmits the username/password in cleartext over the wire. It's base64 
encoded, but that is trivially detected and decoded, which makes it not 
the most secure one to use.


   is the NTLM authentication scheme more secure than the basic one? I 
mean, does the NTLM authentication scheme transmit cleartext (or simply 
encoded) usernames/passwords over the wire ?



--


Atenciosamente / Sincerily,
Leonardo Rodrigues
Solutti Tecnologia
http://www.solutti.com.br

Minha armadilha de SPAM, NÃO mandem email
gertru...@solutti.com.br
My SPAMTRAP, do not email it






Re: [squid-users] Squid + Kerberos + Active Directory

2009-06-02 Thread Amos Jeffries
On Tue, 2 Jun 2009 11:48:51 -0700 (PDT), Truth Seeker
truth_seeker_3...@yahoo.com wrote:
 Thanks Amos. I followed that link and done the steps completely. But it
is
 not working for me. PLease look in to the following details and kindly
 guide me to achieve the goal.
 
 the following informations are herewith;
 1. squid.conf
 2. debugged info from cache.log
 
 contents of my squid.conf
 
  grep -v ^# /etc/squid/squid.conf | grep -v ^$
 acl manager proto cache_object
 acl localhost src 127.0.0.1/32
 acl to_localhost dst 127.0.0.0/8
 acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
 acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
 acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
 acl SSL_ports port 443
 acl Safe_ports port 80  # http
 acl Safe_ports port 21  # ftp
 acl Safe_ports port 443 # https
 acl Safe_ports port 70  # gopher
 acl Safe_ports port 210 # wais
 acl Safe_ports port 1025-65535  # unregistered ports
 acl Safe_ports port 280 # http-mgmt
 acl Safe_ports port 488 # gss-http
 acl Safe_ports port 591 # filemaker
 acl Safe_ports port 777 # multiling http
 acl CONNECT method CONNECT
 http_access allow manager localhost
 http_access deny manager
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 ### For Active Directory Integration
 auth_param negotiate program  /usr/lib/squid/squid_kerb_auth
 auth_param negotiate children 10
 auth_param negotiate keep_alive on
 acl auth proxy_auth REQUIRED
 http_access deny !auth
 http_access allow auth

So only authenticated users can use the proxy from anywhere.

 http_access deny all

... nobody else can use it at all.
The following http_access lines are never matched.
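
For reference, a reordering sketch that keeps every rule reachable (same rules as the posted config, with the localhost allow moved above the final deny):

```
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access deny !auth
http_access allow auth
http_access deny all
```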

 http_access allow localhost
 http_access deny all
 icp_access allow localnet
 icp_access deny all
 htcp_access allow localnet
 htcp_access deny all
 http_port 8080
 hierarchy_stoplist cgi-bin ?
 access_log /var/log/squid/access.log squid
 debug_options ALL,1 33,2 28,9
 refresh_pattern ^ftp:          1440    20%     10080
 refresh_pattern ^gopher:       1440    0%      1440
 refresh_pattern (cgi-bin|\?)   0       0%      0
 refresh_pattern .              0       20%     4320
 icp_port 3130
 coredump_dir /var/spool/squid
 
 
 
 contents of cache.log while accessing from a windows client who is a
member
 of our domain.
 

The trace shows two requests arriving and being checked; they get as far as
deny !auth, and Squid sends a 407 auth-required challenge back to the
browser. If these are the same request, it looks like the browser does not
handle Kerberos.

snip trace
 
 
 
 
 --- On Tue, 6/2/09, Amos Jeffries squ...@treenet.co.nz wrote:
 
 From: Amos Jeffries squ...@treenet.co.nz
 Subject: Re: [squid-users] Squid + Kerberos + Active Directory
 To: Truth Seeker truth_seeker_3...@yahoo.com
 Cc: Squid maillist squid-users@squid-cache.org
 Date: Tuesday, June 2, 2009, 2:53 PM
 Truth Seeker wrote:
  Dear Pro's
  
  I am trying to configure a squid proxy in Windows 2003
 Active
  Directory Environment. I need to make the migration
 from MS ISA Proxy
  to Squid 3.0 Stable13 on CentOS 5.2
  
  My primary goal is; 1. authenticate users without
 asking
  username/password (i mean like how a normal windows
 client will
  behave when he connects to internet through MS ISA
 Proxy in a Active
  Directory environment - which will not prompt
 username/password
  because of the Kerberos) by using the kerberos to
 communicate with
  the Win 2k3 Domain Controller.
  
  2. Without any downtime.
  
  
  Am i dreaming about this... ??? is this a workable
 target??? Is there
  any issue in this environment???
  
  Awaiting your quick feedbacks ...
  
 
 Possible.
 http://wiki.squid-cache.org/ConfigExamples/Authenticate/Kerberos
 
 maybe even easy if you know what you are doing regarding
 Kerberos.
 
 Amos
 -- Please be using
   Current Stable Squid 2.7.STABLE6 or 3.0.STABLE15
   Current Beta Squid 3.1.0.8 or 3.0.STABLE16-RC1



Re: [squid-users] Sharepoint/SQUID

2009-06-02 Thread Amos Jeffries
On Tue, 2 Jun 2009 16:16:43 -0500, spookrat spook...@gmail.com wrote:
  Recently setup SQUID and while was testing discovered that while
 using the built in search for sharepoint that I would get a message
 from it like this;
 
 The Web application at http://mysharepointsite could not be found.
 Verify that you have typed the URL correctly. If the URL should be
 serving existing content, the system administrator may need to add a
 new request URL mapping to the intended application.
 
  When I shutoff the SQUID proxy this functionality does work.  I
 receive the following messages in the SQUID
 
 1243961498.622 32 mymachinename.mydomainname.com TCP_MISS/404 1044
 GET

http://mysharepointsite/nwpcadm/KB/_themes/CustomJet/topnavselected_jet.gif
 - NONE/- -
 1243961498.627 34 mymachinename.mydomainname.com TCP_MISS/401 2239
 GET http://mysharepointsite/nwpcadm/KB/_themes/CustomJet/navshape_jet.jpg
 - DIRECT/10.0.2.135 text/html
 1243961498.653 62 mymachinename.mydomainname.com TCP_MISS/404 1044
 GET

http://mysharepointsite/nwpcadm/KB/_themes/CustomJet/topnavunselected_jet.gif
 - NONE/- -
 1243961498.656 33 mymachinename.mydomainname.com TCP_MISS/401 2239
 GET
 http://mysharepointsite/nwpcadm/KB/_themes/CustomJet/pagebackgrad_jet.gif
 - NONE/- text/html
 1243961498.657 62 mymachinename.mydomainname.com TCP_MISS/404 1044
 GET

http://mysharepointsite/nwpcadm/KB/_themes/CustomJet/siteactionsmenugrad_jet.gif
 - DIRECT/10.0.2.135 -
 1243961498.657 30 mymachinename.mydomainname.com TCP_MISS/401 2239
 GET

http://mysharepointsite/nwpcadm/KB/_themes/CustomJet/pageTitleBKGD_jet.gif
 - DIRECT/10.0.2.135 text/html
 1243961498.690 35 mymachinename.mydomainname.com TCP_MISS/404 1044
 GET http://mysharepointsite/nwpcadm/KB/_themes/CustomJet/navshape_jet.jpg
 - NONE/- -
 1243961498.690 32 mymachinename.mydomainname.com TCP_MISS/404 1044
 GET http://mysharepointsite/KB/_themes/CustomJet/pagebackgrad_jet.gif
 - NONE/- -
 1243961498.709 50 mymachinename.mydomainname.com TCP_MISS/404 1044
 GET

http://mysharepointsite/nwpcadm/KB/_themes/CustomJet/pageTitleBKGD_jet.gif
 - NONE/- -
 
 First I thought maybe it was because it was attempting to cache .aspx
 pages from sharepoint.  So I threw an always_direct into my squid.conf
 file without much luck.  Any thoughts on where I might be a bit on the
 misguided side?

Well, it's very hard to tell what's going on, since you omit any details of
how you set up Squid.
'mysharepointsite' is not a proper domain name. That may be the problem.

Amos



Re: [squid-users] ACL based on child info

2009-06-02 Thread Amos Jeffries
On Tue, 02 Jun 2009 18:01:31 -0400, Evelio Vila v...@tesla.cujae.edu.cu
wrote:
 Hi everyone!
 
 In my squid, I use 'login=*:password' to pass the username to a parent
 proxy. The parent proxy needs the user login information to conform some
 acls. 
 
 My question is: wich acl type should I use in the parent proxy to match
 the received login info.
 
 i have my users in a plain text file.
 
 I've tried proxy_auth but it doesnt match the received login against the
 text file.

And the answer is

  proxy_auth

... but using an authentication helper that can authenticate against the
text file format you use.

If none of the bundled helpers do it right it's easy enough to write custom
basic auth helpers. Just adapt the dummy helper from
http://wiki.squid-cache.org/ConfigExamples/Authenticate/LoggingOnly
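
For instance, a minimal helper along these lines would do it (a sketch, not the bundled dummy helper: it assumes a username:password-per-line file format, and the paths in the comments are hypothetical):

```python
#!/usr/bin/env python
# Minimal Squid basic-auth helper sketch.
# squid.conf (hypothetical paths):
#   auth_param basic program /usr/local/bin/file_auth.py /etc/squid/users.txt
import sys

def load_users(path):
    """Parse a plain-text file of username:password lines (assumed format)."""
    users = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and ":" in line:
                name, _, pw = line.partition(":")
                users[name] = pw
    return users

def check(users, username, password):
    """True when the username exists and the password matches."""
    return users.get(username) == password

def main(users_file):
    users = load_users(users_file)
    for line in sys.stdin:                 # Squid writes "username password\n"
        parts = line.rstrip("\n").split(" ", 1)
        ok = len(parts) == 2 and check(users, parts[0], parts[1])
        print("OK" if ok else "ERR")
        sys.stdout.flush()                 # Squid waits for each reply

if __name__ == "__main__" and len(sys.argv) == 2:
    main(sys.argv[1])
```

Depending on the Squid version the credentials may arrive URL-encoded on stdin; check and decode if yours does.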

Don't forget that when passed login=*:password the parent proxy will
always receive the literal text 'password' as the password. To pass on the
real password you need the exact text 'login=PASS'.
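
In squid.conf terms, the difference on the child proxy is (the parent name and port here are made up):

```
# Sends the real username but the literal word "password" as everyone's password:
cache_peer parent.example.com parent 3128 0 no-query login=*:password

# Forwards each user's real credentials to the parent:
cache_peer parent.example.com parent 3128 0 no-query login=PASS
```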

Amos



[squid-users] next Squid 2.7 release?

2009-06-02 Thread Balaji Ganesan
Hi,
Can anyone please let me know when the next stable 2.7 release is
intended? I believe Windows 7 support is in the next release and I
would like to have that for my work. Also please let me know which
STABLE version that one will be.

Thanks
Balaji


Re: [squid-users] Security of NTLM authentication

2009-06-02 Thread Amos Jeffries
On Tue, 02 Jun 2009 19:44:03 -0300, Leonardo Rodrigues
leolis...@solutti.com.br wrote:
 Hello Guys,
 
 a simple question . i know that basic authentication schemas 
 transmit username/password in cleartext over the wire. It' base64 
 encoded, but it's trivially detected and decoded, which make them not 
 the most secure ones to use.
 
 do NTLM authentication schemas are more secure than basic ones, i 
 mean, do NTLM authentication schema transmit cleartext (or simply 
 encoded) username/passwords over the wire ?

NTLM uses a side channel directly between the domain controller and the
machine needing to check auth. I'm not sure how that is coded. The HTTP
side of the triangle includes a hash of the credentials.

One thing to be wary of is that NTLM hash strength is pretty much limited
by the Windows releases involved. The older versions used by Win9x are
hashes which are now trivially broken, none are completely secure. The
latest windows releases have deprecated it in favor of the much more secure
Kerberos (but that won't work with anything much older than XP and IE6).

There is also digest authentication, which is the IETF standard for secure
authentication over HTTP. Some people actually use it too. And it works
without needing windows or domain controllers.
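
A minimal digest setup looks something like this (a sketch; digest_pw_auth is the helper bundled with Squid, and the file paths are assumptions):

```
auth_param digest program /usr/lib/squid/digest_pw_auth /etc/squid/digest_users
auth_param digest children 5
auth_param digest realm proxy
acl authed proxy_auth REQUIRED
http_access allow authed
http_access deny all
```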

Amos



Re: [squid-users] next Squid 2.7 release?

2009-06-02 Thread Amos Jeffries
On Tue, 2 Jun 2009 16:44:50 -0700, Balaji Ganesan
bgane...@venturiwireless.com wrote:
 Hi,
 Can anyone please let me know when is the next stable 2.7 release
 intended. I believe Windows 7 support is on the next release and I
 would like to have that for my work. Also please let me know which
 STABLE version will that one be.
 
 Thanks
 Balaji

Henrik who maintains Squid-2 and makes these decisions for that branch is
taking a long overdue break from squid at present. He will be back at some
undefined point in the future.

The next numerical release of 2.7 will be 2.7.STABLE7 if it comes out.
No release is timelined at present, though I have little doubt there will
be one eventually.

Meanwhile you should contact Acme Consulting
(http://squid.acmeconsulting.it/) about an updated build.

Amos



Re: [squid-users] squid reduces download speed

2009-06-02 Thread Amos Jeffries
On Tue, 2 Jun 2009 19:25:06 +0200, Wilhelm Drossart
w.dross...@googlemail.com wrote:
 Dear All!
 
 At our school we installed a configuration of an internet router with IP 
 Cop. The German company Deutsche Telekom is our provider. Our download
rate
 
 is as high as 50,000 Kbit/s when we do not start the proxy. When we, 
 however,  use the proxy (squid 2.7) the download speed reduces to approx.

 20,000 kbit/s. We tested the router with different hardware; with no
 visible 
 success so far.
 
 Hopefully you can help us any further. I am looking forward to your
 esteemed 
 reply.
 

Could be a number of things.

To assist anyone helping you tune your Squid are you able to specify the
specs of the box the proxy is running on and the configuration of the
proxy?

Amos



Re: [squid-users] Security of NTLM authentication

2009-06-02 Thread Leonardo Rodrigues

Amos Jeffries escreveu:


One thing to be wary of is that NTLM hash strength is pretty much limited
by the Windows releases involved. The older versions used by Win9x are
hashes which are now trivially broken, none are completely secure. The
latest windows releases have deprecated it in favor of the much more secure
Kerberos (but that won't work with anything much older than XP and IE6).

  
   supporting Win9x is not needed and, if I can do anything to really 
disallow those from browsing, I will :)


   basically my clients will be WinXP and Vista, and Windows 2003/2008 
servers as well. There's absolutely no chance of having Win9x on my 
project, which seems to be good.



There is also digest authentication, which is the IETF standard for secure
authentication over HTTP. Some people actually use it too. And it works
without needing windows or domain controllers.

  


   having a domain controller is not a problem indeed. In fact I need 
Squid to use the AD usernames and passwords. Anyway, I'll look into 
digest authentication.


   thanks for the answer and for the hints.









Re: [squid-users] reverse proxy with SSL offloader issue

2009-06-02 Thread Amos Jeffries
On Tue, 02 Jun 2009 16:56:08 +0400, Mario Remy Almeida
malme...@isaaviation.ae wrote:
 Hi All,
 
 I downloaded SSL Certificate from verisign and exported pvt key from
 windows 2003 server
 
 in squid.conf I have this
 
 https_port 10.200.22.49:443 accel \
 cert=/etc/squid/keys/mail.airarabia.ae_cert.pem \
 key=/etc/squid/keys/pvtkey.pem defaultsite=mail.airarabia.ae
 
 when access https://mail.airarabia.ae 
 browser gives error 
 
 Secure Connection Failed
 mail.airarabia.ae uses an invalid security certificate.
 
 The certificate is not trusted because the issuer certificate is
 unknown.
 
 (Error code: sec_error_unknown_issuer)
 * This could be a problem with the server's configuration, or it
 could be someone trying to impersonate the server.
 
 * If you have connected to this server successfully in the past, the
 error may be temporary, and you can try again later.
 
 and in cache.log I get this
 
 clientNegotiateSSL: Error negotiating SSL connection on FD 23:
 error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca (1/0)
 
 
 What could be the problem please help
 

The SSL chain of trust is broken on one of the SSL links.

Two things to try:
 1) Add sslflags=DONT_VERIFY_PEER. If that works, it's the cache_peer
link that is broken. If it still fails, then it's the https_port certificate.

 2) Look at the certificate itself and see if it contains the whole chain of
trust (the server certificate concatenated with the signing authority cert).
I'm a bit hazy about whether the https_port cert needs the signing authority
in it when the certs are of the unlinked-chain type (I forget what the
right name is even). But I think cache_peer needs the full chain to be in
the cert.
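
Checking and assembling a chain can be done with openssl along these lines (a sketch that generates a throwaway CA and server cert to stand in for the real VeriSign intermediate and site certificate):

```shell
# Throwaway CA and a server cert signed by it (illustration only)
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.pem \
    -days 1 -subj "/CN=Test CA"
openssl req -newkey rsa:2048 -nodes -keyout srv.key -out srv.csr \
    -subj "/CN=test.example"
openssl x509 -req -in srv.csr -CA ca.pem -CAkey ca.key -CAcreateserial \
    -out srv.pem -days 1

# Check the chain of trust: prints "srv.pem: OK" when the issuer is known
openssl verify -CAfile ca.pem srv.pem

# Build the full-chain file: server certificate first, then the issuer
cat srv.pem ca.pem > fullchain.pem
```

Against a live server, `openssl s_client -connect host:443 -showcerts` shows which certificates the port is actually sending.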

Amos



Re: [squid-users] http burst

2009-06-02 Thread giob...@gmail.com
IMHO, you should create an ACL based on file type, instead of src IP. That
will take care of the web traffic.

-giobuon

On Wed, Jun 3, 2009 at 1:47 AM, Leonardo Carneiro
lscarne...@veltrac.com.br wrote:
 why don't you try giving your users a larger pool? Here I have a 1 Mb link, so
 I created pools of 10 megabytes with a regen rate of 20 kilobytes/s. If anyone
 tries to download anything bigger than 10 MB, it will slow down to only
 20 KB/s (160 kbps), but the web browsing is great.

 Chris Robertson escreveu:

 RoLaNd RoLaNd wrote:

 Hello,

 i just configured two delay pools.

 acl limitedto8 src 192.168.75.0/255.255.255.0
 acl mySubnet url_regex -i 192.168.75


 url_regex?  Really?  Why not a dstdom_regex?  Furthermore, why are you
 specifying that the match should be made without regards to character case,
 when there are no alpha characters in the pattern?

 delay_pools 2
 delay_class 1 2
 delay_parameters 1 -1/-1 -1/-1


 So why do you have a delay pool, if it's not delaying the traffic?  Is it
 in the interest of NOT delaying traffic with 192.168.75 in the URL*?

 #magic_words1: 192.168 we have set before
 delay_access 1 allow mySubnet
 delay_class 2 3
 delay_access 2 allow limitedto8


 delay_access 2 allow limitedto8 !mySubnet

 would perform the same function.

 delay_access 2 deny all
 delay_parameters 2 64000/64000 -1/-1 8000/64000


 such config works great for downloads as they're limited and so on..
 though browsing is understandably slow..
 is there a way i could give a burst for new http sessions so a page could
 open faster?


 You already are.  Individuals are given a 512kbit initial bucket and a
 64kbit/sec pipe.  The first 512kbit is only limited by the aggregate.

 Chris

 *using a regex, and not escaping the wildcard character (.) means that 
 192Q168!75 will be a match.
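
Pulling those suggestions together, the pool definitions might look like this (a sketch; the subnet and rates are the ones from the post above):

```
acl limitedto8 src 192.168.75.0/255.255.255.0
acl mySubnet dstdom_regex 192\.168\.75
delay_pools 2
delay_class 1 2
delay_parameters 1 -1/-1 -1/-1
delay_access 1 allow mySubnet
delay_access 1 deny all
delay_class 2 3
delay_parameters 2 64000/64000 -1/-1 8000/64000
delay_access 2 allow limitedto8 !mySubnet
delay_access 2 deny all
```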


 --

 *Leonardo de Souza Carneiro*
 *Veltrac - Tecnologia em Logística.*
 lscarne...@veltrac.com.br mailto:lscarne...@veltrac.com.br
 http://www.veltrac.com.br http://www.veltrac.com.br/
 /Fone Com.: (43)2105-5601/
 /Av. Higienópolis 1601 Ed. Eurocenter Sl. 803/
 /Londrina- PR/
 /Cep: 86015-010/






Re: [squid-users] http burst

2009-06-02 Thread giob...@gmail.com
Sorry for the 2nd post, but I think I should mention that you should read
Chris's advice carefully. It's useful. Thanks Chris.

On Wed, Jun 3, 2009 at 11:06 AM, giob...@gmail.com giob...@gmail.com wrote:
 IMHO, you should create acl base on file type, instead of src IP. It
 will take care about web traffic.

 -giobuon








Re: [squid-users] reverse proxy with SSL offloader issue

2009-06-02 Thread Mario Remy Almeida
Hi Amos,

I don't know how to check the chain of trust.

I concatenated the CSR and the certificate, but I don't know whether that
is the right thing to do; can you please tell me?

=== squid.conf ===
https_port 10.200.22.49:443 accel \
cert=/etc/squid/keys/mail.airarabia.ae_cert.pem \
key=/etc/squid/keys/newpvtkey.pem defaultsite=mail.airarabia.ae

cache_peer 10.200.22.12 parent 80 0 no-query originserver login=PASS \
front-end-https=on name=owaServer sslflags=DONT_VERIFY_PEER

//Remy

On Wed, 2009-06-03 at 12:51 +1200, Amos Jeffries wrote:
 On Tue, 02 Jun 2009 16:56:08 +0400, Mario Remy Almeida
 malme...@isaaviation.ae wrote:
  Hi All,
  
  I downloaded SSL Certificate from verisign and exported pvt key from
  windows 2003 server
  
  in squid.conf I have this
  
  https_port 10.200.22.49:443 accel \
  cert=/etc/squid/keys/mail.airarabia.ae_cert.pem \
  key=/etc/squid/keys/pvtkey.pem defaultsite=mail.airarabia.ae
  
  when access https://mail.airarabia.ae 
  browser gives error 
  
  Secure Connection Failed
  mail.airarabia.ae uses an invalid security certificate.
  
  The certificate is not trusted because the issuer certificate is
  unknown.
  
  (Error code: sec_error_unknown_issuer)
  * This could be a problem with the server's configuration, or it
  could be someone trying to impersonate the server.
  
  * If you have connected to this server successfully in the past, the
  error may be temporary, and you can try again later.
  
  and in cache.log I get this
  
  clientNegotiateSSL: Error negotiating SSL connection on FD 23:
  error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca (1/0)
  
  
  What could be the problem please help
  
 
 SSL chain of trust is broken on one of the SSL links.
 
 Two things to try:
  1) adding sslflags=DONT_VERIFY_PEER  - If that works its the cache_peer
 link broken. If still fails then its the https_port certificate.
 
 Next look at the certificate itself, see if it contains the whole chain of
 trust (concatenated certificate + signing authority cert).
 I'm a bit hazy about whether the https_port needs the signing authority in
 it or not when the certs are of the unlinked chain type (I forget what the
 right name is even). But I think cache_peer needs the full chain to be in
 the cert.
 
 Amos
 





[squid-users] caching the uncacheable

2009-06-02 Thread Chudy Fernandez

Date: Wed, 21 Jan 2004 19:50:30 GMT
Pragma: no-cache
Cache-Control: private, no-cache, no-cache=Set-Cookie, proxy-revalidate
Expires: Wed, 17 Sep 1975 21:32:10 GMT
Last-Modified: Wed, 19 Apr 2000 11:43:00 GMT

How do I cache content with this kind of headers?

ignore-private ignore-no-cache ignore-must-revalidate ignore-reload 
override-expire override-lastmod
what else do I need? ... store-stale?
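
Those options all go on a single refresh_pattern line; something like this (a sketch - the pattern and times are made up, and option availability varies between Squid releases):

```
refresh_pattern -i example\.com 1440 20% 10080 override-expire override-lastmod ignore-reload ignore-no-cache ignore-private ignore-must-revalidate
```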


  


[squid-users] always MISS when using storeurl with vary objects - solved

2009-06-02 Thread Chudy Fernandez

example: xxx.photobucket.com
The patch at http://www.squid-cache.org/bugs/show_bug.cgi?id=2678 works.


Thanks to Amos for the store.log tip