RE: [squid-users] HTTP_Miss/200,304 Very Slow responsetime. Experts please help.

2010-03-29 Thread GIGO .

Dear Amos,

Thank you so much. I will try troubleshooting along the lines you suggested.


regards,

Bilal Aslam


> Date: Tue, 30 Mar 2010 17:05:50 +1300
> From: squ...@treenet.co.nz
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] HTTP_Miss/200,304 Very Slow responsetime. Experts 
> please help.
>
> GIGO . wrote:
>> I am using ISA server as a cache_peer parent and running multiple instances on 
>> my Squid server. However, I am failing to understand why the behaviour of 
>> Squid is extremely slow. At home, where I have direct access to the internet, the 
>> same setup works fine. Please can somebody help me out.
>>
>> regards,
>>
>> Bilal Aslam
>>
>
> First thing to check is access times on the ISA and whether the problem
> is actually Squid or something else down the software chain.
>
> Extremely slow times are usually the result of DNS failures. Each of the
> proxies needs to do its own lookups, so any small failure will compound
> into a big delay very fast.
>
> Your squid does its own DNS lookup on every request to figure out if
> it's part of localservers ACL or not (in both the always_direct and
> cache access controls).
>
> Amos
>
>>
>> ---
>> My squid server has internet access by being a SecureNAT client of the ISA 
>> Server.
>>
>> My Configuration file for first Instance:
>> visible_hostname squidLhr
>> unique_hostname squidMain
>> pid_filename /var/run/squid.pid
>> http_port 8080
>> icp_port 0
>> snmp_port 3161
>> access_log /var/logs/access.log squid
>> cache_log /var/logs/cache.log
>> cache_store_log /var/logs/store.log
>> cache_effective_user proxy
>> cache_peer 127.0.0.1 parent 3128 0 default no-digest no-query
>> prefer_direct off
>> # never_direct allow all (handy to test whether the instances are working in collaboration)
>>
>> cache_dir aufs /var/spool/squid 1 16 256
>> coredump_dir /var/spool/squid
>> cache_swap_low 75
>> cache_replacement_policy lru
>> refresh_pattern ^ftp: 1440 20% 10080
>> refresh_pattern ^gopher: 1440 0% 1440
>> refresh_pattern . 0 20% 4320
>> acl manager proto cache_object
>> acl localhost src 127.0.0.1/32
>> acl to_localhost dst 127.0.0.0/8
>> #Define Local Network.
>> acl FcUsr src "/etc/squid/FcUsr.conf"
>> acl PUsr src "/etc/squid/PUsr.conf"
>> acl RUsr src "/etc/squid/RUsr.conf"
>> #Define Local Servers
>> acl localServers dst 10.0.0.0/8
>> #Defining & allowing ports section
>> acl SSL_ports port 443 #https
>> acl Safe_ports port 80 # http
>> acl Safe_ports port 21 # ftp
>> acl Safe_ports port 443 # https
>> acl Safe_ports port 70 # gopher
>> acl Safe_ports port 210 # wais
>> acl Safe_ports port 1025-65535 # unregistered ports
>> acl Safe_ports port 280 # http-mgmt
>> acl Safe_ports port 488 # gss-http
>> acl Safe_ports port 591 # filemaker
>> acl Safe_ports port 777 # multiling http
>> acl CONNECT method CONNECT
>> # Only allow cachemgr access from localhost
>> http_access allow manager localhost
>> http_access deny manager
>> # Deny request to unknown ports
>> http_access deny !Safe_ports
>> # Deny request to other than SSL ports
>> http_access deny CONNECT !SSL_ports
>> #Allow access from localhost
>> http_access allow localhost
>> # Local servers should never be forwarded to neighbours/peers, and they should never be cached.
>> always_direct allow localServers
>> cache deny localServers
>> # Windows Update Section...
>> acl windowsupdate dstdomain windowsupdate.microsoft.com
>> acl windowsupdate dstdomain .update.microsoft.com
>> acl windowsupdate dstdomain download.windowsupdate.com
>> acl windowsupdate dstdomain redir.metaservices.microsoft.com
>> acl windowsupdate dstdomain images.metaservices.microsoft.com
>> acl windowsupdate dstdomain c.microsoft.com
>> acl windowsupdate dstdomain www.download.windowsupdate.com
>> acl windowsupdate dstdomain wustat.windows.com
>> acl windowsupdate dstdomain crl.microsoft.com
>> acl windowsupdate dstdomain sls.microsoft.com
>> acl windowsupdate dstdomain productactivation.one.microsoft.com
>> acl windowsupdate dstdomain ntservicepack.microsoft.com
>> acl wuCONNECT dstdomain www.update.microsoft.com
>> acl wuCONNECT dstdomain sls.microsoft.com
>> http_access allow CONNECT wuCONNECT FcUsr
>> http_access allow CONNECT wuCONNECT PUsr
>> http_access allow CONNECT wuCONNECT RUsr
>> http_access allow CONNECT wuCONNECT localhost
>> http_access allow windowsupdate all
>> http_access allow windowsupdate localhost
>> acl workinghours time MTWHF 09:00-12:59
>> acl workinghours time MTWHF 15:00-17:00
>> acl BIP dst "/etc/squid/Blocked.conf"
>> ###Definitions for BlockingRules###
>> ###Definition of MP3/MPEG
>> acl FTP proto FTP
>> acl MP3url urlpath_regex \.mp3(\?.*)?$
>> acl Movies rep_mime_type video/mpeg
>> acl MP3s rep_mime_type audio/mpeg
>> ###Definition of Flash Video
>> acl deny_rep_mime_flashvideo rep_mime_type video/flv
>> ###Definition of Porn
>> acl S

Re: [squid-users] HTTP_Miss/200,304 Very Slow responsetime. Experts please help.

2010-03-29 Thread Amos Jeffries

GIGO . wrote:

I am using ISA server as a cache_peer parent and running multiple instances on my 
Squid server. However, I am failing to understand why the behaviour of Squid 
is extremely slow. At home, where I have direct access to the internet, the same 
setup works fine. Please can somebody help me out.
 
regards,
 
Bilal Aslam
 


First thing to check is access times on the ISA and whether the problem 
is actually Squid or something else down the software chain.


Extremely slow times are usually the result of DNS failures. Each of the 
proxies needs to do its own lookups, so any small failure will compound 
into a big delay very fast.


Your squid does its own DNS lookup on every request to figure out if 
it's part of localservers ACL or not (in both the always_direct and 
cache access controls).
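
[Editor's note] Amos's DNS point above can be acted on from squid.conf. A sketch of directives that help surface or limit DNS-related stalls; the directive names are standard Squid, but the nameserver address is a placeholder you must replace with your local resolver:

```
# Point Squid at a known-good resolver instead of relying on /etc/resolv.conf
dns_nameservers 10.0.0.53        # placeholder - use your local DNS server IP

# Fail lookups quickly rather than letting each request stall for minutes
dns_timeout 30 seconds

# Hold resolved (and failed) lookups a little longer to reduce repeat queries
positive_dns_ttl 6 hours
negative_dns_ttl 1 minutes
```

If slow responses disappear after pointing at a fast local resolver, the problem was DNS, not Squid or the ISA peer.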


Amos

 
---

My squid server has internet access by being a SecureNAT client of the ISA Server.
 
My Configuration file for first Instance:

visible_hostname squidLhr
unique_hostname squidMain
pid_filename /var/run/squid.pid
http_port 8080
icp_port 0
snmp_port 3161
access_log /var/logs/access.log squid
cache_log /var/logs/cache.log
cache_store_log /var/logs/store.log
cache_effective_user proxy 
cache_peer 127.0.0.1  parent 3128 0 default no-digest no-query
prefer_direct off 
# never_direct allow all (handy to test whether the instances are working in collaboration)


cache_dir aufs /var/spool/squid 1 16 256
coredump_dir /var/spool/squid
cache_swap_low 75
cache_replacement_policy lru
refresh_pattern ^ftp:     1440  20%  10080
refresh_pattern ^gopher:  1440  0%   1440
refresh_pattern . 0 20% 4320
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
#Define Local Network.
acl FcUsr src "/etc/squid/FcUsr.conf"
acl PUsr src "/etc/squid/PUsr.conf"
acl RUsr src "/etc/squid/RUsr.conf"
#Define Local Servers
acl localServers dst 10.0.0.0/8
#Defining & allowing ports section
acl SSL_ports port 443  #https
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager
# Deny request to unknown ports
http_access deny !Safe_ports
# Deny request to other than SSL ports
http_access deny CONNECT !SSL_ports
#Allow access from localhost
http_access allow localhost
# Local servers should never be forwarded to neighbours/peers, and they should never be cached.
always_direct allow localServers
cache deny localServers
# Windows Update Section...
acl windowsupdate dstdomain windowsupdate.microsoft.com
acl windowsupdate dstdomain .update.microsoft.com
acl windowsupdate dstdomain download.windowsupdate.com
acl windowsupdate dstdomain redir.metaservices.microsoft.com
acl windowsupdate dstdomain images.metaservices.microsoft.com
acl windowsupdate dstdomain c.microsoft.com
acl windowsupdate dstdomain www.download.windowsupdate.com
acl windowsupdate dstdomain wustat.windows.com
acl windowsupdate dstdomain crl.microsoft.com
acl windowsupdate dstdomain sls.microsoft.com
acl windowsupdate dstdomain productactivation.one.microsoft.com
acl windowsupdate dstdomain ntservicepack.microsoft.com
acl wuCONNECT dstdomain www.update.microsoft.com
acl wuCONNECT dstdomain sls.microsoft.com
http_access allow CONNECT wuCONNECT FcUsr
http_access allow CONNECT wuCONNECT PUsr
http_access allow CONNECT wuCONNECT RUsr
http_access allow CONNECT wuCONNECT localhost
http_access allow windowsupdate all
http_access allow windowsupdate localhost
acl workinghours time MTWHF 09:00-12:59
acl workinghours time MTWHF 15:00-17:00
acl BIP dst "/etc/squid/Blocked.conf"
###Definitions for BlockingRules###
###Definition of MP3/MPEG
acl FTP proto FTP
acl MP3url urlpath_regex \.mp3(\?.*)?$
acl Movies rep_mime_type video/mpeg
acl MP3s rep_mime_type audio/mpeg
###Definition of Flash Video
acl deny_rep_mime_flashvideo rep_mime_type video/flv
###Definition of  Porn
acl Sex urlpath_regex sex
acl PornSites url_regex "/etc/squid/pornlist"
###Definition of YouTube
## The videos come from several domains
acl youtube_domains dstdomain .youtube.com .googlevideo.com .ytimg.com
###Definition of FaceBook
acl facebook_sites dstdomain .facebook.com
###Definition of MSN Messenger
acl msn urlpath_regex -i gateway.dll
acl msnd dstdomain messenger.msn.com gateway.messenger.hotmail.com
acl msn1 req_mime_type application/x-msn-messenger
###Definition of Skype
acl numeric_IPs url_regex 
^(([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)|(\[

Re: [squid-users] Re: Negotiate/NTLM authentication caching

2010-03-29 Thread Amos Jeffries

Markus Moeller wrote:
I may have misunderstood what you said, but there is no caching of 
authentication for Kerberos or Basic/Digest. I think the TTL you are talking 
about is for authorisation.


Markus



Quite right.

Amos


"Khaled Blah"  wrote in message 
news:4a3250ab1003290408q72ec495an7d04934d527c3...@mail.gmail.com...

Thx a lot for your answer, Amos! You are of course right with your
concerns towards "IP/TCP caching". Not a very good idea!

Does the same hold true for Kerberos as well, though? I mean could it
be possible to cache Kerberos authentication in a secure fashion?

Thinking about what you said, I am wondering what the big difference
is to Basic/Digest authentication. I mean with them squid challenges
the user as well, the credentials the user's client sends are being
verified by the authentication helper and that result is cached so
that when the same user requests anything with the same credentials,
he or she will not be re-verified with the helper's help until the TTL
has passed, right? So what am I missing here?

Thx in advance for any insight you can give me on this!

Khaled

2010/3/28 Khaled Blah :

Thx a lot for your answer, Amos! You are of course right with your
concerns towards "IP/TCP caching". Not a very good idea!

Does the same hold true for Kerberos as well, though? I mean could it
be possible to cache Kerberos authentication in a secure fashion?

Thinking about what you said, I am wondering what the big difference
is to Basic/Digest authentication. I mean with them squid challenges
the user as well, the credentials the user's client sends are being
verified by the authentication helper and that result is cached so
that when the same user requests anything with the same credentials,
he or she will not be re-verified with the helper's help until the TTL
has passed, right? So what am I missing here?

Thx in advance for any insight you can give me on this!

Khaled

2010/3/28 Amos Jeffries :

Khaled Blah wrote:


Hi all,

I'm developing an authentication helper (Negotiate/NTLM) for squid and
I am trying to understand more how squid handles this process
internally. Most of all I'd like to know how and how long squid caches
authentication results. I have looked at the debug logs and they show
that squid seems to do "less caching" for Negotiate/NTLM than it does
for Basic/Digest authentication. I am wondering whether I can do
something about this so that a once verified user will only get his
credentials re-verified after a certain time and not all during. I am
grateful to any insight the list can give me. Thanks in advance!

Khaled


NTLM does not authenticate a user per se. It authenticates a TCP link to 
some form of account (a user being only one type). Squid holds the 
authentication credentials for as long as the authenticated TCP link is 
open. It challenges the browser on any requests without supplied 
credentials, and re-verifies on every new link opened or change in 
existing credentials.

Caching NTLM credentials for re-use on TCP links from specific IP addresses 
has always been a very risky business. As the world moves further towards 
NAT and proxy gateways, a single IP address can carry requests from multiple 
clients. This makes caching NTLM credentials an even worse prospect in the 
future than it has ever been.

What we are doing in Squid-3 now is improving the HTTP/1.1 support, which 
enables TCP links to be held open under more conditions than HTTP/1.0 
allows, and thus the interval between credential checks to be lengthened 
without losing security.

I can tell you now that any patches to do with caching credentials will be 
given some very strict checks even to be considered for acceptance into 
Squid.

Amos
--
Please be using
Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
Current Beta Squid 3.1.0.18









--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.1


Re: [squid-users] TCP MISS 502

2010-03-29 Thread Ivan .
Really? One site not working on each of the Squid boxes?

That would be very, very strange.

Ivan

On Tue, Mar 30, 2010 at 11:20 AM, Amos Jeffries  wrote:
> On Tue, 30 Mar 2010 10:50:53 +1100, "Ivan ."  wrote:
>> More odd tcp_miss
>>
>> Only had a small portion of the site which would work. Works fine from
>> the primary, but fails on the secondary squid.
>>
>
> It's at this point that I suspect the NIC or hardware, though low-level
> software such as the kernel or iptables version warrants a look as well.
>
> Amos
>


Re: [squid-users] TCP MISS 502

2010-03-29 Thread Amos Jeffries
On Tue, 30 Mar 2010 10:50:53 +1100, "Ivan ."  wrote:
> More odd tcp_miss
> 
> Only had a small portion of the site which would work. Works fine from
> the primary, but fails on the secondary squid.
> 

It's at this point that I suspect the NIC or hardware, though low-level
software such as the kernel or iptables version warrants a look as well.

Amos


Re: [squid-users] TCP MISS 502

2010-03-29 Thread Ivan .
More odd tcp_miss

Only had a small portion of the site which would work. Works fine from
the primary, but fails on the secondary squid.

1269906612.412   5464 10.xxx..xxx  TCP_MISS/000 0 GET
http://www.internode.on.net/products/ - DIRECT/203.16.214.27 -
1269906612.930 17 10.xxx..xxx TCP_MISS/200 851 GET
http://advisories.internode.on.net/images/menu2-on.gif -
DIRECT/192.231.203.146 image/gif
1269906613.075  9 10.xxx..xxx TCP_REFRESH_MODIFIED/200 782 GET
http://advisories.internode.on.net/images/menu2.gif -
DIRECT/192.231.203.146 image/gif
1269906613.331    221 10.xxx..xxx  TCP_MISS/200 819 GET
http://advisories.internode.on.net/images/menu1-on.gif -
DIRECT/192.231.203.146 image/gif
1269906614.487   1865 10.xxx..xxx  TCP_MISS/000 0 GET
http://www.internode.on.net/ - DIRECT/203.16.214.27 -
1269906696.702  60903 10.xxx..xxx  TCP_MISS/000 0 GET
http://www.internode.on.net/products/ - DIRECT/203.16.214.27 -
1269906767.709  61004 10.xxx..xxx  TCP_MISS/000 0 GET
http://www.internode.on.net/products/ - DIRECT/203.16.214.27 -
1269906840.719  60299 10.xxx..xxx  TCP_MISS/000 0 GET
http://www.internode.on.net/products/broadband/plan_changes/ -
DIRECT/203.16.214.27 -
1269906911.707  60981 10.xxx..xxx  TCP_MISS/000 0 GET
http://www.internode.on.net/products/broadband/plan_changes/ -
DIRECT/203.16.214.27 -



On Mon, Mar 29, 2010 at 5:56 PM, Ivan .  wrote:
> That is so odd, as I have two identical boxes, now running the same
> Squid version, going through the same infrastructure, yet one works and
> the other one doesn't.
>
> The only difference are the public addresses configured on each of the
> squid proxy systems.
>
> The TCP stats on the interface of the squid box that won't access that
> site don't look too bad at all:
>
> [r...@pcr-proxy ~]# netstat -s
> Ip:
>    1593488410 total packets received
>    17991 with invalid addresses
>    0 forwarded
>    0 incoming packets discarded
>    1593318874 incoming packets delivered
>    1413863445 requests sent out
>    193 reassemblies required
>    95 packets reassembled ok
> Icmp:
>    22106 ICMP messages received
>    0 input ICMP message failed.
>    ICMP input histogram:
>        destination unreachable: 16
>        echo requests: 22090
>    155761 ICMP messages sent
>    0 ICMP messages failed
>    ICMP output histogram:
>        destination unreachable: 133671
>        echo replies: 22090
> IcmpMsg:
>        InType3: 16
>        InType8: 22090
>        OutType0: 22090
>        OutType3: 133671
> Tcp:
>    27785486 active connections openings
>    78777077 passive connection openings
>    68247 failed connection attempts
>    560600 connection resets received
>    569 connections established
>    1589479495 segments received
>    1403833081 segments send out
>    6034370 segments retransmited
>    0 bad segments received.
>    626711 resets sent
> Udp:
>    3817253 packets received
>    20 packets to unknown port received.
>    0 packet receive errors
>    3840233 packets sent
> TcpExt:
>    217 invalid SYN cookies received
>    15888 resets received for embryonic SYN_RECV sockets
>    42765 packets pruned from receive queue because of socket buffer overrun
>    7282834 TCP sockets finished time wait in fast timer
>    3 active connections rejected because of time stamp
>    11427 packets rejects in established connections because of timestamp
>    8682907 delayed acks sent
>    1268 delayed acks further delayed because of locked socket
>    Quick ack mode was activated 1227980 times
>    36 packets directly queued to recvmsg prequeue.
>    14 packets directly received from prequeue
>    538829561 packets header predicted
>    492906318 acknowledgments not containing data received
>    190275750 predicted acknowledgments
>    372 times recovered from packet loss due to fast retransmit
>    348117 times recovered from packet loss due to SACK data
>    174 bad SACKs received
>    Detected reordering 71 times using FACK
>    Detected reordering 963 times using SACK
>    Detected reordering 25 times using reno fast retransmit
>    Detected reordering 998 times using time stamp
>    560 congestion windows fully recovered
>    10689 congestion windows partially recovered using Hoe heuristic
>    TCPDSACKUndo: 921
>    197231 congestion windows recovered after partial ack
>    1316789 TCP data loss events
>    TCPLostRetransmit: 22
>    3020 timeouts after reno fast retransmit
>    78970 timeouts after SACK recovery
>    10665 timeouts in loss state
>    743644 fast retransmits
>    1003156 forward retransmits
>    1884003 retransmits in slow start
>    1604549 other TCP timeouts
>    TCPRenoRecoveryFail: 150
>    31151 sack retransmits failed
>    4198383 packets collapsed in receive queue due to low socket buffer
>    814608 DSACKs sent for old packets
>    33462 DSACKs sent for out of order packets
>    65506 DSACKs received
>    266 DSACKs for out of order packets received
>    215231 connections reset due to unexpected data
>   

Re: [squid-users] reverse proxy for OWA 2010 - first issue

2010-03-29 Thread Amos Jeffries
On Mon, 29 Mar 2010 16:45:45 +0200, "Andrea Gallazzi"
 wrote:
> Hi, 
> I installed ubuntu server (latest) with squid 2.7. 
> 
> I am following this example config:
> http://wiki.squid-cache.org/ConfigExamples/Reverse/OutlookWebAccess
>  
> but at the first directive "https_port" squid returns the error
> "unrecognized"
>  
> Where is the problem?
>  
> thanks

Squid currently only uses OpenSSL for HTTPS.  Unfortunately the OpenSSL
software license and Squid's GPLv2 license cannot be distributed together,
so OS distributions are prevented from packaging an HTTPS server-enabled
version of Squid, which https_port requires.

You will have to build your own.
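
[Editor's note] A rough sketch of such a from-source build; the prefix and release tarball name are illustrative, not exact, so substitute the release you are actually using:

```
# illustrative only - fetch the source tarball for your chosen release
tar xzf squid-2.7.STABLE9.tar.gz
cd squid-2.7.STABLE9
./configure --prefix=/usr/local/squid --enable-ssl
make
make install
```

The key piece is `--enable-ssl`; without it, `https_port` is compiled out and squid reports the directive as unrecognized.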

Amos


Re: [squid-users] squid deployment

2010-03-29 Thread Amos Jeffries
On Mon, 29 Mar 2010 15:18:24 +0200, guest01  wrote:
> Hi guys,
> 
> We want to replace our current proxy solution (crappy commercial
> product which is way too expensive) and thought about Squid, which is
> a great product. I already found a couple of example configurations,
> basically for reverse proxying. What we are looking for is a caching
> and authentication (LDAP and NTLM) only solution with content
> filtering via ICAP. We have following configuration in mind (firewalls
> omitted):
> 
> Clients
>  |
>  |
>  v
> Loadbalancer
>  |
>  |
>  v
> Squid-Proxies  <>   ICAP-Server
>  |
>  |
>  v
> INTERNET
> 
> We are expecting approx. 4500 requests per second average (top 6000
> RPS) and 150Mbit/s, so I suppose we need a couple of Squids. The

Yes, around 5-7 would be my first-glance guess.
Instances, that is, not boxes: a quad-core box can run 3 Squid instances and an
8-core box can run 6 or 7.
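
[Editor's note] Running several instances on one box mostly means giving each its own port, PID file, and cache directory, much as the multi-instance config earlier in this digest does. A minimal sketch (ports and paths illustrative):

```
# squid1.conf - first instance
http_port 3128
pid_filename /var/run/squid1.pid
cache_dir aufs /var/spool/squid1 10000 16 256

# squid2.conf - second instance on the same box
http_port 3129
pid_filename /var/run/squid2.pid
cache_dir aufs /var/spool/squid2 10000 16 256
```

Each instance is then started with its own config file, e.g. `squid -f /etc/squid/squid1.conf`, and the load balancer spreads clients across the ports.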

> preferable solution would be big servers with a lot of memory and
> Squid 3.0 on a 64Bit RHEL5.
> Does anybody know any similar scenarios? Any suggestions? What are
> your experiences?

For a combo of ICAP and speed, 3.1 is what you want to be looking at.
3.0 is not really in the speed race.

> 
> The ICAP Servers are commercial ones (at least at the beginning), but
> I have following problem. I want to use multiple ICAP Servers in each
> Squid configuration with loadbalancing, unfortunately it is not
> supported and does not work in Squid 3.

Definitely 3.1 with ICAP service sets.
 http://wiki.squid-cache.org/Features/AdaptationChain
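
[Editor's note] A sketch of what an ICAP service set looks like in Squid 3.1; the service hostnames are placeholders, and note that a set gives failover among equivalent services (if the first is down, the next is tried) rather than true load balancing:

```
icap_enable on
icap_service svc_a reqmod_precache bypass=0 icap://icap1.example.com:1344/reqmod
icap_service svc_b reqmod_precache bypass=0 icap://icap2.example.com:1344/reqmod
adaptation_service_set req_filter svc_a svc_b
adaptation_access req_filter allow all
```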

Amos


RE: [squid-users] Squid 3.1.1 is available

2010-03-29 Thread Amos Jeffries
On Mon, 29 Mar 2010 15:57:52 -0600, "David Parks" 
wrote:
> Just to make sure I read this correctly - the feature for logging to a
UDP
> port is not available until 3.2 (which doesn't have a release date in
the
> near future), correct?
> 
> As of now the only option is logging to a file correct?

Correct.

Amos


Re: [squid-users] Apt-get Issue through squid

2010-03-29 Thread Amos Jeffries
On Mon, 29 Mar 2010 21:39:04 +0200, Jakob Curdes 
wrote:
> a...@gmail schrieb:
>> Hi again,
>> Sorry I forgot to mention I already have tried
>>
>> export http_proxy=http://ip_address:port
>> but no luck so far
> This has nothing to do with luck; it has to do with a problem-solving 
> strategy. Try using wget after setting the http_proxy variable. If it 
> works, proceed to solving apt-get. If it does not work, look in the 
> squid log for a relevant entry. If there is no entry corresponding to 
> your target URL, wget did not use the proxy or was blocked by a 
> firewall. To rule this out, you can use tcpdump on one or both machines 
> to check the packet flow, etc. Only if you cannot find a solution or 
> explanation after following such a strategy should you describe your 
> problem to a relevant (probably not this one) mailing list.
> 
> Regards,
> Jakob Curdes

And don't forget to use the correct syntax. A lot of the snippets you have
posted as examples of what you have done contain typos. If you make that
kind of typo in the real test it WILL fail silently.

  export http_proxy="http://ip_address:port/"

In the file /etc/profile for system wide settings.
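
[Editor's note] A minimal sketch of setting and verifying the variable before blaming the proxy; the address below is a placeholder, so substitute your Squid's real IP and port:

```shell
# placeholder proxy address - substitute your Squid's IP and port
export http_proxy="http://192.0.2.10:3128/"
export ftp_proxy="$http_proxy"

# confirm the shell really holds the value before testing apt-get
echo "$http_proxy"

# then exercise it and watch access.log on the proxy:
#   wget -O /dev/null http://www.example.com/
```

If `echo` prints the expected URL but nothing appears in Squid's access.log during the wget test, the traffic is being blocked or misrouted before it reaches the proxy.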

Amos


RE: [squid-users] WCCP and ICP

2010-03-29 Thread Michael Bowe
> -Original Message-
> From: Bradley, Stephen W. Mr. [mailto:bradl...@muohio.edu]

> How do I get the two servers to talk to each other to improve cache
> hits on=  the stream?
> (I plan on putting this into a bigger group of servers.)

Couple of things to be aware of :

My understanding is that Cisco WCCP hashes the destination IP to work out
which squid to send a request to. Thus there should be no overlap of objects
between the two (or more) squids, and no need to run ICP.

However, if you are regularly adding/removing caches, or have changed the
hashing to be based on source IP, then you probably should run ICP on your
caches.

Cache1 would have something like this
cache_peer server2.domain.com sibling 3128 3130 proxy-only

Cache2 would have something like this
cache_peer server1.domain.com sibling 3128 3130 proxy-only


Michael.



RE: [squid-users] Squid 3.1.1 is available

2010-03-29 Thread David Parks
Just to make sure I read this correctly - the feature for logging to a UDP port 
is not available until 3.2 (which doesn't have a release date in the near 
future), correct?

As of now the only option is logging to a file correct?

Thanks,
David





Re: [squid-users] error in redirector

2010-03-29 Thread Amos Jeffries
On Mon, 29 Mar 2010 18:37:29 +0530, senthilkumaar2021
 wrote:
> Hi All,
> 
> I tried to configure squirm with squid to redirect addresses.
> I am getting the following error in cache.log and am not able to browse
> any sites:
> 
> 2010/03/29 18:29:06| helperHandleRead: unexpected reply on channel -1 
> from url_rewriter #1 ''
> 
> My squid.conf look like this
> 
> url_rewrite_program /usr/local/squirm/bin/squirm
> url_rewrite_children 10
>  url_rewrite_concurrency 302
> 
> Help me in solving this issue
> 
> Regards
> senthil

http://wiki.squid-cache.org/Features/Redirectors#How_do_I_make_it_concurrent.3F
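
[Editor's note] The error above means Squid is sending concurrency channel-IDs that squirm does not echo back. Either set `url_rewrite_concurrency 0` (squirm is not concurrency-aware) or wrap the helper. A minimal sketch of the concurrent protocol, with the request field layout shown only illustratively (a channel-ID leads each request line and must be echoed back on the reply):

```shell
# sketch of a concurrency-aware rewriter: read "ID URL extras...",
# reply "ID URL". Real rewriting logic would replace the plain echo.
rewrite_loop() {
  while read id url rest; do
    echo "$id $url"      # echo the channel-ID back unchanged
  done
}

# demo: one input line shaped like a Squid rewrite request
printf '0 http://example.com/ 192.0.2.1/- - GET\n' | rewrite_loop
```

A wrapper like this can sit between Squid and squirm, stripping the ID before calling squirm and re-attaching it to the reply.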

Amos


[squid-users] Re: Sending on Group names after Kerb LDAP look-up

2010-03-29 Thread Markus Moeller

Did you try -r with squid_kerb_auth ?
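
[Editor's note] If `-r` does what Markus describes (strip the @REALM so only the short username is passed on), the relevant squid.conf lines would look something like this; the helper path is illustrative and varies by install prefix:

```
auth_param negotiate program /usr/local/squid/libexec/squid_kerb_auth -r
auth_param negotiate children 10
auth_param negotiate keep_alive on
```

The stripped username is then what `icap_client_username_header` forwards to the ICAP server.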

Markus

"Nick Cairncross"  wrote in message 
news:c7d69a71.1dc21%nick.cairncr...@condenast.co.uk...

Hi,

I just wanted to give this a bump; Is it possible to manipulate the 
(Kerberos-authenticated) username that gets sent to my ICAP server and strip 
off the @domain?


E.g. jsm...@myaddomain  becomes   jsmith

Relevant squid lines just FYI:

icap_send_client_username on
icap_client_username_header X-Authenticated-User

Access log shows my jsm...@myaddomain and I would LOVE to be able to just 
have the first part in ICAP X-Authenticated-User.


Thanks again,
Nick



On 25/03/2010 16:18, "Nick Cairncross"  
wrote:


Amos,

Thanks for your help - you are right in that the connector has the ability 
to receive and manipulate ICAP, and using an NTLM authenticated user allows 
me to do the thing I need. All was nearly lost.


However, if I change to Kerberos authentication on my Squid then the 
connector breaks because it receives the user name as a UPN. Is it possible 
to send just the first part of the authenticated user (i.e. the username) and 
not include the domain?


I read something interesting here: 
http://markmail.org/message/u3yoiykwkaykreoz about using string 
substitutions (%U, %N etc) Is this achievable with Squid? This could be the 
final piece in my puzzle...


Thanks,

Nick



On 24/03/2010 05:58, "Amos Jeffries"  wrote:

Nick Cairncross wrote:

Hi All,

Things seem to be going well with my Squid project so far; a combined
Mac/Windows AD environment using Kerberos authentication with fall
back of NTLM. I (hopefully) seem to be getting the hang of it! I've
been trying out the Kerberos LDAP look up tool and have a couple of
questions (I think the answers will be no..):

- Is it possible to wrap up the matched group name(s) in the header
as it gets sent onwards to my peer? I used to use the authentication


I don't think so.
 There is a lot of manipulation magic you can do with the ICAP or eCAP
interfaces that is not possible directly in Squid though.

The risk is breaking back-end services that can't handle the altered
header. Since you say below about already doing so, I assume this is a
non-risk for your network.


agent that came from our A/V provider. This tool ran as a service and
linked into our ISA. Once a user authenticated their group membership
was forwarded along with their username to my peer (Scansafe). The
problem is that it only does NTLM auth. It added the group
(WINNT://[group]) into the header and then a rule base at the peer
site could be set up based on group. Since I am using Kerberos I
wondered whether it's possible to send the results of the Kerb LDAP
auth? I already see the user on the peer as the Kerberos login. It
would be great if I could include the group or groups...


You can do transparent login pass-thru to the peer (login=PASS). You can
log Squid-3.1 into the peer with kerberos credentials.
 But I do not think the Kerberos details get decoded to a
username/password for Squid to pass back as a pair.



This is what I use currently: cache_peer proxy44.scansafe.net parent
8080 7 no-query no-digest no-netdb-exchange login=* (From
http://www.hutsby.net/2008/03/apple-mac-osx-squid-and-scansafe.html)

- Are there plans to integrate the lookup tool in future versions of
Squid? I've enjoyed learning about compiling but.. just wondering..



No. Plans are for all network-specific adaptation to be done via
external helper processes.  The *CAP interfaces for add-on modules allow
all the adaptation extras to be plugged in as needed in a very powerful way.
 Check that AV tool, it likely has an ICAP interface Squid-3 can plug
into already.

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18


** Please consider the environment before printing this e-mail **

The information contained in this e-mail is of a confidential nature and is 
intended only for the addressee.  If you are not the intended addressee, any 
disclosure, copying or distribution by you is prohibited and may be 
unlawful.  Disclosure to any party other than the addressee, whether 
inadvertent or otherwise, is not intended to waive privilege or 
confidentiality.  Internet communications are not secure and therefore Conde 
Nast does not accept legal responsibility for the contents of this message. 
Any views or opinions expressed are those of the author.


Company Registration details:
The Conde Nast Publications Ltd
Vogue House
Hanover Square
London W1S 1JU

Registered in London No. 226900



Re: [squid-users] Squid loops on epoll/read/gettimeofday

2010-03-29 Thread Amos Jeffries
On Mon, 29 Mar 2010 13:58:58 -0300, Marcus Kool
 wrote:
> I use squid
> Squid Cache: Version 3.0.STABLE20
> configure options:  '--prefix=/local/squid' '--with-default-user=squid' 
> '--with-filedescriptors=2400' '--enable-icap-client'
> '--enable-storeio=aufs,ufs,null' 
> '--with-pthreads' '--enable-async-io=8' '--enable-removal-policies=lru' 
> '--enable-default-err-language=English' '--enable-err-languages=Dutch
> English Portuguese' 
> '--enable-ssl' '--enable-cachemgr-hostname=localhost'
> '--enable-cache-digests' 
> '--enable-follow-x-forwarded-for' '--enable-forw-via-db'
> '--enable-xmalloc-statistics' 
> '--disable-hostname-checks' '--enable-epoll' '--enable-useragent-log'
> '--enable-referer-log' 
> '--enable-underscores' 'CC=gcc' 'CFLAGS=-O2 -m32' 'CXXFLAGS=-O2 -m32'
>

> 
> Note that FD 27 and FD 28 have the same NODE.
> This pipe is used for what ???

That should be detailed in your cache.log when running at debug level ALL,1
(notices).
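
[Editor's note] That debug level is set with the standard `debug_options` directive in squid.conf:

```
# section ALL at verbosity 1 (notices); raise the number for more detail
debug_options ALL,1
```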

> 
> The EAGAIN return code from read() is strange.  It suggests that the
> read() could return data soon, but Squid has now been looping for over 4
> hours.
> 
> Hopefully one of the developers knows what the pipe is used for and
> can do a guess what is causing the EAGAIN return code.

One of the helpers. Can't tell from the trace.

Please also see if this is reproduced with the 3.1 release.

Amos


[squid-users] Re: Negotiate/NTLM authentication caching

2010-03-29 Thread Markus Moeller
I may have misunderstood what you said, but there is no caching of 
authentication for Kerberos, nor for Basic/Digest. I think the TTL you are 
talking about is for authorisation.
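For context, the authorisation TTL being referred to is typically the result-cache lifetime configured on an external ACL helper. A hypothetical squid.conf sketch (the helper path, group name, and TTL values below are illustrative, not taken from this thread):

```
# Cache group-lookup (authorisation) results for 5 minutes,
# negative results for 1 minute; values are illustrative.
external_acl_type krb_group ttl=300 negative_ttl=60 %LOGIN /usr/lib/squid/squid_kerb_ldap -g internetusers
acl AllowedGroup external krb_group
http_access allow AllowedGroup
```

Here squid caches the helper's allow/deny answer per login for the ttl period; the user's authentication itself is not cached.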


Markus

"Khaled Blah"  wrote in message 
news:4a3250ab1003290408q72ec495an7d04934d527c3...@mail.gmail.com...

Thx a lot for your answer, Amos! You are of course right with your
concerns towards "IP/TCP caching". Not a very good idea!

Does the same hold true for Kerberos as well, though? I mean could it
be possible to cache Kerberos authentication in a secure fashion?

Thinking about what you said, I am wondering what the big difference
is to Basic/Digest authentication. I mean with them squid challenges
the user as well, the credentials the user's client sends are being
verified by the authentication helper and that result is cached so
that when the same user requests anything with the same credentials,
he or she will not be re-verified with the helper's help until the TTL
has passed, right? So what am I missing here?

Thx in advance for any insight you can give me on this!

Khaled


2010/3/28 Amos Jeffries :

Khaled Blah wrote:


Hi all,

I'm developing an authentication helper (Negotiate/NTLM) for squid and
I am trying to understand more how squid handles this process
internally. Most of all I'd like to know how and how long squid caches
authentication results. I have looked at the debug logs and they show
that squid seems to do "less caching" for Negotiate/NTLM than it does
for Basic/Digest authentication. I am wondering whether I can do
something about this so that a once verified user will only get his
credentials re-verified after a certain time and not all during. I am
grateful to any insight the list can give me. Thanks in advance!

Khaled


NTLM does not authenticate a user per se. It authenticates a TCP link to
some form of account (a user being only one type). Squid holds the
authentication credentials for as long as the authenticated TCP link is
open. It challenges the browser on any request without supplied
credentials, and re-verifies on every new link opened or any change in
existing credentials.

Caching NTLM credentials for re-use on TCP links from specific IP addresses
has always been a very risky business. As the world moves further
towards NAT and proxy gateways, a single IP address can carry
requests from multiple clients. This makes caching NTLM credentials an even
worse prospect in the future than it has ever been.

What we are doing in Squid-3 now is improving the HTTP/1.1 support, which
allows TCP links to be held open under more conditions than HTTP/1.0 permits,
and thus lengthens the time between credential checks
without losing security.

I can tell you now that any patches to do with caching credentials will be
given some very strict checks even to be considered for acceptance into
Squid.

Amos
--
Please be using
Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
Current Beta Squid 3.1.0.18








Re: [squid-users] Squid 3.1.1 is available

2010-03-29 Thread Amos Jeffries
On Mon, 29 Mar 2010 20:45:57 +0800, Jeff Peng 
wrote:
> On Mon, Mar 29, 2010 at 7:45 PM, Amos Jeffries 
> wrote:
>> The Squid HTTP Proxy team is very pleased to announce the
>> availability of the Squid-3.1.1 release!
>>
>>
>> This is the first release of the Squid-3.1 series which has passed our
>> criteria for use in production environments.
>>
> 
> That's very nice.
> In fact I have tested squid-3.1 on an ubuntu server, and it has been
> running fine for many days.

Yeah, it's been running fine in production for some time on several OSes. We
just needed a period of 14 consecutive days with no new unresolved bugs
found.

Amos



Re: [squid-users] Apt-get Issue through squid

2010-03-29 Thread Jakob Curdes

a...@gmail schrieb:

Hi again,
Sorry, I forgot to mention that I have already tried

export http_proxy=http://ip_address:port

but no luck so far.
This has nothing to do with luck. It has to do with a problem-solving 
strategy. Try using wget after setting the http_proxy variable. If it 
works, proceed to solving apt-get. If it does not work, look in the 
squid log to see whether there is a relevant entry. If there is no entry 
corresponding to your target URL, wget did not use the proxy or was 
blocked by a firewall. To rule this out, you can use tcpdump on one or 
both machines to check the packet flow, and so on. Only if you cannot find 
a solution or explanation after following such a strategy should you 
describe your problem to a relevant (probably not this one) mailing list.


Regards,
Jakob Curdes


Re: [squid-users] Apt-get Issue through squid

2010-03-29 Thread a...@gmail

Hi again,
Sorry, I forgot to mention that I have already tried

export http_proxy=http://ip_address:port

but no luck so far.
Regards
Adam

- Original Message - 
From: "Leonardo Carneiro - Veltrac" 

To: 
Sent: Monday, March 29, 2010 7:48 PM
Subject: Re: [squid-users] Apt-get Issue through squid


Also, you can educate your users so they know that your network has a 
proxy and that setting up the proxy in their applications is a necessary 
step to get things working. A proxy is not an out-of-this-world thing 
nowadays, and most users (on an enterprise network, at least) will be able 
to understand this.










Re: [squid-users] Apt-get Issue through squid

2010-03-29 Thread Jakob Curdes

a...@gmail schrieb:

Hi Again,
I do appreciate that, but some people are very restricted time wise
The way it looks I could easily spend a whole year tweaking it before 
I could get everything working or maybe more :-)
Most people on this mailing list are very restricted time-wise. This 
is why we are glad if people try to look into log files before asking a 
question on the list.



JC


Re: [squid-users] Apt-get Issue through squid

2010-03-29 Thread a...@gmail

Hi Again,
I do appreciate that, but some people are very restricted time-wise.
The way it looks, I could easily spend a whole year, or maybe more, 
tweaking it before I could get everything working :-)


Anyway, Thank you all for your suggestions and help
Regards
Adam
- Original Message - 
From: "Leonardo Carneiro - Veltrac" 

To: 
Sent: Monday, March 29, 2010 7:48 PM
Subject: Re: [squid-users] Apt-get Issue through squid


Also, you can educate your users so they know that your network has a 
proxy and that setting up the proxy in their applications is a necessary 
step to get things working. A proxy is not an out-of-this-world thing 
nowadays, and most users (on an enterprise network, at least) will be able 
to understand this.










Re: [squid-users] Apt-get Issue through squid

2010-03-29 Thread Jakob Curdes

a...@gmail schrieb:

Hi there,
Thanks for your reply, I was merely asking if anyone has or had the 
same problem before, or anyone who might have a solution, of course
and you got a quick answer... the answering time here might beat most 
expensive support services for commercial software. :-))
If I stop squid now and disable it reconfigure my system to what it 
was before of course I will get the updates and the access to the 
internet
but now any application or programme I want to run I have to find out 
where it is where it's going etc..


It looks as if I need to tweak for every single task,. of every single 
application of every single client.


Yes I have followed the configuration where the whole internet goes 
through a proxy, when faced with a problem like this can you
imagine  how many programmes and apps are there? If I have to tweak 
each and everyone of them by hand and how many clients I have and so on

So I can spend the rest of my life fixing things.
Ah, well, in effect the http_proxy variable is not special to 
apt-get; almost all (I don't recall an exception) unix command-line 
programs across distributions obey this variable, as do many 
programming-language libraries. Examples are yum, wget, java...
So if you set these two variables (http_proxy and ftp_proxy) in a general 
shell option file, you should be done for most cases.
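For instance (the proxy address below is a hypothetical example; substitute your own squid host and port):

```shell
# In e.g. /etc/profile or ~/.bashrc (proxy address is hypothetical):
export http_proxy="http://192.168.1.1:3128"
export ftp_proxy="http://192.168.1.1:3128"

# apt-get can also be configured directly, independent of the shell,
# with a line like this in /etc/apt/apt.conf:
#   Acquire::http::Proxy "http://192.168.1.1:3128";

# Confirm the variable is set in the current shell:
echo "$http_proxy"
```

Note that exported variables only affect the shell that sets them and its children, which is why a login-wide option file is the convenient place for them.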


But there is also a different solution to this problem. You can use 
squid to do transparent proxying, that is, intercept outgoing requests 
to port 80 (instead of 8080, 3128, etc.) and redirect them to the proxy, 
which then contacts the origin servers. With such a setup, all 
applications work without proxy settings, as they never know they are 
talking to a proxy. Instructions for setup can be found in the squid wiki 
and in many howtos out there.
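A minimal sketch of such an interception setup, assuming squid runs on the router itself with clients behind eth1 (the interface name and ports are assumptions; see the squid wiki for the authoritative steps):

```shell
# In squid.conf, mark the listening port as intercepting
# (Squid 3.1 spells this "intercept" instead of "transparent"):
#
#   http_port 3128 transparent
#
# On the router, redirect clients' outgoing port-80 traffic to that
# port (run as root):
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 \
         -j REDIRECT --to-port 3128
```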


But beware! This setup has other disadvantages. Before deploying it - 
in fact, before deploying any proxy setup in a production 
environment - you should test it thoroughly in an environment where 
failures are not critical.


Regards,
Jakob Curdes


Re: [squid-users] Apt-get Issue through squid

2010-03-29 Thread Leonardo Carneiro - Veltrac
Also, you can educate your users so they know that your network has a 
proxy and that setting up the proxy in their applications is a necessary 
step to get things working. A proxy is not an out-of-this-world thing 
nowadays, and most users (on an enterprise network, at least) will be able 
to understand this.








Re: [squid-users] Apt-get Issue through squid

2010-03-29 Thread Leonardo Carneiro - Veltrac
Welcome to the internet. If you don't want to configure every single 
app, you can use WPAD [1] or Proxy Interception [2].


[1] http://wiki.squid-cache.org/Technology/WPAD
[2] http://wiki.squid-cache.org/SquidFaq/InterceptionProxy
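A WPAD deployment ultimately serves clients a small PAC script that tells them which proxy to use. A minimal sketch (the proxy address and hostnames are hypothetical):

```javascript
// Minimal PAC script as served by WPAD (e.g. as wpad.dat).
// The proxy address below is hypothetical; adjust for your network.
function FindProxyForURL(url, host) {
  // Let traffic to the local machine bypass the proxy.
  if (host === "localhost" || host === "127.0.0.1") {
    return "DIRECT";
  }
  // Everything else goes through the proxy, falling back to direct.
  return "PROXY 192.168.1.1:3128; DIRECT";
}
```

Browsers that support WPAD discover this file via DHCP or the well-known "wpad" hostname and evaluate FindProxyForURL for each request, so no per-application configuration is needed.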







Re: [squid-users] Apt-get Issue through squid

2010-03-29 Thread a...@gmail

Hi there,
Thanks for your reply. I was merely asking if anyone has or had the same 
problem before, or anyone who might have a solution, of course.
If I stop squid now, disable it, and reconfigure my system to what it was 
before, of course I will get the updates and the access to the internet. 
But now, for any application or programme I want to run, I have to find out 
where it is, where it's going, etc.


It looks as if I need to tweak every single task of every single 
application of every single client.


Yes, I have followed the configuration where the whole internet goes through 
a proxy. When faced with a problem like this, can you imagine how many 
programmes and apps there are? If I have to tweak each and every one of 
them by hand... and think of how many clients I have, and so on.

So I can spend the rest of my life fixing things.

Anyway, thanks for your reply.
Regards
Adam




Re: [squid-users] Apt-get Issue through squid

2010-03-29 Thread Jakob Curdes

a...@gmail schrieb:

Hello Everybody!

I have a question if you don't mind or if anyone has a solution to this

I am trying to download some packages with apt-get on one of my Ubuntu 
clients
All of the links fail, which means they are blocked by Squid, When I 
try the same thing
on the Squid machine itself which is also the router I get all the 
updates

Please do not jump to conclusions without having checked the facts.
"All of the links fail, which means they are blocked by Squid" is the 
least likely cause.
You can verify that easily by looking at the squid access log, without 
taking the detour via the mailing list.


My assumptions are:
- The firewall on the router allows direct internet access
- so it is clear that apt-get on the firewall can get the updates 
[without using squid at all]
- apt-get, being a unix-style command-line tool, does not know or 
respect the browser settings for proxies
- you did not set an http_proxy/ftp_proxy variable in the shell calling 
apt-get, nor did you configure a proxy in apt.conf
- as you do not allow direct internet access (or maybe even do not have 
a gateway set on the client, which would be perfectly OK), apt-get tries 
to resolve the name (which may succeed depending on setup) and then tries 
to download from the origin server (which you prohibit, so it fails too).

It is very unlikely with any squid configuration near the defaults (e.g. 
without authentication or complex header manipulation)
that the proxy blocks requests from a particular machine depending on 
the "browser" used.


Conclusion: 99% not a squid issue. You might ask on the ubuntu mailing 
lists for help if Google does not give you enough explanation of how to use 
apt-get with a proxy.


HTH,
Jakob Curdes


[squid-users] Squid loops on epoll/read/gettimeofday

2010-03-29 Thread Marcus Kool

I use squid
Squid Cache: Version 3.0.STABLE20
configure options:  '--prefix=/local/squid' '--with-default-user=squid' 
'--with-filedescriptors=2400' '--enable-icap-client' '--enable-storeio=aufs,ufs,null' 
'--with-pthreads' '--enable-async-io=8' '--enable-removal-policies=lru' 
'--enable-default-err-language=English' '--enable-err-languages=Dutch English Portuguese' 
'--enable-ssl' '--enable-cachemgr-hostname=localhost' '--enable-cache-digests' 
'--enable-follow-x-forwarded-for' '--enable-forw-via-db' '--enable-xmalloc-statistics' 
'--disable-hostname-checks' '--enable-epoll' '--enable-useragent-log' '--enable-referer-log' 
'--enable-underscores' 'CC=gcc' 'CFLAGS=-O2 -m32' 'CXXFLAGS=-O2 -m32'


strace shows this:
 0.29 gettimeofday({1269878848, 223018}, NULL) = 0
 0.33 epoll_wait(6, {{EPOLLIN, {u32=23, u64=8800387989527}}}, 2400, 10) 
= 1
 0.32 gettimeofday({1269878848, 223083}, NULL) = 0
 0.31 read(27, 0xffd3de98, 256) = -1 EAGAIN (Resource temporarily 
unavailable)
 0.30 gettimeofday({1269878848, 223143}, NULL) = 0
 0.33 epoll_wait(6, {{EPOLLIN, {u32=23, u64=8800387989527}}}, 2400, 10) 
= 1
 0.31 gettimeofday({1269878848, 223208}, NULL) = 0
 0.32 read(27, 0xffd3de98, 256) = -1 EAGAIN (Resource temporarily 
unavailable)
 0.29 gettimeofday({1269878848, 223269}, NULL) = 0
 0.33 epoll_wait(6, {{EPOLLIN, {u32=23, u64=8800387989527}}}, 2400, 10) 
= 1
 0.32 gettimeofday({1269878848, 223334}, NULL) = 0
 ...
So Squid loops on epoll/read/gettimeofday on FD 27.
Squid works! I can continue using it but it uses 100% CPU.

lsof shows that FD 27 is a pipe:
[r...@srv004 fd]# lsof -c squid
COMMAND   PID  USER   FD   TYPE DEVICE SIZENODE NAME
squid   13663  root  cwdDIR9,2 4096   65089 /root
squid   13663  root  rtdDIR9,2 4096   2 /
squid   13663  root  txtREG9,3  1880413 4718713 
/local/squid/sbin/squid
squid   13663  root  memREG9,2   125736 1431984 
/lib/ld-2.5.so
squid   13663  root  memREG9,2  1611564 1432001 
/lib/libc-2.5.so
squid   13663  root  memREG9,276400  716063 
/lib/libresolv-2.5.so
squid   13663  root  memREG9,275028  867107 
/usr/lib/libz.so.1.2.3
squid   13663  root  memREG9,2   129716 1432048 
/lib/libpthread-2.5.so
squid   13663  root  memREG9,2   101404  716061 
/lib/libnsl-2.5.so
squid   13663  root  memREG9,245288  716062 
/lib/libcrypt-2.5.so
squid   13663  root  memREG9,2   936908  867168 
/usr/lib/libstdc++.so.6.0.8
squid   13663  root  memREG9,2   217016 3289491 
/var/db/nscd/group
squid   13663  root  memREG9,2   217016 3289470 
/var/db/nscd/passwd
squid   13663  root  memREG9,2   243928 3905305 
/lib/libsepol.so.1
squid   13663  root  memREG9,291892 1431943 
/lib/libselinux.so.1
squid   13663  root  memREG9,2 6404 1432021 
/lib/libkeyutils-1.2.so
squid   13663  root  memREG9,232024  859697 
/usr/lib/libkrb5support.so.0.1

squid   13663  root  memREG9,216428 1433428 
/lib/libdl-2.5.so
squid   13663  root  memREG9,2   155608  859112 
/usr/lib/libk5crypto.so.3.1
squid   13663  root  memREG9,2 6300 1433405 
/lib/libcom_err.so.2.1
squid   13663  root  memREG9,2   609068  854936 
/usr/lib/libkrb5.so.3.3
squid   13663  root  memREG9,2   184812  855987 
/usr/lib/libgssapi_krb5.so.2.2
squid   13663  root  memREG9,246476 4035468 
/lib/libgcc_s-4.1.2-20080825.so.1

squid   13663  root  memREG9,2   208352 1433430 
/lib/libm-2.5.so
squid   13663  root  memREG9,2  1295424 1431972 
/lib/libcrypto.so.0.9.8e
squid   13663  root  memREG9,2   291236 1433501 
/lib/libssl.so.0.9.8e
squid   13663  root0u   CHR1,3 1736 /dev/null
squid   13663  root1u   CHR1,3 1736 /dev/null
squid   13663  root2u   CHR1,3 1736 /dev/null
squid   13663  root3u   REG9,3  3390696 4718891 
/local/squid/logs/cache.log
squid   13663  root4u   CHR1,3 1736 /dev/null
squid   13663  root5u  unix 0x810002e816c0  1723398 socket
squid   13665 squid  cwdDIR9,2 4096   65089 /root
squid   13665 squid  rtdDIR9,2 4096   2 /
squid   13665 squid  txtREG9,3  1880413 4718713 
/local/squid/sbin/squid
squid   13665 squid  memREG9,2   125736 1431984 
/lib/ld-2.5.so
squid   13665 squid  memREG9,2  1611564 1432001 
/lib/li

Re: [squid-users] Sending on Group names after Kerb LDAP look-up

2010-03-29 Thread Nick Cairncross
Hi,

I just wanted to give this a bump: is it possible to manipulate the 
(Kerberos-authenticated) username that gets sent to my ICAP server and strip 
off the @domain?

E.g. jsm...@myaddomain  becomes   jsmith

Relevant squid lines just FYI:

icap_send_client_username on
icap_client_username_header X-Authenticated-User

The access log shows my jsm...@myaddomain, and I would LOVE to have just 
the first part in the ICAP X-Authenticated-User header.

Thanks again,
Nick



On 25/03/2010 16:18, "Nick Cairncross"  wrote:

Amos,

Thanks for your help - you are right in that the connector has the ability to 
receive and manipulate ICAP, and using an NTLM authenticated user allows me to 
do the thing I need. All was nearly lost.

However, if I change to Kerberos authentication on my Squid then the connector 
breaks, because it receives the user name as a UPN. Is it possible to send just 
the first part of the authenticated user (i.e. the username) and not include the 
domain?

I read something interesting here: http://markmail.org/message/u3yoiykwkaykreoz 
about using string substitutions (%U, %N, etc.). Is this achievable with Squid? 
This could be the final piece in my puzzle...

Thanks,

Nick



On 24/03/2010 05:58, "Amos Jeffries"  wrote:

Nick Cairncross wrote:
> Hi All,
>
> Things seem to be going well with my Squid project so far; a combined
> Mac/Windows AD environment using Kerberos authentication with fall
> back of NTLM. I (hopefully) seem to be getting the hang of it! I've
> been trying out the Kerberos LDAP look up tool and have a couple of
> questions (I think the answers will be no..):
>
> - Is it possible to wrap up the matched group name(s) in the header
> as it gets sent onwards to my peer? I used to use the authentication

I don't think so.
  There is a lot of manipulation magic you can do with the ICAP or eCAP
interfaces that is not possible directly in Squid though.

The risk is breaking back-end services that can't handle the altered
header. Since you say below that you are already doing so, I assume this is a
non-risk for your network.

> agent that came from our A/V provider. This tool ran as a service and
> linked into our ISA. Once a user authenticated their group membership
> was forwarded along with their username to my peer (Scansafe). The
> problem is that it only does NTLM auth. It added the group
> (WINNT://[group]) into the header and then a rule base at the peer
> site could be set up based on group. Since I am using Kerberos I
> wondered whether it's possible to send the results of the Kerb LDAP
> auth? I already see the user on the peer as the Kerberos login. It
> would be great if I could include the group or groups...

You can do transparent login pass-thru to the peer (login=PASS). You can
log Squid-3.1 into the peer with kerberos credentials.
  But I do not think the Kerberos details get decoded to a
username/password for Squid to pass back as a pair.

>
> This is what I use currently: cache_peer proxy44.scansafe.net parent
> 8080 7 no-query no-digest no-netdb-exchange login=* (From
> http://www.hutsby.net/2008/03/apple-mac-osx-squid-and-scansafe.html)
>
> - Are there plans to integrate the lookup tool in future versions of
> Squid? I've enjoyed learning about compiling but.. just wondering..
>

No. Plans are for all network-specific adaptation to be done via
external helper processes.  The *CAP interfaces for add-on modules allow
all the adaptation extras to be plugged in as needed in a very powerful way.
  Check that AV tool, it likely has an ICAP interface Squid-3 can plug
into already.

Amos
--
Please be using
   Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
   Current Beta Squid 3.1.0.18


** Please consider the environment before printing this e-mail **

The information contained in this e-mail is of a confidential nature and is 
intended only for the addressee.  If you are not the intended addressee, any 
disclosure, copying or distribution by you is prohibited and may be unlawful.  
Disclosure to any party other than the addressee, whether inadvertent or 
otherwise, is not intended to waive privilege or confidentiality.  Internet 
communications are not secure and therefore Conde Nast does not accept legal 
responsibility for the contents of this message.  Any views or opinions 
expressed are those of the author.

Company Registration details:
The Conde Nast Publications Ltd
Vogue House
Hanover Square
London W1S 1JU

Registered in London No. 226900



[squid-users] Apt-get Issue through squid

2010-03-29 Thread a...@gmail

Hello Everybody!

I have a question, if you don't mind, or if anyone has a solution to this.

I am trying to download some packages with apt-get on one of my Ubuntu 
clients. All of the links fail, which suggests they are blocked by Squid. When 
I try the same thing on the Squid machine itself, which is also the router, I 
get all the updates.

Any idea on how to fix this?

Thanking you all in advance
Regards
Adam
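
[Editor's note: one common cause, assuming the clients have no proxy 
configured, is that apt tries to reach the mirrors directly. Pointing apt at 
the Squid box is done in /etc/apt/apt.conf; the address and port below are 
illustrative:]

Acquire::http::Proxy "http://192.168.0.1:3128/";

After adding that line, `apt-get update` on the client should show requests 
arriving in Squid's access.log.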



[squid-users] WCCP and ICP

2010-03-29 Thread Bradley, Stephen W. Mr.
Okay, I have been fighting to get WCCP and ICP working in a two-server test 
cluster for weeks now with no luck.

The WCCP was no big deal, but I think that with just WCCP you only get a 
router to send the HTTP requests to each server. I do not think that if one 
server doesn't have the request the router checks the other server. What that 
would mean is that with just WCCP and two servers we only get half as many 
potential HITs as we could if both servers were searched for each request.

That being said, here are my configs from the two test servers. Like I said, 
WCCP works fine.

How do I get the two servers to talk to each other to improve cache hits on 
the stream?
(I plan on putting this into a bigger group of servers.)

Thanks in advance.
Steve


Both compiled with:

CFLAGS="-DNUMTHREADS=128" ./configure  --prefix=/usr --includedir=/usr/include 
--datadir=/usr/share --bindir=/usr/sbin --libexecdir=/usr/lib/squid 
--localstatedir=/var --sysconfdir=/etc/squid --enable-wccpv2 
--enable-linux-netfilter --enable-default-err-language=English 
--enable-err-languages=English --enable-async-io 
--enable-removal-policies=lru,heap --disable-auth --disable-ident-lookups 
--enable-storeio="aufs" --enable-cache-digests --enable-icmp --with-maxfd=65536 
--enable-poll

Server1

acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl localnet src 10.0.0.0/8 # RFC 1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC 1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC 1918 possible internal network
acl localnet src X.X.X.X/16
icp_access allow all
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access allow all localhost
http_access deny all
http_port 3128 transparent
hierarchy_stoplist cgi-bin ?
cache_effective_user squid
cache_effective_group squid
cache_store_log none
cache_dir aufs /opt/cache/squid 16000 16 256
access_log /opt/cache/squid/log/access.log squid
cache_log /opt/cache/squid/log/cache.log
acl QUERY urlpath_regex cgi-bin \?
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320
half_closed_clients off
maximum_object_size 32768 KB
cache_swap_high 100%
cache_swap_low 80%
cache deny QUERY
cache_mem 2048 MB
wccp2_router X.X.X.X
wccp_version 2
wccp2_forwarding_method 1
wccp2_return_method 1
wccp2_service standard 0 password=
coredump_dir /var/spool/squid
background_ping_rate 10
dead_peer_timeout 10
cache_peer server2.domain.com parent 3128 3130 no-digest

Server2

acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl localnet src 10.0.0.0/8 # RFC 1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC 1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC 1918 possible internal network
acl localnet src X.X.X.X/16
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
icp_access allow all
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access allow all localhost
http_access deny all
http_port 3128 transparent
hierarchy_stoplist cgi-bin ?
cache_effective_user squid
cache_effective_group squid
cache_store_log none
cache_dir aufs /opt/cache/squid 16000 16 256
access_log /opt/cache/squid/log/access.log squid
cache_log /opt/cache/squid/log/cache.log
# cache_store_log /var/log/squid/store.log
acl QUERY urlpath_regex cgi-bin \?
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320
half_closed_clients off
maximum_object_size  32768 KB
cache_swap_high 100%
cache_swap_low 80%
cache deny QUERY
cache_mem 2048 MB
wccp2_router X.X.X.X
wccp_version 2
wccp2_forwarding_method 1
wccp2_return_method 1
wcc
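
[Editor's note: both configs above declare the other box as a *parent*. For 
two equal caches checking each other, the usual arrangement is a *sibling* 
relationship over ICP (HTTP port 3128, ICP port 3130). A sketch, with 
hostnames illustrative; proxy-only stops each box from storing a second copy 
of its neighbour's objects:]

# on server1
cache_peer server2.domain.com sibling 3128 3130 proxy-only no-digest
# on server2
cache_peer server1.domain.com sibling 3128 3130 proxy-only no-digest

With this, a MISS on one server triggers an ICP query to the other before 
going direct, which is what recovers the "lost" hits the poster describes.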

Re: [squid-users] Reverse Proxy and OWA

2010-03-29 Thread Ludovit Koren
> Amos Jeffries  writes:

> Andrea Gallazzi wrote:
>> Hi All, I am a newbie about squid.
>> 
>> I am interested about squid as reverse proxy for Outlook Web
>> App and Activesync for Exchange 2010
>> 
>> Did Someone have experience about this?

> Yes.

>> Is it possible to use at the same time squid as proxy and
>> reverse proxy ?

> Yes.

> See FAQ.  http://wiki.squid-cache.org/ConfigExamples

> Amos -- 
> Please be using Current Stable Squid 2.7.STABLE8 or
> 3.0.STABLE25 Current Beta Squid 3.1.0.18

I succeeded in configuring it for Web App. Autodiscover and ActiveSync are
working only with basic authentication. If we configure NTLM
authentication on the Exchange server/ISA, it stops working. Do you know
what could be wrong? (I can supply the configuration.) I am using squid
2.7.STABLE8 on FreeBSD.

Any hint greatly appreciated.

Regards,

lk



Re: [squid-users] WebFilter by ip

2010-03-29 Thread Mike Rambo

Landy Landy wrote:



I have a small network at an elementary school where I have two labs: one should 
have access to the internet and one shouldn't. I'm currently doing this. Now, I 
also have teachers and others who will be accessing the web as well. I would 
like to allow teachers and others full access to the internet, while the allowed 
students (the other lab) would be restricted to certain pages; that's where 
squidGuard comes in.

Since, I'm already doing:

acl localnet src 172.16.0.0/16
acl proxy src 172.16.0.1
acl allowed src "/etc/msd/ipAllowed"

acl CONNECT method CONNECT

http_access allow proxy
http_access allow localhost

# Block some sites

acl blockanalysis01 dstdomain .scorecardresearch.com .google-analytics.com
acl blockads01  dstdomain .rad.msn.com ads1.msn.com ads2.msn.com 
ads3.msn.com ads4.msn.com
acl blockads02  dstdomain .adserver.yahoo.com 
pagead2.googlesyndication.com ad.yieldmanager.com
acl blockads03  dstdomain .doubleclick.net
http_access deny blockanalysis01
http_access deny blockads01
http_access deny blockads02
http_access deny blockads03

http_access allow allowed
http_access deny all



I don't see how I can take an ip address from ipAllowed to do content 
filtering. This is where I'm stuck.



It sounds like you are missing the concept that squidGuard is a separate 
process with a separate set of rules from that of squid. SG will act on 
whatever squid redirects to it.


You have rules (above) that permit only a subset of your total user base 
access to the web as determined by whether they are allowed access to 
the proxy at all.


squidGuard works as a squid redirector (see url_rewrite_program in 
squid.conf) on top of this. With this enabled, all web traffic permitted 
access to the proxy (in your case defined by "http_access allow 
allowed") will also be redirected to SG and be filtered according to 
whatever rules you set up there. Within SG you can allow or disallow 
based upon network segment, individual IP address, userid if you set up 
authentication, time of day, destination url on the web and other 
parameters.


IOW, you "take an ip address from ipAllowed to do content filtering" by 
virtue of the fact that the client in ipAllowed has already been 
permitted access to the proxy and, with the redirector enabled, will now 
also be processed according to the rules set up in the redirect 
(url_rewrite) program.
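
[Editor's note: a minimal sketch of that hookup, with all paths and list 
names illustrative. squid.conf gains a redirector line, and squidGuard.conf 
reuses the same IP file as a source group:]

# squid.conf
url_rewrite_program /usr/local/bin/squidGuard -c /etc/squidguard/squidGuard.conf

# squidGuard.conf
src allowedlab {
    iplist /etc/msd/ipAllowed
}
dest restricted {
    domainlist restricted/domains
}
acl {
    allowedlab {
        pass !restricted all
    }
    default {
        pass all
    }
}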


HTH.


--
Mike Rambo


NOTE: In order to control energy costs the light at the end
of the tunnel has been shut off until further notice...


[squid-users] reverse proxy for OWA 2010 - first issue

2010-03-29 Thread Andrea Gallazzi
Hi, 
I installed ubuntu server (latest) with squid 2.7. 


I am following this example config:
http://wiki.squid-cache.org/ConfigExamples/Reverse/OutlookWebAccess

but on the first directive, "https_port", squid returns an "unrecognized" error.

Where is the problem?

thanks 


Re: [squid-users] Does squid redirector work for https requests

2010-03-29 Thread Priyadarsan Roy
Dear Amos,


> Squid may send the URL it gets in the HTTPS tunneling request. Which 
> consists only of a server name and port. The URL re-writer can then do 
> what it pleases.
> 
> However it should be noted that if Squid alters the destination server 
> the browser is expecting to connect to very bad things might follow. 
> There is no guarantee the HTTPS transfer will succeed.

Thanks for the tip. The port part was causing a problem with our in
house written python redirector. 

Regards,
P Roy


-- 
Netzary InfoDynamics
"Making IT to Work for You"

website : http://www.netzary.com
hand Phone  : +91 8088503811
telephone   : +91 80 41738665
fax : +91 80 22075212
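
[Editor's note: Amos's point about CONNECT can be sketched as a tiny helper. 
For HTTPS tunnels the "URL" field is just host:port, with no scheme, so the 
rewriter must not assume a full URL. The hostnames and rewrite rule below are 
purely illustrative, not the poster's actual redirector:]

```python
#!/usr/bin/env python
# Minimal sketch of a Squid url_rewrite helper that copes with CONNECT
# requests. Squid feeds one request per line on stdin ("URL ip/fqdn
# ident method"); the helper writes back the (possibly rewritten) URL.
import sys

def rewrite(url):
    # CONNECT tunnels arrive as "host:port", not "scheme://host/path".
    if "://" not in url:
        host, _, port = url.partition(":")
        if host == "blocked.example.com":   # illustrative rule only
            return "allowed.example.com:%s" % (port or "443")
        return url  # leave other tunnels untouched
    return url      # plain http:// URLs pass through unchanged

def main():
    for line in sys.stdin:
        parts = line.split()
        if not parts:
            continue
        sys.stdout.write(rewrite(parts[0]) + "\n")
        sys.stdout.flush()  # Squid expects an immediate, unbuffered reply

if __name__ == "__main__":
    main()
```

As Amos warns, rewriting the destination of a tunnel is only safe when the 
browser does not verify it is talking to the host it asked for.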



[squid-users] Web html documents taking long period to load using IE7

2010-03-29 Thread Don
Hello all, we are having some issues with web documents opening
extremely slowly, up to 5-6 minutes per link, and I would like to query
the squid community for help.  Our users are opening web documents
for viewing and complaining of extremely slow speeds.  I have
investigated and confirmed the results when using IE7, and only IE7.  I
suspect something to do with the no-cache pragma causing numerous
retries to squid, but am not certain.  This issue is not apparent
when using Firefox or in a non-proxy environment.  Based on the squid
logs can anyone let me know if my assumption is correct, and is there a
resolution?  Our squid version is 2.5 stable 14 running on RHEL4.

Here are the logs:

Fri Mar 26 10:58:35 2010 93 192.168.2.30 TCP_MISS/200 24596 GET
http://example.in.out/PICDetail.aspx? - DIRECT/12.12.3.4 text/html
Fri Mar 26 10:58:35 2010  2 192.168.2.30 TCP_IMS_HIT/304 226 GET
http://example.in.out/styles/default.css - NONE/- text/css
Fri Mar 26 10:58:35 2010  1 192.168.2.30 TCP_IMS_HIT/304 242 GET
http://example.in.out/scripts/MenuNavFuncs.js - NONE/-
application/x-javascript
Fri Mar 26 10:58:35 2010  6 192.168.2.30 TCP_IMS_HIT/304 242 GET
http://example.in.out/scripts/Cookie.js - NONE/-
application/x-javascript
Fri Mar 26 10:58:35 2010  9 192.168.2.30 TCP_IMS_HIT/304 242 GET
http://example.in.out/scripts/search.js - NONE/-
application/x-javascript
Fri Mar 26 10:58:36 2010 95 192.168.2.30 TCP_MISS/200 24684 GET
http://example.in.out/PICDetail.aspx? - DIRECT/12.12.3.4 text/html
Fri Mar 26 10:58:36 2010  7 192.168.2.30 TCP_NEGATIVE_HIT/404 1877
GET http://example.in.out/NSWebForms.js - NONE/- text/html
Fri Mar 26 10:58:36 2010  1 192.168.2.30 TCP_NEGATIVE_HIT/404 1877
GET http://example.in.out/NSValidation.js - NONE/- text/html
Fri Mar 26 10:58:36 2010 24 192.168.2.30 TCP_IMS_HIT/304 228 GET
http://example.in.out/wdocs/8900.1/v03%20tech%20admin/chapter%2054/03_054_005.htm
- NONE/- text/html
Fri Mar 26 10:58:36 2010  1 192.168.2.30 TCP_IMS_HIT/304 227 GET
http://example.in.out/images/line_patt_dot.gif - NONE/- image/gif
Fri Mar 26 10:58:36 2010 85 192.168.2.30 TCP_MISS/404 1869 GET
http://example.in.out/wdocs/librarian.js - DIRECT/12.12.3.4 text/html
Fri Mar 26 10:58:36 2010  1 192.168.2.30 TCP_IMS_HIT/304 243 GET
http://example.in.out/wdocs/docutils.js - NONE/-
application/x-javascript
Fri Mar 26 10:58:36 2010  3 192.168.2.30 TCP_IMS_HIT/304 229 GET
http://example.in.out/wdocs/8900.1/v03%20tech%20admin/chapter%2054/03_054_005_files/image002.jpg
- NONE/- image/jpeg
Fri Mar 26 10:58:38 2010 57 192.168.2.30
TCP_CLIENT_REFRESH_MISS/200 331 HEAD
http://example.in.out/wdocs/8900.1/v03%20tech%20admin/chapter%2054/03_054_006.htm
- DIRECT/12.12.3.4 text/html
Fri Mar 26 10:59:38 2010 58 192.168.2.30
TCP_CLIENT_REFRESH_MISS/200 331 HEAD
http://example.in.out/wdocs/8900.1/v03%20tech%20admin/chapter%2019/03_019_005.htm
- DIRECT/12.12.3.4 text/html
Fri Mar 26 11:00:38 2010 57 192.168.2.30
TCP_CLIENT_REFRESH_MISS/200 330 HEAD
http://example.in.out/wdocs/8900.1/v03%20tech%20admin/chapter%2020/03_020_001.htm
- DIRECT/12.12.3.4 text/html
Fri Mar 26 11:01:38 2010 57 192.168.2.30
TCP_CLIENT_REFRESH_MISS/200 330 HEAD
http://example.in.out/wdocs/8900.1/v03%20tech%20admin/chapter%2019/03_019_001.htm
- DIRECT/12.12.3.4 text/html
Fri Mar 26 11:02:38 2010 58 192.168.2.30
TCP_CLIENT_REFRESH_MISS/200 330 HEAD
http://example.in.out/wdocs/8900.1/v03%20tech%20admin/chapter%2001/03_001_001.htm
- DIRECT/12.12.3.4 text/html
Fri Mar 26 11:03:38 2010 65 192.168.2.30
TCP_CLIENT_REFRESH_MISS/200 330 HEAD
http://example.in.out/wdocs/8900.1/v03%20tech%20admin/chapter%2028/03_028_001.htm
- DIRECT/12.12.3.4 text/html



Thanks for any help.

Don


Re: [squid-users] url_rewrite_program with acl

2010-03-29 Thread Leonardo Carneiro - Veltrac

tks

worked like a charm


John Doe wrote:

From: Leonardo Carneiro - Veltrac 
  
I'm testing some redirectors to learn something about them. Is 
there a way to use an acl with redirectors, so they will only redirect 
some URLs or hosts, instead of redirecting all?



I guess 'url_rewrite_access' should do that.

JD


  

  


[squid-users] Re: squid deployment

2010-03-29 Thread guest01
Damn, I shouldn't have pressed the send button yet  Anyway, I
found a similar scenario, at least for small environments, at
http://wiki.squid-cache.org/ConfigExamples/Webwasher

But I don't see any way to support multiple icap servers. Is there
a solution to this problem? (I don't want to use the loadbalancer for
balancing my icap servers.) I don't want to build the following setup if I
can avoid it somehow:

 Clients
 |
 |
 v
 Loadbalancer
 |
 |
 v
 Squid-Proxies  ---> Loadbalancer --->   ICAP-Server
 |^  ICAP-Server
 ||  ^
 |---|
 |
 |
 v
 INTERNET

I would appreciate any input,

thanks, best regards


On Mon, Mar 29, 2010 at 3:18 PM, guest01  wrote:
> Hi guys,
>
> We want to replace our current proxy solution (a crappy commercial
> product which is way too expensive) and thought about Squid, which is
> a great product. I already found a couple of example configurations,
> basically for reverse proxying. What we are looking for is a caching
> and authentication (LDAP and NTLM) only solution with content
> filtering via ICAP. We have the following configuration in mind (firewalls
> omitted):
>
> Clients
>     |
>     |
>     v
> Loadbalancer
>     |
>     |
>     v
> Squid-Proxies  <>   ICAP-Server
>     |
>     |
>     v
> INTERNET
>
> We are expecting approx. 4500 requests per second average (top 6000
> RPS) and 150Mbit/s, so I suppose we need a couple of Squids. The
> preferable solution would be big servers with a lot of memory and
> Squid 3.0 on a 64Bit RHEL5.
> Does anybody know any similar scenarios? Any suggestions? What are
> your experiences?
>
> The ICAP Servers are commercial ones (at least at the beginning), but
> I have the following problem: I want to use multiple ICAP Servers in each
> Squid configuration with load balancing; unfortunately that is not
> supported and does not work in Squid 3.
>
> best regards
>


[squid-users] squid deployment

2010-03-29 Thread guest01
Hi guys,

We want to replace our current proxy solution (a crappy commercial
product which is way too expensive) and thought about Squid, which is
a great product. I already found a couple of example configurations,
basically for reverse proxying. What we are looking for is a caching
and authentication (LDAP and NTLM) only solution with content
filtering via ICAP. We have the following configuration in mind (firewalls
omitted):

Clients
 |
 |
 v
Loadbalancer
 |
 |
 v
Squid-Proxies  <>   ICAP-Server
 |
 |
 v
INTERNET

We are expecting approx. 4500 requests per second average (top 6000
RPS) and 150Mbit/s, so I suppose we need a couple of Squids. The
preferable solution would be big servers with a lot of memory and
Squid 3.0 on a 64Bit RHEL5.
Does anybody know any similar scenarios? Any suggestions? What are
your experiences?

The ICAP Servers are commercial ones (at least at the beginning), but
I have the following problem: I want to use multiple ICAP Servers in each
Squid configuration with load balancing; unfortunately that is not
supported and does not work in Squid 3.

best regards


[squid-users] error in redirector

2010-03-29 Thread senthilkumaar2021

Hi All,

I tried to configure squirm with squid to redirect addresses.
I am getting the following error in cache.log and am not able to browse 
any sites:


2010/03/29 18:29:06| helperHandleRead: unexpected reply on channel -1 
from url_rewriter #1 ''


My squid.conf look like this

url_rewrite_program /usr/local/squirm/bin/squirm
url_rewrite_children 10
url_rewrite_concurrency 302

Help me in solving this issue

Regards
senthil




[squid-users] Error building 2.7.STABLE9 on OSX 10.5.8

2010-03-29 Thread Ricardo Newbery


I get the following error when trying to build 2.7.STABLE9 on OSX  
10.5.8.  Any suggestions?




checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... cfgaux/install-sh -c -d
checking for gawk... no
checking for mawk... no
checking for nawk... no
checking for awk... awk
checking whether make sets $(MAKE)... yes
checking whether to enable maintainer-specific portions of  
Makefiles... no

checking for gcc... gcc
checking for C compiler default output file name... a.out
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking for style of include used by make... GNU
checking dependency style of gcc... gcc3
checking whether gcc and cc understand -c and -o together... rm:  
conftest.dSYM: is a directory

yes
checking build system type... i686-apple-darwin9.8.0
checking host system type... i686-apple-darwin9.8.0
checking for pkg-config... /opt/local/bin/pkg-config
Store modules built: ufs
Removal policies built: lru
Auth scheme modules built: basic
unlinkd enabled
checking for egrep... /usr/bin/egrep
checking how to run the C preprocessor... gcc -E
checking for a BSD-compatible install... /usr/bin/install -c
checking for ranlib... ranlib
checking whether ln -s works... yes
checking for sh... /bin/sh
checking for false... /usr/bin/false
checking for true... /usr/bin/true
checking for rm... /bin/rm
checking for mv... /bin/mv
checking for mkdir... /bin/mkdir
checking for ln... /bin/ln
checking for perl... /opt/local/bin/perl
checking for ar... /usr/bin/ar
checking for dirent.h that defines DIR... yes
checking for library containing opendir... none required
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for stddef.h... yes
checking for limits.h... yes
checking for sys/param.h... yes
checking for sys/socket.h... yes
checking for netinet/in.h... yes
checking for stdio.h... yes
checking for stdlib.h... yes
checking for arpa/inet.h... yes
checking for arpa/nameser.h... yes
checking for assert.h... yes
checking for bstring.h... no
checking for crypt.h... no
checking for ctype.h... yes
checking for errno.h... yes
checking for execinfo.h... yes
checking for fcntl.h... yes
checking for fnmatch.h... yes
checking for getopt.h... yes
checking for glob.h... yes
checking for gnumalloc.h... no
checking for grp.h... yes
checking for libc.h... yes
checking for linux/netfilter_ipv4.h... no
checking for linux/netfilter_ipv4/ip_tproxy.h... no
checking for malloc.h... no
checking for math.h... yes
checking for memory.h... yes
checking for mount.h... no
checking for net/if.h... yes
checking for net/pfvar.h... no
checking for netdb.h... yes
checking for netinet/if_ether.h... yes
checking for netinet/tcp.h... yes
checking for openssl/err.h... yes
checking for openssl/md5.h... yes
checking for openssl/ssl.h... yes
checking for openssl/engine.h... yes
checking for paths.h... yes
checking for poll.h... yes
checking for pwd.h... yes
checking for regex.h... yes
checking for resolv.h... yes
checking for sched.h... yes
checking for signal.h... yes
checking for stdarg.h... yes
checking for string.h... yes
checking for strings.h... yes
checking for sys/bitypes.h... no
checking for sys/file.h... yes
checking for sys/ioctl.h... yes
checking for sys/mount.h... yes
checking for md5.h... no
checking for sys/md5.h... no
checking for sys/msg.h... yes
checking for sys/prctl.h... no
checking for sys/resource.h... yes
checking for sys/poll.h... yes
checking for sys/select.h... yes
checking for sys/stat.h... yes
checking for sys/statfs.h... no
checking for sys/statvfs.h... yes
checking for syscall.h... no
checking for sys/syscall.h... yes
checking for sys/time.h... yes
checking for sys/un.h... yes
checking for sys/vfs.h... no
checking for sys/wait.h... yes
checking for sys/event.h... yes
checking for syslog.h... yes
checking for time.h... yes
checking for unistd.h... yes
checking for utime.h... yes
checking for varargs.h... no
checking for byteswap.h... no
checking for glib.h... no
checking for stdint.h... yes
checking for inttypes.h... yes
checking for grp.h... (cached) yes
checking for nss_common.h... no
checking for nss.h... no
checking for db.h... yes
checking for db_185.h... no
checking for aio.h... yes
checking for sys/capability.h... no
checking for ip_compat.h... no
checking for ip_fil_compat.h... no
checking for ip_fil.h... no
checking for ip_nat.h... no
checking for ipl.h... no
checking for netinet/ip_compat.h... no
checking for netinet/ip_fil_compat.h... no
checking for netinet/ip_fil.h... no
checking for netinet/ip_nat.h... no
checking for netinet/ipl.h... no
checking for an ANSI C-conforming const... yes
checking whether byte ordering is bigendi

Re: [squid-users] Squid 3.1.1 is available

2010-03-29 Thread Jeff Peng
On Mon, Mar 29, 2010 at 7:45 PM, Amos Jeffries  wrote:
> The Squid HTTP Proxy team is very pleased to announce the
> availability of the Squid-3.1.1 release!
>
>
> This is the first release of the Squid-3.1 series which has passed our
> criteria for use in production environments.
>

That's very nice.
In fact I have tested squid-3.1 on an Ubuntu server, and it has been
running fine for many days.

-- 
Jeff Peng
Email: jeffp...@netzero.net
Skype: compuperson


Re: [squid-users] url_rewrite_program with acl

2010-03-29 Thread John Doe
From: Leonardo Carneiro - Veltrac 
> I'm testing some redirectors to learn something about them. Is 
> there a way to use an acl with redirectors, so they will only redirect 
> some URLs or hosts, instead of redirecting all?

I guess 'url_rewrite_access' should do that.

JD
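
[Editor's note: a minimal squid.conf sketch of this approach; the ACL name, 
domain, and helper path are illustrative. The redirector is only consulted 
for requests matching the allowed ACL:]

url_rewrite_program /usr/local/bin/my_redirector
acl rewrite_hosts dstdomain .example.com
url_rewrite_access allow rewrite_hosts
url_rewrite_access deny all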


  


[squid-users] url_rewrite_program with acl

2010-03-29 Thread Leonardo Carneiro - Veltrac

Hi everyone,

I'm testing some redirectors to learn something about them. Is there a way 
to use an acl with redirectors, so they will only redirect some URLs or 
hosts, instead of redirecting all?


Tks in advance.
--
Leonardo Carneiro


[squid-users] HTTP_Miss/200,304 Very Slow responsetime. Experts please help.

2010-03-29 Thread GIGO .

I am using ISA server as a cache_peer parent and running multiple instances on 
my squid server. However, I am failing to understand why the behaviour of Squid 
is extremely slow. At home, where I have direct access to the internet, the 
same setup works fine. Please can somebody help me out.
 
regards,
 
Bilal Aslam
 
 
---
My squid server has internet access by being a secureNat client of ISA Server.
 
My Configuration file for first Instance:
visible_hostname squidLhr
unique_hostname squidMain
pid_filename /var/run/squid.pid
http_port 8080
icp_port 0
snmp_port 3161
access_log /var/logs/access.log squid
cache_log /var/logs/cache.log
cache_store_log /var/logs/store.log
cache_effective_user proxy 
cache_peer 127.0.0.1  parent 3128 0 default no-digest no-query
prefer_direct off 
# never_direct allow all (handy to test whether the instances are working in 
collaboration)

cache_dir aufs /var/spool/squid 1 16 256
coredump_dir /var/spool/squid
cache_swap_low 75
cache_replacement_policy lru
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
#Define Local Network.
acl FcUsr src "/etc/squid/FcUsr.conf"
acl PUsr src "/etc/squid/PUsr.conf"
acl RUsr src "/etc/squid/RUsr.conf"
#Define Local Servers
acl localServers dst 10.0.0.0/8
#Defining & allowing ports section
acl SSL_ports port 443  #https
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager
# Deny request to unknown ports
http_access deny !Safe_ports
# Deny request to other than SSL ports
http_access deny CONNECT !SSL_ports
#Allow access from localhost
http_access allow localhost
# Local servers should never be forwarded to neighbours/peers and they should 
# never be cached. (Note: ACL names are case-sensitive, so these must match 
# the localServers definition above.)
always_direct allow localServers
cache deny localServers
# Windows Update Section...
acl windowsupdate dstdomain windowsupdate.microsoft.com
acl windowsupdate dstdomain .update.microsoft.com
acl windowsupdate dstdomain download.windowsupdate.com
acl windowsupdate dstdomain redir.metaservices.microsoft.com
acl windowsupdate dstdomain images.metaservices.microsoft.com
acl windowsupdate dstdomain c.microsoft.com
acl windowsupdate dstdomain www.download.windowsupdate.com
acl windowsupdate dstdomain wustat.windows.com
acl windowsupdate dstdomain crl.microsoft.com
acl windowsupdate dstdomain sls.microsoft.com
acl windowsupdate dstdomain productactivation.one.microsoft.com
acl windowsupdate dstdomain ntservicepack.microsoft.com
acl wuCONNECT dstdomain www.update.microsoft.com
acl wuCONNECT dstdomain sls.microsoft.com
http_access allow CONNECT wuCONNECT FcUsr
http_access allow CONNECT wuCONNECT PUsr
http_access allow CONNECT wuCONNECT RUsr
http_access allow CONNECT wuCONNECT localhost
http_access allow windowsupdate all
http_access allow windowsupdate localhost
acl workinghours time MTWHF 09:00-12:59
acl workinghours time MTWHF 15:00-17:00
acl BIP dst "/etc/squid/Blocked.conf"
###Definitions for Blocking Rules###
###Definition of MP3/MPEG
acl FTP proto FTP
acl MP3url urlpath_regex \.mp3(\?.*)?$
acl Movies rep_mime_type video/mpeg
acl MP3s rep_mime_type audio/mpeg
###Definition of Flash Video
acl deny_rep_mime_flashvideo rep_mime_type video/flv
###Definition of  Porn
acl Sex urlpath_regex sex
acl PornSites url_regex "/etc/squid/pornlist"
###Definition of YouTube
## The videos come from several domains
acl youtube_domains dstdomain .youtube.com .googlevideo.com .ytimg.com
###Definition of FaceBook
acl facebook_sites dstdomain .facebook.com
###Definition of MSN Messenger
acl msn urlpath_regex -i gateway.dll
acl msnd dstdomain messenger.msn.com gateway.messenger.hotmail.com
acl msn1 req_mime_type application/x-msn-messenger
###Definition of Skype
acl numeric_IPs url_regex 
^(([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)|(\[([0-9af]+)?:([0-9af:]+)?:([0-9af]+)?\])):443
acl Skype_UA browser ^skype^
##Definition of Yahoo! Messenger
acl ym dstdomain .messenger.yahoo.com .psq.yahoo.com
acl ym dstdomain .us.il.yimg.com .msg.yahoo.com .pager.yahoo.com
acl ym dstdomain .rareedge.com .ytunnelpro.com .chat.yahoo.com
acl ym dstdomain .voice.yahoo.com
acl ymregex url_regex yupdater.yim ymsgr myspaceim
## Other protocols Yahoo!Messenger uses ??
acl ym dstdomain .skype.com .imvu.com
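A sketch of the deny rules these messenger definitions are typically enforced with (not part of the quoted config; they would need to sit above the general allow rules):

```
# Sketch: enforce the MSN, Skype, and Yahoo! Messenger ACLs defined above.
http_access deny msn
http_access deny msnd
http_access deny msn1
http_access deny numeric_IPs
http_access deny Skype_UA
http_access deny ym
http_access deny ymregex
```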
###Definition for disallowing download of executables

[squid-users] Squid 3.1.1 is available

2010-03-29 Thread Amos Jeffries

The Squid HTTP Proxy team is very pleased to announce the
availability of the Squid-3.1.1 release!


This is the first release of the Squid-3.1 series which has passed our 
criteria for use in production environments.



3.1.1 brings many new features and upgrades to the basic networking 
protocols. A short list of the major new features is:


 * Connection Pinning (for NTLM Auth Passthrough)
 * Native IPv6
 * Quality of Service (QoS) Flow support
 * Native Memory Cache
 * SSL Bump (for HTTPS Filtering and Adaptation)
 * TProxy v4.1+ support
 * eCAP Adaptation Module support
 * Error Page Localization
 * Follow X-Forwarded-For support
 * X-Forwarded-For options extended (truncate, delete, transparent)
 * Peer-Name ACL
 * Reply headers to external ACL.
 * ICAP and eCAP Logging
 * ICAP Service Sets and Chains
 * ICY (SHOUTcast) streaming protocol support
 * HTTP/1.1 support on connections to web servers and peers.
   (with plans to make this full support within the 3.1 series)

Further details can be found in the release notes or the wiki.
  http://www.squid-cache.org/Versions/v3/3.1/RELEASENOTES.html
  http://wiki.squid-cache.org/RoadMap/Squid3


3.1.1 still has some issues.

Some may still be resolved by a future 3.1 release:
 * IPv4 fall-back occasionally failing on dual IPv4/IPv6 websites.
 * An ongoing, very slow FD leak in AUFS, introduced somewhere during 
the Squid-3.0 cycle.

 * Windows support is still largely missing.
 * Build status for the 3.x series is still largely unknown for 
Unix-based OSes and other less popular systems.


Some cannot be fixed in the 3.1 series:
 * The lack of some features available in Squid-2.x series. See the 
regression sections of the release notes for full details.
 * The lack of IPv6 split-stack support for MacOSX, OpenBSD and maybe 
others.



All users of Squid-3.1 beta releases are urged to upgrade to this 
release as soon as possible.


3.0.STABLE25 is expected to be the last release of the 3.0 series. 
Official support for Squid-3.0 has now ceased. Bugs found in 3.0 
will continue to be fixed; however, the fixes will be added to the 3.1 
series. All users of Squid-3.0 are encouraged to plan for upgrades 
within the year.



Plans for the next series of releases are already well underway. Our 
future release plans and upcoming features can be found at:

  http://wiki.squid-cache.org/RoadMap/Squid3



Please refer to the release notes at
http://www.squid-cache.org/Versions/v3/3.1/RELEASENOTES.html
if and when you are ready to make the switch to Squid-3.1.

This new release can be downloaded from our HTTP or FTP servers:

  http://www.squid-cache.org/Versions/v3/3.1/
  ftp://ftp.squid-cache.org/pub/squid/

or the mirrors. For a list of mirror sites see

  http://www.squid-cache.org/Download/http-mirrors.dyn
  http://www.squid-cache.org/Download/mirrors.dyn

If you encounter any issues with this release please file a bug report.
  http://bugs.squid-cache.org/


Amos Jeffries


Re: [squid-users] Negotiate/NTLM authentication caching

2010-03-29 Thread Khaled Blah
Thx a lot for your answer, Amos! You are of course right with your
concerns about "IP/TCP caching". Not a very good idea!

Does the same hold true for Kerberos as well, though? I mean could it
be possible to cache Kerberos authentication in a secure fashion?

Thinking about what you said, I am wondering what the big difference
is compared to Basic/Digest authentication. With those, squid challenges
the user as well, the credentials the user's client sends are
verified by the authentication helper, and that result is cached so
that when the same user requests anything with the same credentials,
he or she will not be re-verified by the helper until the TTL
has passed, right? So what am I missing here?

Thx in advance for any insight you can give me on this!

Khaled

> 2010/3/28 Amos Jeffries :
>> Khaled Blah wrote:
>>>
>>> Hi all,
>>>
>>> I'm developing an authentication helper (Negotiate/NTLM) for squid and
>>> I am trying to understand more how squid handles this process
>>> internally. Most of all I'd like to know how and how long squid caches
>>> authentication results. I have looked at the debug logs and they show
>>> that squid seems to do "less caching" for Negotiate/NTLM than it does
>>> for Basic/Digest authentication. I am wondering whether I can do
>>> something about this so that a once verified user will only get his
>>> credentials re-verified after a certain time and not all during. I am
>>> grateful to any insight the list can give me. Thanks in advance!
>>>
>>> Khaled
>>
>> NTLM does not authenticate a user per se. It authenticates a TCP link to
>> some form of account (a user being only one type). Squid holds the
>> authentication credentials for as long as the authenticated TCP link is
>> open. It challenges the browser on any request without supplied
>> credentials, and re-verifies on every new link opened or change in existing
>> credentials.
>>
>> Caching NTLM credentials for re-use on TCP links from specific IP addresses
>> has always been a very risky business. As the world is now moving further
>> towards NAT and proxy gateways a single IP address can have multiple
>> requests from multiple clients. This makes caching NTLM credentials an even
>> worse prospect in future than it is now or ever before.
>>
>> What we are doing in Squid-3 now is improving the HTTP/1.1 support, which
>> enables TCP links to be held open under more conditions than HTTP/1.0
>> allows, and thus the length of time between credential checks to be
>> lengthened without losing security.
>>
>> I can tell you now that any patches to do with caching credentials will be
>> given some very strict checks even to be considered for acceptance into
>> Squid.
>>
>> Amos
>> --
>> Please be using
>>  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
>>  Current Beta Squid 3.1.0.18
>>
>
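For reference, a minimal helper wiring for the schemes discussed in this thread might look like the following (a sketch; the ntlm_auth path and options assume a Samba winbind install and are not from the thread). Note the asymmetry raised above: Basic results are cached for credentialsttl, while NTLM credentials stay tied to the TCP connection:

```
# NTLM via Samba's ntlm_auth helper (path is an assumption)
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 10
auth_param ntlm keep_alive on

# Basic as a fallback; its verification results ARE cached
auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic credentialsttl 2 hours

acl authed proxy_auth REQUIRED
http_access allow authed
http_access deny all
```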


Re: [squid-users] Reverse Proxy and OWA

2010-03-29 Thread Jakob Curdes

Amos Jeffries wrote:

Andrea Gallazzi wrote:

Hi All,
I am a newbie with Squid.

I am interested in Squid as a reverse proxy for Outlook Web App and 
ActiveSync for Exchange 2010.


Does anyone have experience with this?
We are running such a reverse proxy setup for several clients, but with 
Exchange 2003. Exchange 2007 and 2010 make increasing use of the web 
services and autodiscover features.
I am not sure in which way these influence the way proxying needs to be 
handled; at least the autodiscover feature influences the way you must 
handle your DNS and certificates setup.
My suggestion: first try a setup with plain port forwarding, without 
squid (probably better not to do this with a real Exchange server in a 
production network, though...).
Once you have that running, you can be sure all the certificate and 
DNS issues are solved. Then introduce squid into the working connection 
(but do not forget to eliminate the port-forwarding rule first!).
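When squid is introduced, the wiki's Exchange/OWA examples boil down to an accelerator setup roughly like this (a sketch; the hostnames, addresses, and certificate paths are placeholders, and the ssl peer option requires an SSL-enabled build):

```
# Sketch: HTTPS accelerator in front of an Exchange/OWA server.
https_port 443 accel cert=/etc/squid/mail.example.com.pem defaultsite=mail.example.com
cache_peer 192.0.2.10 parent 443 0 no-query originserver ssl login=PASS name=exchangeServer
acl OWA dstdomain mail.example.com
cache_peer_access exchangeServer allow OWA
http_access allow OWA
http_access deny all
```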


Is it possible to use squid as a forward proxy and a reverse proxy at 
the same time?
Sure. For various reasons it might be advisable to run two separate 
squids on one machine, though. For example, it is easier to track down 
errors and you can monitor the services separately. On the other hand, 
you need to change the default configuration in several places for the 
second squid (ports, IPs, logging locations, PID file, et cetera).
In any case, for such setups it is wise to use test environments; it 
is easy to screw up your internet access and then you have a hard time 
with users.


HTH,
Jakob Curdes



Re: [squid-users] Reverse Proxy and OWA

2010-03-29 Thread Amos Jeffries

Andrea Gallazzi wrote:

Hi All,
I am a newbie with Squid.

I am interested in Squid as a reverse proxy for Outlook Web App and 
ActiveSync for Exchange 2010.


Does anyone have experience with this?


Yes.


Is it possible to use squid as a forward proxy and a reverse proxy at the same time?


Yes.

See FAQ.  http://wiki.squid-cache.org/ConfigExamples

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
  Current Beta Squid 3.1.0.18


[squid-users] Reverse Proxy and OWA

2010-03-29 Thread Andrea Gallazzi

Hi All,
I am a newbie with Squid.

I am interested in Squid as a reverse proxy for Outlook Web App and 
ActiveSync for Exchange 2010.


Does anyone have experience with this?

Is it possible to use squid as a forward proxy and a reverse proxy at 
the same time?

Thank You

Andrea