Re: [squid-users] 3.1.16 FTP problems ..

2011-11-08 Thread Paul Regan
Thanks for the reply, will open the bugzilla ..

Some further info: I just tried the same thing using a 3.0.STABLE13 server,
with the same results.  I haven't run a wire trace, but cache.log
reports the same error:

ftpDataRead: read error: (107) Transport endpoint is not connected

Paul

On 7 November 2011 22:37, Amos Jeffries squ...@treenet.co.nz wrote:
 On Mon, 7 Nov 2011 14:37:24 +, Paul Regan wrote:

 New install of 3.1.16. FTP appears to be configured correctly but,
 hence this mailing list post, it's not working as expected ..

 Using ftp.emea.ibm.com as the test destination.

 Active FTP from proxy server command line works fine.

 Via a browser & squidclient

 The following error was encountered while trying to retrieve the URL:
 ftp://ftp.emea.ibm.com/
 Read Error
 The system returned: [No Error]
 An error condition occurred while reading data from the network.
 Please retry your request.

 cache.log :

 2011/11/07 12:45:20| ftpDataRead: read error: (107) Transport endpoint
 is not connected

 Wire capture :

 12:45:20.395751 IP proxyserver.domain.60769 > ftp.emea.ibm.com.ftp: S
 2543509559:2543509559(0) win 5840 <mss 1460,sackOK,timestamp 853154020
 0,nop,wscale 7>
 12:45:20.414004 IP ftp.emea.ibm.com.ftp > proxyserver.domain.60769: S
 429684985:429684985(0) ack 2543509560 win 65535 <mss 1380,nop,wscale
 5>
 12:45:20.414012 IP proxyserver.domain.60769 > ftp.emea.ibm.com.ftp: .
 ack 1 win 46
 12:45:20.444260 IP ftp.emea.ibm.com.ftp > proxyserver.domain.60769: P
 1:78(77) ack 1 win 16384

 <snip>
 Lots of packets on control channel. Looks normal. The handshake, login, file
 location etc. happen about here.

 12:45:20.608768 IP ftp.emea.ibm.com.ftp > proxyserver.domain.60769: P
 ack 78 win 16384
 12:45:20.818002 IP ftp.emea.ibm.com.ftp-data >
 proxyserver.domain.36703: S 3997244124:3997244124(0) win 65535 <mss
 1380,nop,wscale 5>

 Server connects in with a data connection. You configured ftp_passive OFF.
 So this is expected.


 12:45:20.818022 IP proxyserver.domain.36703 >
 ftp.emea.ibm.com.ftp-data: S 2004820984:2004820984(0) ack 3997244125
 win 5840 <mss 1460,nop,wscale 7>
 12:45:20.835744 IP ftp.emea.ibm.com.ftp-data >
 proxyserver.domain.36703: . ack 1 win 11520
 12:45:20.835804 IP ftp.emea.ibm.com.ftp-data >
 proxyserver.domain.36703: P 1:692(691) ack 1 win 11520
 12:45:20.835809 IP proxyserver.domain.36703 >
 ftp.emea.ibm.com.ftp-data: . ack 692 win 57

 12:45:20.835812 IP ftp.emea.ibm.com.ftp-data >
 proxyserver.domain.36703: FP 692:692(0) ack 1 win 11520

 FIN-ACK on data channel from server after 692 bytes transferred.

 12:45:20.835899 IP ftp.emea.ibm.com.ftp > proxyserver.domain.60769: P
 1108:1129(21) ack 78 win 16384
 12:45:20.836193 IP proxyserver.domain.60769 > ftp.emea.ibm.com.ftp: F
 78:78(0) ack 1129 win 63

 FIN on the control channel from Squid. Data file is received and connections
 can close.

 12:45:20.836210 IP proxyserver.domain.36703 >
 ftp.emea.ibm.com.ftp-data: R 1:1(0) ack 693 win 57
 12:45:20.853549 IP ftp.emea.ibm.com.ftp > proxyserver.domain.60769: P
 ack 79 win 16384
 12:45:20.853576 IP ftp.emea.ibm.com.ftp > proxyserver.domain.60769: P
 1129:1163(34) ack 79 win 16384

 Server keeps throwing data at Squid down the control channel after FIN.

 12:45:20.853586 IP proxyserver.domain.60769 > ftp.emea.ibm.com.ftp: R
 2543509638:2543509638(0) win 0

 Squid responds to the extra data with RST.

 12:45:20.853603 IP ftp.emea.ibm.com.ftp > proxyserver.domain.60769: FP
 1163:1163(0) ack 79 win 16384
 12:45:20.853606 IP proxyserver.domain.60769 > ftp.emea.ibm.com.ftp: R
 2543509638:2543509638(0) win 0

 More packets from the server. More RST from Squid. Only 17.41 ms have
 elapsed, so it's within the bounds of network lag.


 Overall this looks like a proper TCP closing sequence, except that the
 data channel is closed first, before any messages on the control channel
 arrive to inform Squid that the transfer is completed.

 Can you report this with the above details in bugzilla, please, so it
 does not get lost. I think we could work around it by checking the
 expected object size on data closure and skipping the error if they match.

 Amos




Re: [squid-users] 3.1.16 FTP problems ..

2011-11-08 Thread Amos Jeffries

On 8/11/2011 9:25 p.m., Paul Regan wrote:

Thanks for the reply, will open the bugzilla ..

Some further info: I just tried the same thing using a 3.0.STABLE13 server,
with the same results.  I haven't run a wire trace, but cache.log
reports the same error:

ftpDataRead: read error: (107) Transport endpoint is not connected

Paul


Good to know it's not a new regression, at least.

While I think of it, with 3.1 you can get a debug_options 9,2 trace in 
cache.log of the FTP control channel events. That may be useful to go 
with the report.
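For reference, enabling that in squid.conf would look something like the fragment below (ALL,1 keeps every other debug section at its normal level; section 9 covers Squid's FTP code):

```
# Raise only debug section 9 (FTP) to level 2
debug_options ALL,1 9,2
```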


Amos




[squid-users] Log file roll over Issues

2011-11-08 Thread Justin Lawler
Hi,

We're having issues with log file roll over in squid - when squid is under 
heavy load and the log files are very big, triggering a log file roll over 
(squid -k rotate) makes squid unresponsive, and has to be killed manually with 
a kill -9. 

Has this ever been seen before?

We're running squid 3.1.16 on solaris/sparc (t5220) machines. We're using ICAP 
for both REQMOD & RESPMOD. We have not been able to reproduce the issue on 
solaris AMD machines. Note - the sparc machines are a good deal slower than the 
AMD machines, although they support 64 threads on 8 cores.


Thanks and regards,
Justin



Re: [squid-users] Log file roll over Issues

2011-11-08 Thread Amos Jeffries

On 8/11/2011 11:00 p.m., Justin Lawler wrote:

Hi,

We're having issues with log file roll over in squid - when squid is under 
heavy load and the log files are very big, triggering a log file roll over 
(squid -k rotate) makes squid unresponsive, and has to be killed manually with 
a kill -9.


Unresponsive for how long?

Rotating logs involves pausing all other operations (i.e. it is a 
synchronous blocking operation) while the log files are moved, new logs 
are opened, the cache index is dumped to a new journal file, new helpers 
are started and existing ones are scheduled for graceful close; then 
general operations are resumed.


The cache journal handling, or the disk I/O for move+create of new logs 
may be the holdup. Possibly if you have a lot of RAM allocated the 
helper forks could take a while as well.  If you can use the strace tool 
to figure out what Squid is doing it would help.
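A sketch of that, hedged for this thread's Solaris boxes, where the strace equivalent is truss (output path and timing are illustrative; start it just before issuing the rotate):

```shell
# Attach a syscall tracer to the running Squid, then trigger the rotate,
# to see which call it blocks on (log rename? fork? journal write?).
SQUID_PID=$(pgrep -x squid | head -n 1)
if [ -n "$SQUID_PID" ]; then
    # -f follows the helper processes that rotate forks off.
    truss -f -o /tmp/squid-rotate.truss -p "$SQUID_PID" &
    TRACER=$!
    squid -k rotate
    sleep 10                      # let the rotate proceed under the tracer
    kill "$TRACER" 2>/dev/null
else
    echo "squid is not running"
fi
```

On Linux the same idea is `strace -f -o /tmp/squid-rotate.trace -p PID`.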




Has this ever been seen before?


IIRC, non-responsiveness has not been mentioned in relation to rotate before. 
CPU consumption and RAM OOM crashes are the common reports.


Amos


RE: [squid-users] Log file roll over Issues

2011-11-08 Thread Jenny Lee

 Hi,
 
 We're having issues with log file roll over in squid - when squid is under 
 heavy load and the log files are very big, triggering a log file roll over 
 (squid -k rotate) makes squid unresponsive, and has to be killed manually 
 with a kill -9. 

You would be better off moving the log files aside, sending squid a reconfigure 
and working on the log files later so that you do not block squid.

That is what I do for access.log:

mv /squid/logs/access.log /squid/logs/access.log.bak
/squid/squid -k reconfigure
gzip /squid/logs/access.log.bak 

Jenny 

Re: [squid-users] Log file roll over Issues

2011-11-08 Thread Amos Jeffries

On 8/11/2011 11:48 p.m., Jenny Lee wrote:

Hi,

We're having issues with log file roll over in squid - when squid is under 
heavy load and the log files are very big, triggering a log file roll over 
(squid -k rotate) makes squid unresponsive, and has to be killed manually with 
a kill -9.

You would be better off moving the log files aside, sending squid a reconfigure 
and working on the log files later so that you do not block squid.

That is what I do for access.log:

mv /squid/logs/access.log /squid/logs/access.log.bak
/squid/squid -k reconfigure
gzip /squid/logs/access.log.bak

Jenny   


Reconfigure is a lot more intrusive to the traffic than rotate, since it 
involves reloading the config files and closing the server ports for a while.


You can do the same move+rotate+gzip sequence with the logfile_rotate 
directive in Squid set to '0'. In fact, that is exactly what the 
third-party logrotate.d system and others do.
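Sketched as a small external rotation script, under the assumption that logfile_rotate 0 is already set in squid.conf, so that 'squid -k rotate' only closes and reopens the log files without renaming anything (directory and naming scheme are illustrative):

```shell
# Move the log aside, ask Squid to reopen its log files, then compress
# the old one at leisure, without blocking Squid.
rotate_squid_log() {
    logdir=$1
    stamp=$(date +%Y%m%d%H%M%S)
    mv "$logdir/access.log" "$logdir/access.log.$stamp" || return 1
    # Squid still holds the old (renamed) file open; with logfile_rotate 0
    # this call just reopens the configured log paths.
    command -v squid >/dev/null 2>&1 && squid -k rotate
    gzip "$logdir/access.log.$stamp"
}
```

Called from cron as, e.g., `rotate_squid_log /var/log/squid`.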


Amos


[squid-users] TCP_MISS/504 when accesing www.citibank.com

2011-11-08 Thread feralert
Hi all,

I'm new on the list so hi everyone and please do excuse my english.

And now down to business :)

I'm having trouble accessing www.citibank.com with squid 2.7. I'm
actually trying from two different platforms (a Debian Lenny linux box
running 2.7.STABLE3 and NetBSD running 2.7.STABLE7), using different
connections to the internet, and both get weird results.


* ) With the NetBSD machine:

What I usually get in the browser is a (110) Connection timed out,
and a TCP_MISS/504 in the server's logs.

While trying to retrieve the URL: http://www.citibank.com/
The following error was encountered:
Connection to 192.193.219.58 Failed
The system returned:
(110) Connection timed out
The remote host or network may be down. Please try the request again.

03:16:39 120355 10.5.5.236 TCP_MISS/504 1454 GET
http://www.citibank.com/ - DIRECT/192.193.219.58 text/html


But sometimes (on very rare occasions) it gets through and I see
the following:

1320746405.173  15615 10.5.5.236 TCP_MISS/200 6985 CONNECT
metrics1.citibank.com:443 - DIRECT/63.140.40.2 -

And in the browser I see that I get redirected to
"https://online.citibank.com/US/Welcome.c". From there on I have no
trouble surfing the website. Also, if I go directly to
"https://online.citibank.com" it works every time.



* ) With the linux one:

With this one I have never gotten through (maybe I haven't tried as many
times). I also get the Connection timed out and TCP_MISS/504, and
also a Zero Sized Reply and a TCP_MISS/502:

The browser shows:

 While trying to retrieve the URL: http://www.citibank.com/
 The following error was encountered:
 Zero Sized Reply
Squid did not receive any data for this request.

And in the logs I get:
03:01:03 150396 10.5.5.236 TCP_MISS/502 1334 GET
http://www.citibank.com/ - DIRECT/192.193.103.222 text/html
03:03:35 151215 10.5.5.236 TCP_MISS/504 1477 GET
http://www.citibank.com/favicon.ico - DIRECT/192.193.103.222 text/html



Machines are:

- Linux box (debian lenny).
# squid -v
Squid Cache: Version 2.7.STABLE3
configure options:  '--prefix=/usr' '--exec_prefix=/usr'
'--bindir=/usr/sbin' '--sbindir=/usr/sbin'
'--libexecdir=/usr/lib/squid' '--sysconfdir=/etc/squid'
'--localstatedir=/var/spool/squid' '--datadir=/usr/share/squid'
'--enable-async-io' '--with-pthreads'
'--enable-storeio=ufs,aufs,coss,diskd,null' '--enable-linux-netfilter'
'--enable-arp-acl' '--enable-epoll'
'--enable-removal-policies=lru,heap' '--enable-snmp'
'--enable-delay-pools' '--enable-htcp' '--enable-cache-digests'
'--enable-underscores' '--enable-referer-log' '--enable-useragent-log'
'--enable-auth=basic,digest,ntlm,negotiate'
'--enable-negotiate-auth-helpers=squid_kerb_auth' '--enable-carp'
'--enable-follow-x-forwarded-for' '--with-large-files'
'--with-maxfd=65536' 'i386-debian-linux'
'build_alias=i386-debian-linux' 'host_alias=i386-debian-linux'
'target_alias=i386-debian-linux' 'CFLAGS=-Wall -g -O2' 'LDFLAGS='
'CPPFLAGS='

- NetBSD 5.0.2 machine:
[root@prometeo ~]# squid -v
Squid Cache: Version 2.7.STABLE7
configure options:  '--sysconfdir=/usr/pkg/etc/squid'
'--localstatedir=/var/squid' '--datarootdir=/usr/pkg/share/squid'
'--enable-auth=basic,digest,ntlm'
'--enable-cachemgr-hostname=localhost' '--enable-delay-pools'
'--enable-icmp' '--enable-removal-policies=lru,heap' '--enable-poll'
'--enable-underscores' '--enable-storeio=ufs null aufs coss diskd'
'--with-aio' '--enable-ipf-transparent' '--enable-carp'
'--enable-snmp' '--enable-ssl' '--with-openssl=/usr'
'--enable-basic-auth-helpers=getpwnam MSNT NCSA YP PAM'
'--enable-digest-auth-helpers=password'
'--enable-ntlm-auth-helpers=fakeauth SMB'
'--enable-external-acl-helpers=ip_user unix_group' '--prefix=/usr/pkg'
'--build=i386--netbsdelf' '--host=i386--netbsdelf'
'--mandir=/usr/pkg/man' 'build_alias=i386--netbsdelf'
'host_alias=i386--netbsdelf' 'CC=cc' 'CFLAGS=-O2 -pthread
-I/usr/include' 'LDFLAGS=-L/usr/lib -Wl,-R/usr/lib -pthread
-Wl,-R/usr/pkg/lib' 'LIBS=' 'CPPFLAGS=-I/usr/include'


I can post both squid.conf files if requested.


Any hints?

Cheers,
Fred.


[squid-users] MIB

2011-11-08 Thread Paolo Supino
Hi

I've been trying to find an elaborated description of each OID in
Squid's MIB, but I have not found any. Is there an elaborated
description of the MIB somewhere? If so, where?





TIA
Paolo


Re: [squid-users] MIB

2011-11-08 Thread Amos Jeffries

On 9/11/2011 12:35 a.m., Paolo Supino wrote:

Hi

I've been trying to find an elaborated description of each OID for
Squid's MIB but I have not found any. Is there an elaborated
description of the MIB somewhere? If so where?



http://wiki.squid-cache.org/Features/Snmp#Squid_OIDs

Amos


Re: [squid-users] MIB

2011-11-08 Thread Paolo Supino
Hi

 I saw in the archives a reply you sent with more details about a few
OIDs (http://marc.info/?l=squid-users&m=122818959808426&w=2). I'm
looking for a similar level of detailed description for other OIDs,
possibly for all MIB OIDs. Is there something like that?






TIA
Paolo





[squid-users] Squid 3.1 NTLM Passthrough (SSO) to IIS with Firefox

2011-11-08 Thread Bartschies, Thomas
Hi,

our setup is:
Firefox 7.0.1, Squid 3.1.16 and Sharepoint Server on IIS. 
In Firefox we've set already:
network.automatic-ntlm-auth.trusted-uris to the server address
network.automatic-ntlm-auth.allow-proxies = true (default)

in squid.conf, we've tried some combinations of the following settings,
having the current settings this way:
client_persistent_connections on
server_persistent_connections on
pipeline_prefetch off

Every time we try to connect to the SharePoint site, the browser
authentication box pops up. Even when we supply correct credentials,
the request for them pops up again, making it impossible to log on to
the site.

Internet Explorer 8/9 works fine. Google Chrome 15 also requests
credentials once and then logon works.

First question is: Should this even work with Firefox, or is it known
not to?

If it should work, what other settings we've possibly missed?

Connection pinning seems to be working, if I'm reading the traces
correctly. Sharepoint answers with HTTP Code 401.

Our Proxy Setup is open. There are absolutely no client address
restrictions and we're also not using proxy authentication.
So there's not ntlm_auth helper in use.

Kind regards,
Thomas


Re: [squid-users] MIB

2011-11-08 Thread Amos Jeffries

On 9/11/2011 1:07 a.m., Paolo Supino wrote:

Hi

  I saw in the archives a reply you sent with more details about a few
OIDs (http://marc.info/?l=squid-users&m=122818959808426&w=2).  I'm
looking for a similar level of detailed descriptions for other OIDs,
possibly for all MIB OIDs. Is there something like that?


That is all we have for the OIDs.

If the description part of any row is not clear enough, check the FAQ 
section of the wiki for an explanation of the terms used. Or ask here.
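To poke at those OIDs against a live Squid, SNMP first has to be enabled in squid.conf; a minimal sketch (the community string and port are illustrative):

```
acl snmppublic snmp_community public
snmp_port 3401
snmp_access allow snmppublic localhost
snmp_access deny all
```

After a reconfigure, something like `snmpwalk -v 2c -c public localhost:3401 .1.3.6.1.4.1.3495.1` walks Squid's tree (3495 is the Squid enterprise OID), and the descriptions on the wiki page map onto what comes back.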


Amos


Re: [squid-users] Squid 3.1 NTLM Passthrough (SSO) to IIS with Firefox

2011-11-08 Thread Amos Jeffries

On 9/11/2011 1:11 a.m., Bartschies, Thomas wrote:

Hi,

our setup is:
Firefox 7.0.1, Squid 3.1.16 and Sharepoint Server on IIS.
In Firefox we've set already:
network.automatic-ntlm-auth.trusted-uris to the server address
network.automatic-ntlm-auth.allow-proxies = true (default)

in squid.conf, we've tried some combinations of the following settings,
having the current settings this way:
client_persistent_connections on
server_persistent_connections on


Right, the above need to be on for NTLM to work properly.


pipeline_prefetch off

Every time we try to connect to the sharepoint site, the browser
authentication box pops up. Even when we supply
correct credentials, the request for them pops up again. Making it
impossible to logon to the site.

Internet Explorer 8/9 works fine. Google Chrome 15 also requests
credentials once and then logon works.

First question is: Should this even work with Firefox, or is it known
not to?


It is known to work as seamlessly as IE when setup properly.

This sounds like



If it should work, what other settings we've possibly missed?


There is nothing special for Firefox. Since the other browsers are 
working fine (through the proxy?) it suggests a config issue setting up 
firefox.




Connection pinning seems to be working, if I'm reading the traces
correctly. Sharepoint answers with HTTP Code 401.

Our Proxy Setup is open. There are absolutely no client address
restrictions and we're also not using proxy authentication.
So there's no ntlm_auth helper in use.

Kind regards,
Thomas


Amos


[squid-users] load balancing

2011-11-08 Thread Nicola Gentile
Good Morning,
I have a Squid proxy on Debian with Kerberos authentication and it works fine.
I would like to create a load-balancing cluster of 2-3 Squid proxies.
In particular, the clients connect to the load balancer, which
redirects each request to one of the proxies.
These proxies must still authenticate through Kerberos.

Is it possible to implement something like that?
What can I use?

Best regards.

Nicola


[squid-users] Cacheing in the cloud

2011-11-08 Thread David Brown
Hello. With all of the SaaS, PaaS and the like running on clouds
everywhere, with packaged deployments that can't be tinkered with,
where do Squid and caching come into the game?

Does squid run in these types of environments?

If so, is the caching advantage realized the same as on traditional
stand-alone hardware?

I ran some searches at squid-cache.org but did not find any really good
reading on this subject.



Re: [squid-users] load balancing

2011-11-08 Thread guest01
Hi,

Yes, it is even pretty easy to accomplish. We are using a dedicated
loadbalancer (but you can of course use LVS as loadbalancer) which is
balancing proxy requests to 8 squid instances on 4 different real
servers with Kerberos authentication. We are not using any cache
hierarchy, just 4 standalone squid servers.
Just create a virtual loadbalancer IP, configure a DNS entry for that
IP and configure this FQDN (don't use the IP address, because Kerberos
won't work) in your client browsers. Create a Kerberos ticket for this
hostname/FQDN (I assume you already did something similar for your
current setup) and use this ticket file on your squid servers. That's
pretty much it.
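For squid 3.1 the matching squid.conf fragment would look roughly like this; proxy.example.com, the realm and the helper path are placeholders standing in for the balancer FQDN described above:

```
# The helper must present the service principal of the load balancer's
# FQDN, since that is what the browsers request a ticket for.
auth_param negotiate program /usr/lib/squid/squid_kerb_auth -s HTTP/proxy.example.com@EXAMPLE.COM
auth_param negotiate children 10
auth_param negotiate keep_alive on
acl kerb_auth proxy_auth REQUIRED
http_access allow kerb_auth
```

The keytab holding that principal is usually handed to the helper via the KRB5_KTNAME environment variable in Squid's startup script.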

regards
Peter




RE: [squid-users] live.com domain caching issue

2011-11-08 Thread zongo saiba
Guys, 

Thanks a lot for your help

Zongo

-Original Message-
From: John Hardin [mailto:jhar...@impsec.org] 
Sent: 08 November 2011 01:47
To: zongo saiba
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] live.com domain caching issue

On Mon, 7 Nov 2011, zongo saiba wrote:

 Is there any possibility to cache domains related to Windows?
 I have been strolling the internet for days but with no solution, and 
 that is why I decided to ask for help

 acl windows_live dstdomain  login.live.com:443

You want to intercept, decrypt and cache your users' logins to live.com?

-- 
  John Hardin KA7OHZhttp://www.impsec.org/~jhardin/
  jhar...@impsec.orgFALaholic #11174 pgpk -a jhar...@impsec.org
  key: 0xB8732E79 -- 2D8C 34F4 6411 F507 136C  AF76 D822 E6E6 B873 2E79
---
   It is not the place of government to make right every tragedy and
   woe that befalls every resident of the nation.
---
  4 days until Veterans Day



[squid-users] availability of last stable version

2011-11-08 Thread Sidnei Moreira
hi,

 

does anyone know when the last stable version of squid (i.e. 3.1.16) will be
available for installation on Ubuntu or Debian via apt?

 

thanks

sidnei

 




Re: [squid-users] Squid 3.1 NTLM Passthrough (SSO) to IIS with Firefox

2011-11-08 Thread Bartschies, Thomas

Hi,

I should add that we're running squid NOT in transparent mode, and that the proxy 
port is 8080 and NOT 80 as one may have guessed.
I don't know of any other Firefox config settings than the ones I've already 
mentioned, with the exception of the network settings for
Kerberos authentication. The squid traces clearly show that NTLM 
authentication is used, so Kerberos shouldn't be relevant.

Here is an excerpt from my config, without some access rules and acls. Even 
without the cache_peer, no change.

acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
acl SSL_ports port 443 1025-65535 22
acl Safe_ports port 80 81 83 85 # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 22  # https, snews
acl Safe_ports port 70  # gopher
acl Safe_ports port 631 # cups
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
http_access deny msnmessenger
http_access deny to_localhost
http_access allow localhost
http_access allow manager localhost
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow all
cache_peer 10.x.x.x parent 8080 3130 no-query default
auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off
http_reply_access deny aim_http
http_reply_access allow all
icp_access allow localnet
icp_access deny all
htcp_access allow localnet
htcp_access deny all
http_port 8080 connection-auth=on
hierarchy_stoplist cgi-bin ?
cache_mem 500 MB
cache_dir aufs /var/spool/squid 2000 16 256
maximum_object_size 1 KB
ftp_list_width 64
url_rewrite_children 15
url_rewrite_access deny localhost
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern (cgi-bin|\?)    0       0%      0
refresh_pattern .   0   20% 4320
quick_abort_pct 95
negative_dns_ttl 1 seconds
request_header_access Accept-Encoding deny support.microsoft.com
reply_header_access Accept-Encoding deny support.microsoft.com
forward_timeout 15 minutes
request_timeout 30 minutes
shutdown_lifetime 10 seconds
client_persistent_connections on
server_persistent_connections on
log_icp_queries off
error_directory /usr/share/squid/errors/de
always_direct allow local-intranet
icap_enable off
icap_preview_enable on
icap_preview_size 128
icap_send_client_ip on
dns_nameservers 127.0.0.1 212.202.215.1 212.202.215.2
ignore_unknown_nameservers off
forwarded_for off
pipeline_prefetch off
ignore_expect_100 on

Regards, Thomas



[squid-users] squid reverse-config - pass external clients ip as origin

2011-11-08 Thread Fuchs, Martin
Hi !

Is there somehow the possibility to tell squid in a reverse config to pass the 
external clients ip as the origin ip ?
I know of the many x-forwarded-for discussions, but here I have a case with 
squid as a generic firewall package, where I cannot edit every internal 
destinations configs...

Any help is greatly appreciated ;-)

regards,
martin


Re: [squid-users] squid reverse-config - pass external clients ip as origin

2011-11-08 Thread Amos Jeffries

On Tue, 8 Nov 2011 21:23:21 +, Fuchs, Martin wrote:

Hi !

Is there somehow the possibility to tell squid in a reverse config to
pass the external clients ip as the origin ip ?


Careful: "origin" is the correct term for the web server.
I assume you mean passing the client IP, so that the server sees the 
client connecting.



I know of the many x-forwarded-for discussions, but here I have a
case with squid as a generic firewall package, where I cannot edit
every internal destinations configs...


Then you are probably screwed. The only alternative to XFF headers is 
TPROXY, forging packets leaving Squid with the client IP.



Using TPROXY means abandoning all the reverse-proxy benefits and taking 
up all the interception problems, along with the extra problem that 
traffic has to be forcibly bottlenecked through Squid, creating a single 
point of failure. The origin servers will respond directly to those 
client IPs and must themselves be diverted back into Squid. The choice 
is yours, but I do not recommend it.


IMHO the best you can do is pass the forwarded-for details and inform 
those who do have config access to use it, or get bad traffic accounting. 
XFF is a well-known feature with decades of history and help available 
online if they need it.
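For completeness, the squid.conf directive controlling that header is shown below; it is the default in most builds, so this is only to make the recommendation concrete:

```
# Append the real client IP to the X-Forwarded-For header on
# requests passed to the origin servers.
forwarded_for on
```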


Amos



Re: [squid-users] availability of last stable version

2011-11-08 Thread Amos Jeffries

On Tue, 8 Nov 2011 14:09:39 -0400, Sidnei Moreira wrote:

hi,

does anyone know when the last stable version of squid (i.e. 3.1.16)
will be available for installation on Ubuntu or Debian via apt?



It went into the Debian sid repositories last week.
I usually leave the unofficial Ubuntu back-port packages until it has 
also at least hit the Testing repositories in Debian.


Amos



Re: [squid-users] Log file roll over Issues

2011-11-08 Thread Brett Lymn
On Tue, Nov 08, 2011 at 10:00:34AM +, Justin Lawler wrote:
 
 We're having issues with log file roll over in squid - when squid is under 
 heavy load and the log files are very big, triggering a log file roll over 
 (squid -k rotate) makes squid unresponsive, and has to be killed manually 
 with a kill -9. 
 
 Has this ever been seen before?
 

Do you have the logs on a ZFS volume?  Are you using logadm to rotate
the logs?  If yes to both of these, are your patches up to date?  There
was a bug in ZFS/logadm that caused squid to hang trying to write to a
new log file, you may be getting bitten by that.
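(A hedged workaround sketch, if the logadm path is the culprit: let Squid rotate its own logs and take logadm out of the loop. The directive and the `-k rotate` flag are standard Squid; the binary path and cron schedule are just examples.)

```
# squid.conf: keep 10 rotated generations of each log file.
logfile_rotate 10

# crontab on the Squid box: rotate nightly using Squid's own
# mechanism rather than logadm renaming files underneath it.
#   15 3 * * * /usr/sbin/squid -k rotate
```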

-- 
Brett Lymn




Re: [squid-users] Squid 3.1 NTLM Passthrough (SSO) to IIS with Firefox

2011-11-08 Thread E.S. Rosenberg
In our environment, Firefox on Windows machines that are not in the
domain only worked properly when we switched it to using its own NTLM
implementation instead of the native one; this is done by setting
network.auth.force-generic-ntlm to true.
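(For reference, the same switch expressed as a user.js / prefs.js line — the preference name is the real Firefox one mentioned above, set on each non-domain client:)

```
// user.js on the non-domain Windows clients: force Firefox's
// internal NTLM implementation instead of the native SSPI one.
user_pref("network.auth.force-generic-ntlm", true);
```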

I am no big NTLM/AD guru (my field is the linux/unix machines in our
school), but from what I gleaned Mozilla encourages *not* using their
NTLM implementation, since they see it as less secure than the native
implementation. But I could be wrong here; if anyone can enlighten me
I'd be happy :).

As far as I recall, on a Windows machine using native NTLM while not
in the domain you also have to add the domain part in front of the
username (i.e. domain\username), because otherwise it sends the local
machine name as the 'domain'. But I think even with that, the dialog
would still keep popping up when using native NTLM instead of Mozilla's.

I have also noticed that when using NTLM auth on a client that is not
in the domain (Windows/Linux) you may be presented with multiple
authentication dialogs when you start to browse. My theory on that has
always been that the browser sends multiple requests, Squid replies
to each request with a 407, and since the browser doesn't have
authentication details yet it fires up a dialog for every 407
received.

Hopefully this was helpful, good luck,
Eli

2011/11/8 Bartschies, Thomas thomas.bartsch...@cvk.de:

 Hi,

 I should add that we're running squid NOT in transparent mode and that the 
 proxy port is 8080 and NOT 80 as one may have guessed.
 I don't know of any other firefox config settings than the ones I've already 
 mentioned, with the exception of the network settings for
 kerberos authentication. The squid traces clearly show that NTLM 
 authentication is used, so kerberos shouldn't be relevant.

 Here is an excerpt from my config, without some access rules and acls. Even 
 without the cache_peer, no change.

 acl manager proto cache_object
 acl localhost src 127.0.0.1/32
 acl to_localhost dst 127.0.0.0/8 0.0.0.0/32
 acl SSL_ports port 443 1025-65535 22
 acl Safe_ports port 80 81 83 85 # http
 acl Safe_ports port 21          # ftp
 acl Safe_ports port 443 22      # https, snews
 acl Safe_ports port 70          # gopher
 acl Safe_ports port 631         # cups
 acl Safe_ports port 1025-65535  # unregistered ports
 acl Safe_ports port 280         # http-mgmt
 acl Safe_ports port 488         # gss-http
 acl Safe_ports port 591         # filemaker
 acl Safe_ports port 777         # multiling http
 http_access deny msnmessenger
 http_access deny to_localhost
 http_access allow localhost
 http_access allow manager localhost
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 http_access allow all
 cache_peer 10.x.x.x parent 8080 3130 no-query default
 auth_param basic children 5
 auth_param basic realm Squid proxy-caching web server
 auth_param basic credentialsttl 2 hours
 auth_param basic casesensitive off
 http_reply_access deny aim_http
 http_reply_access allow all
 icp_access allow localnet
 icp_access deny all
 htcp_access allow localnet
 htcp_access deny all
 http_port 8080 connection-auth=on
 hierarchy_stoplist cgi-bin ?
 cache_mem 500 MB
 cache_dir aufs /var/spool/squid 2000 16 256
 maximum_object_size 1 KB
 ftp_list_width 64
 url_rewrite_children 15
 url_rewrite_access deny localhost
 refresh_pattern ^ftp:           1440    20%     10080
 refresh_pattern ^gopher:        1440    0%      1440
 refresh_pattern (cgi-bin|\?)    0       0%      0
 refresh_pattern .               0       20%     4320
 quick_abort_pct 95
 negative_dns_ttl 1 seconds
 request_header_access Accept-Encoding deny support.microsoft.com
 reply_header_access Accept-Encoding deny support.microsoft.com
 forward_timeout 15 minutes
 request_timeout 30 minutes
 shutdown_lifetime 10 seconds
 client_persistent_connections on
 server_persistent_connections on
 log_icp_queries off
 error_directory /usr/share/squid/errors/de
 always_direct allow local-intranet
 icap_enable off
 icap_preview_enable on
 icap_preview_size 128
 icap_send_client_ip on
 dns_nameservers 127.0.0.1 212.202.215.1 212.202.215.2
 ignore_unknown_nameservers off
 forwarded_for off
 pipeline_prefetch off
 ignore_expect_100 on

 Regards, Thomas

 -Ursprüngliche Nachricht-
 Von: Amos Jeffries [mailto:squ...@treenet.co.nz]
 Gesendet: Dienstag, 8. November 2011 13:45
 An: squid-users@squid-cache.org
 Betreff: Re: [squid-users] Squid 3.1 NTLM Passthrough (SSO) to IIS with 
 Firefox

 On 9/11/2011 1:11 a.m., Bartschies, Thomas wrote:
 Hi,

 our setup is:
 Firefox 7.0.1, Squid 3.1.16 and Sharepoint Server on IIS.
 In Firefox we've set already:
 network.automatic-ntlm-auth.trusted-uris to the server address
 network.automatic-ntlm-auth.allow-proxies = true (default)

 in squid.conf, we've tried some combinations of the following settings,
 having the current settings this way:
 client_persistent_connections on
 server_persistent_connections on

 Right, the above need to be on for NTLM to work properly.
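(Pulling the NTLM-relevant lines together into a minimal sketch — all three directives appear in the posted config and are standard Squid:)

```
# NTLM is connection-oriented, so both legs must stay persistent
# and the listening port must allow connection-based auth pinning.
client_persistent_connections on
server_persistent_connections on
http_port 8080 connection-auth=on
```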

 

Re: [squid-users] Cacheing in the cloud

2011-11-08 Thread Amos Jeffries

On Tue, 08 Nov 2011 08:02:29 -0600, David Brown wrote:

Hello, with all of the SaaS, PaaS and the like running on clouds
everywhere, with packaged deployments that can't be tinkered with, where
do Squid and caching come into the game?


In the other category, AFAIK. (My knowledge of cloud details is low, 
so this may be so much hot air; YHBW.)


Depends on what you call cloud, though.

If you are talking about VM-style stuff, where things float around 
between CPUs and physical machines, the cloud is done at the individual 
packet level. Squid is not even close to relevant at that level.


If you are talking about the SaaS side of clouds, where services use 
AJAX / XHR / RESTful HTTP for APIs and interactions, Squid is one of the 
engines that can be used to move traffic around and provide scalable 
capacity, integrating with ESI and ICAP services to provide pluggable 
capabilities. We are adding the real-time controls to integrate it 
better with the new shiny toys and management systems, so the abilities 
are a bit clunky in older Squid and more flexible in feature support as 
the releases get more recent.


If you mean something else, ... um. !?



Does squid run in these types of environments?


It usually runs as the foundation software for scaling out HTTP-based 
SaaS systems, but can also run as a VM on top of the other types of 
cloud.


IIRC there is an Amazon EC2-based Squid device floating around 
somewhere for use as an easily scaled CDN in that cloud.




If so, is the cacheing advantage realized the same as in traditional
stand-alone hardware?


If by the same you mean the same way: yes.
  Squid has not changed much in its storage criteria since clouds 
became popular again.



If by the same you mean the same amount: unknown; the cloud 
benefits/problems fade beside the variability in site designs.


  For example, at the two extremes we have Wikipedia, designed to be 
cache friendly, where general visitor traffic scales to 100% (many TB 
per minute) with little change in the internal service resource usage; 
whereas YouTube is designed to be cache unfriendly and can blow out even 
a cloud's bandwidth capacity under the same type of popularity.
  AJAX, XHR and RESTful requests tend to be less cacheable by their 
dynamic nature, but not necessarily so. Getting the cache benefit out of 
them requires expertise which a lot of the current web-devs seem not to 
have, or at least not to think about when building things quickly.
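(As a concrete illustration of that expertise gap — these are hypothetical API responses, though the headers themselves are standard HTTP/1.1 — the difference is often a single header:)

```
# Cache-friendly: Squid may serve this for 5 minutes without
# revalidating against the origin.
HTTP/1.1 200 OK
Cache-Control: public, max-age=300
Vary: Accept-Encoding

# Cache-hostile: every request is forced back to the origin.
HTTP/1.1 200 OK
Cache-Control: no-store
```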


This is specific to the caching features though, others like load 
balancing and traffic handling are not affected.




I ran some searches @ squid-cache.org but I did not find any real 
good

reading on this subject.


IMO that is because cloud and the related terminology are just the new 
words for stuff Squid and other software have been doing for decades. 
The documentation has not caught up to the re-wording, or to describing 
how the traditional proxy cluster and cloud concepts map together. Take 
a term like routing, used above: in the Squid context it has nothing to 
do with BGP or packets, but maps easily to the cloud concept of channel 
management. Or caching, which in the SaaS cloud sits somewhere between 
an aggregating buffer and a data source.



PS: blog/whitepaper/ web articles welcome if anyone with more knowledge 
in both areas wants to try their hand at writing something up properly.


Amos



Re: [squid-users] squidnt 2.7Stable8 - NTLM/negotiate user authproblems with 2008R2/Win7

2011-11-08 Thread Amos Jeffries

On Tue, 8 Nov 2011 07:41:57 -0800 (PST), franzo318 wrote:

hi guys,

installation: squidnt 2.7-8 on Win2008R2 (domain member server)
clients: win7/other 2008R2 Server
configuration: with user authentication -
auth_param ntlm program c:/squid/libexec/mswin_ntlm_auth.exe

result: the proxy is not able to authenticate the client request
because it cannot handle the default Win7/2008 R2 security setting
LAN Manager Authentication Level.
It would only work if this setting were changed from the default to
NTLM only - but this change would result in a security hole!!!

the same problem occurs while using the negotiate scheme and
auth_param negotiate program 
c:/squid/libexec/mswin_negotiate_auth.exe


my conclusion:
user authentication with squidnt 2.7 in a 2008R2/Win7 environment is
not possible without security impact.


Um, SquidNT 2.7 was a copyright infringing trojan built from Squid 
sources. I hope you mean the Acme packaged build of Squid 2.7 for 
Windows. SquidNT also being the internal alpha code name which was 
dropped when the Windows support was merged to the stable releases of 
Squid.




is this right? or can anybody provide a workaround to the described 
problem?


Since the Win2008 default is Kerberos authentication (AKA 
Negotiate/Kerberos) rather than NTLM, the use of mswin_ntlm_auth.exe is 
itself a/the security hole, in a manner of speaking.


mswin_negotiate_auth.exe uses the Windows native APIs to do Kerberos, 
so should work. But 2.7 is a bit old and there may be some small API 
updates needed since it was released. You may want to contact Guido at 
Acme for support.


HTH
Amos



Re: [squid-users] TCP_MISS/504 when accesing www.citibank.com

2011-11-08 Thread Amos Jeffries

On 9/11/2011 12:29 a.m., feralert wrote:

Hi all,

I'm new on the list so hi everyone and please do excuse my english.

And now down to bussiness :)

I'm having trouble accessing www.citibank.com with squid 2.7. I'm
actually trying from two different platforms (a Debian Lenny linux box
running 2.7.STABLE3 and a NetBSD box running 2.7.STABLE7), using
different connections to the internet, and both get weird results.


* ) With the NetBSD machine:

What I usually get in the browser is a (110) Connection timed out,
and a TCP_MISS/504 in the server's logs.


While trying to retrieve the URL: http://www.citibank.com/
The following error was encountered:
Connection to 192.193.219.58 Failed
The system returned:
(110) Connection timed out
The remote host or network may be down. Please try the request again.

03:16:39 120355 10.5.5.236 TCP_MISS/504 1454 GET
http://www.citibank.com/ - DIRECT/192.193.219.58 text/html


But sometimes (on a very few rare occasions) it gets through and I see
the following:

1320746405.173  15615 10.5.5.236 TCP_MISS/200 6985 CONNECT
metrics1.citibank.com:443 - DIRECT/63.140.40.2 -

And in the browser I see that I get redirected to
https://online.citibank.com/US/Welcome.c. From there on I have no
trouble surfing the website. Also, if I go directly to
https://online.citibank.com it works every time.



* ) With the linux one:

With this one I have never gotten through (maybe I haven't tried as
many times); I also get the Connection timed out and TCP_MISS/504, and
also a Zero Sized Reply and a TCP_MISS/502:

The browser shows:


While trying to retrieve the URL: http://www.citibank.com/
The following error was encountered:
Zero Sized Reply
Squid did not receive any data for this request.

And in the logs I get:
03:01:03 150396 10.5.5.236 TCP_MISS/502 1334 GET
http://www.citibank.com/ - DIRECT/192.193.103.222 text/html
03:03:35 151215 10.5.5.236 TCP_MISS/504 1477 GET
http://www.citibank.com/favicon.ico - DIRECT/192.193.103.222 text/html




These seem to be all symptoms of either TCP connection setup problems 
or ICMP blocking. The Zero Sized Reply hints that it is more likely 
MTU problems and ICMP blocking somewhere: Squid can locate and send 
data to the server, but nothing comes back (i.e. the packet sent was 
too big, but the ICMP reply telling the Squid machine to send smaller 
packets never got delivered).
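(One way to test that theory from the Squid box — the ping flags below are the Linux iputils ones, and the destination host is just the example from this thread:)

```shell
# Largest ICMP payload that fits a 1500-byte Ethernet MTU:
# 1500 - 20 (IPv4 header) - 8 (ICMP header) = 1472 bytes.
payload=$((1500 - 20 - 8))
echo "max unfragmented payload for MTU 1500: $payload"

# With the network up (not run here), send don't-fragment pings:
#   ping -M do -c 3 -s $payload www.citibank.com
# If 1472-byte pings vanish but smaller ones (e.g. -s 1400) get
# replies, PMTU discovery is being blocked somewhere on the path.
```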


Amos


[squid-users] about http protocol version

2011-11-08 Thread Mikio Kishi
Hi,

Squid 3.1 seems to respond with HTTP/1.0 to the client even if the
server responds with HTTP/1.1 to squid. Why?

For your information, Squid 3.2 responds with HTTP/1.1.

Sincerely,

--
Mikio Kishi