Re: [squid-users] Squid and Hangout (google) problem

2013-05-28 Thread Carlo Filippetto
2013/5/28 Amos Jeffries :
> On 28/05/2013 2:38 a.m., Carlo Filippetto wrote:
>>
>> I have a Squid server with NTLM_auth. Now we want to use Google
>> Hangouts to make video conferences, but Squid stops the video
>> session with a 407 (TCP_DENIED).
>>
>> Can someone help me to solve this problem?
>
>
> "407 Proxy Authentication Requried" gets sent to the client when no auth
> credentials are delivered by it to the proxy. NTLM is a nasty protocol which
> requires several request-reply sequences involving 407 status code and
> various tokens to be sent to-and-fro. This may be what you are spotting in
> the logs.
>
> This might also help ...
> http://answers.awesomium.com/questions/1343/using-ntlm-to-authenticate.html
>
>
> Amos


Dear Amos,
my problem is not NTLM; I have been using NTLM for more than 5 years.
The problem is the Hangouts software: if you try to have a conversation (not video)
it works, but if you try to make a video conference it doesn't work.
Maybe it is the encoding?

Thank you


[squid-users] Re: what is best method to connect two squid servers on the same router?

2013-05-28 Thread Ahmad
Hi all,

Glad to say that the problem has been solved :)




It was a Squid issue, not a Cisco issue;
I mean that my config in the diagram above was 100% correct.

This is an issue when dealing with multilayer switches and their relation with
WCCP.

I found that using the WCCP L2 assignment/forwarding method rather than
the hash method works better; it solved the high CPU utilization, and I
could finally operate the two Squid cache servers on the same Cisco
router.
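
For reference, the knobs involved on the Squid side live in squid.conf. A
minimal sketch (the router IP and service number are placeholders, and the
exact keywords accepted by each *_method directive depend on the Squid
version):

# Hypothetical WCCPv2 settings for a multilayer switch/router
wccp2_router 192.0.2.1
wccp2_service standard 0
# L2 forwarding/return instead of GRE, and mask instead of hash
# assignment, lets the switch do the redirection in hardware
wccp2_forwarding_method l2
wccp2_return_method l2
wccp2_assignment_method mask

A matching redirection/assignment method has to be configured on the Cisco
side as well.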

Thanks a lot for the interest and help.

regards



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/what-is-best-method-to-connect-two-squid-servers-on-the-same-router-tp4659922p4660273.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: TPROXY

2013-05-28 Thread alvarogp
alvarogp wrote
> Hello,
> 
> I have the following configuration:
> - Ubuntu 12.04 with 2 interfaces eth0 (local) and eth1 (internet access)
> - IPtables 1.4.12
> - Squid 3.3.4 with Tproxy
>  
> With Iptables I have configured the proxy to forward the traffic from the
> local LAN (eth0) to the outside world (eth1). The configuration is:
> 
> iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
> iptables -A FORWARD -i eth1 -o eth0 -m state --state RELATED,ESTABLISHED
> -j ACCEPT
> iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
> echo 1 > /proc/sys/net/ipv4/ip_forward
> 
> To configure and install Tproxy I have followed the tutorial described in
> the wiki:
> 
> ./configure --enable-linux-netfilter
> 
> net.ipv4.ip_forward = 1
> net.ipv4.conf.default.rp_filter = 0
> net.ipv4.conf.all.rp_filter = 0
> net.ipv4.conf.eth0.rp_filter = 0
> 
> iptables -t mangle -N DIVERT
> iptables -t mangle -A DIVERT -j MARK --set-mark 1
> iptables -t mangle -A DIVERT -j ACCEPT
> iptables  -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
> iptables  -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY
> --tproxy-mark 0x1/0x1 --on-port 3129
> 
> For squid.conf, I have kept my default configuration, adding to
> it:
> 
> http_port 3128
> http_port 3129 tproxy
> 
> If Squid is running, the packets from the local LAN are routed correctly
> and the web pages are displayed perfectly. The problem I have is that these
> accesses are not reflected in access.log and cache.log, so could it be
> possible that Squid is not caching any cacheable content?
> 
> I read one other post from a guy who had a very similar problem:
> 
> http://squid-web-proxy-cache.1019090.n4.nabble.com/squid-TPROXY-and-empty-access-log-td1036667.html
> 
> If I do the same as him, specifying the proxy in the user's browser,
> activity (an ABORTED request for each site I have tried to access) is
> reflected in access.log. The timeout expires and the local LAN users
> cannot access the Internet.
> 
> If you need any more information, please tell me.
> 
> Thank you in advance,
> 
> Alvaro

Hi,

Does anyone know of a configuration guide for Squid with TPROXY in the
wiki? The only three I know of are:

http://wiki.squid-cache.org/ConfigExamples/FullyTransparentWithTPROXY
http://wiki.squid-cache.org/ConfigExamples/UbuntuTproxy4Wccp2#Linux_and_Squid_Configuration
http://wiki.squid-cache.org/Features/Tproxy4

I have followed the steps of the last one. 

Is it possible that I am confused and Squid is not able to cache when it
is working with TPROXY?

Thank you in advance.
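
One point of reference (not a confirmed diagnosis): the Features/Tproxy4
page also includes a policy-routing step that does not appear in the rules
quoted above. Without it, packets marked by the TPROXY rule are not
delivered to the local socket:

ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100

(Table 100 and mark 1 are just the example values used in the wiki,
matching the --tproxy-mark 0x1/0x1 rule above.)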

 



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TPROXY-tp4658393p4660274.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Re: TPROXY

2013-05-28 Thread Amm

> From: alvarogp 
>To: squid-users@squid-cache.org 
>Sent: Tuesday, 28 May 2013 1:28 PM
>Subject: [squid-users] Re: TPROXY
> 
>
>alvarogp wrote
>> Hello,
>> 
>> I have the following configuration:
>> - Ubuntu 12.04 with 2 interfaces eth0 (local) and eth1 (internet access)
>> - IPtables 1.4.12
>> - Squid 3.3.4 with Tproxy
>>  
>> With Iptables I have configured the proxy to forward the traffic from the
>> local LAN (eth0) to the outside world (eth1). The configuration is:
>> 
>> iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
>> iptables -A FORWARD -i eth1 -o eth0 -m state --state RELATED,ESTABLISHED
>> -j ACCEPT
>> iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
>> echo 1 > /proc/sys/net/ipv4/ip_forward
>> 
>> To configure and install Tproxy I have followed the tutorial described in
>> the wiki:
>> 
>> ./configure --enable-linux-netfilter
>> 
>> net.ipv4.ip_forward = 1
>> net.ipv4.conf.default.rp_filter = 0
>> net.ipv4.conf.all.rp_filter = 0
>> net.ipv4.conf.eth0.rp_filter = 0
>> 
>> iptables -t mangle -N DIVERT
>> iptables -t mangle -A DIVERT -j MARK --set-mark 1
>> iptables -t mangle -A DIVERT -j ACCEPT
>> iptables  -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
>> iptables  -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY
>> --tproxy-mark 0x1/0x1 --on-port 3129
>> 
>> For squid.conf, I have kept my default configuration, adding to
>> it:
>> 
>> http_port 3128
>> http_port 3129 tproxy
>> 
>> If Squid is running, the packets from the local LAN are routed correctly
>> and the web pages are displayed perfectly. The problem I have is that these
>> accesses are not reflected in access.log and cache.log, so could it be
>> possible that Squid is not caching any cacheable content?



I have had the exact same problem when I was trying TPROXY with a similar
configuration.

Squid would route packets but not log anything in access.log.

If I stop Squid then clients can't access any website (this indicates that
packets are indeed routed through Squid).

I gave up later on. I might give it a try again after a few days.


Amm.



Re: [squid-users] Re: A lot of TCP_REFRESH_UNMODIFIED after upgrading squid

2013-05-28 Thread Alex Domoradov
On Tue, May 28, 2013 at 9:37 AM, Amos Jeffries  wrote:
> On 28/05/2013 5:17 a.m., Alex Domoradov wrote:
>>
>> Any suggestions?
>
>
> One thing you should be aware of is that between 2.6 and 3.3 Squid gained a
> huge amount of HTTP/1.1 feature support, including a lot of caching and
> revalidation changes that were not in HTTP/1.0. REFRESH occurring a lot more
> is a side effect of several of those changes.
I see. At the moment I have upgraded only the parent Squid; the main Squid
still uses version 2.6.STABLE21. Because it serves about 300 online clients
and carries a lot of local changes, it's a little difficult to upgrade. But
that is another story :)

>> On Sun, May 26, 2013 at 10:00 PM, Alex Domoradov wrote:
>>>
>>> Hello all, I am seeing strange behavior after upgrading Squid from
>>> 2.6.STABLE21 to 3.3.5 on the parent proxy server.
>>>
>>> I have a file in the cache
>>>
>>> # zcat /var/log/squid/store.log-20130519.gz | grep 0295
>>> 1368817711.745 SWAPOUT 00 0295 83D4FBB382014271606DD58FADD64E98
>>> 200 1368817554 1368815579-1 image/vnd.adobe.photoshop
>>> 635342245/635342245 GET
>>> http://storage.example.net/b4bf4e39486f405346adbd09505767af-index_v2.psd
>>>
>>> As we can see in the access log from the main Squid, the first attempt
>>> from the client (192.168.204.208) was a cache miss:
>>>
>>> # cat /var/log/squid/access-alt.log | grep
>>> b4bf4e39486f405346adbd09505767af
>>> 1368817711.751 158444 192.168.204.208 TCP_MISS/200 635342846 GET
>>> http://storage.example.net/b4bf4e39486f405346adbd09505767af-index_v2.psd
>>> - FIRST_PARENT_MISS/192.168.220.2 image/vnd.adobe.photoshop
>>>
>>> and the file was downloaded directly from the origin server:
>>> # zcat /var/log/squid/access.log-20130519.gz | grep
>>> b4bf4e39486f405346adbd09505767af
>>> 1368817552.345  0 192.168.220.1 UDP_MISS/000 94 ICP_QUERY
>>> http://storage.example.net/b4bf4e39486f405346adbd09505767af-index_v2.psd
>>> - NONE/- -
>>> 1368817711.745 158442 192.168.220.1 TCP_MISS/200 635342769 GET
>>> http://storage.example.net/b4bf4e39486f405346adbd09505767af-index_v2.psd
>>> - DIRECT/205.251.242.180 image/vnd.adobe.photoshop
>
>
> Where did this log snippet come from? the child or parent Squid?
The first one is from the main (child) Squid;
the second from the parent Squid.

>>> Later another client (192.168.203.121) tried to download the same
>>> file and got a hit in the parent cache.
>>>
>>> 1369057070.790  79814 192.168.203.121 TCP_MISS/200 635342857 GET
>>> http://storage.example.net/b4bf4e39486f405346adbd09505767af-index_v2.psd
>>> - PARENT_HIT/192.168.220.2 image/vnd.adobe.photoshop
>>>
>>> So it seems that everything worked fine. Today, after upgrading Squid on the
>>> parent from 2.6 to 3.3.5, I tried to download the same file again:
>>>
>>> # curl -v -O
>>> http://storage.example.net/b4bf4e39486f405346adbd09505767af-index_v2.psd
>>> * About to connect() to storage.example.net port 80 (#0)
>>> *   Trying xxx.xxx.xxx.198... connected
>>> * Connected to storage.example.net (xxx.xxx.xxx.198) port 80 (#0)

 GET /b4bf4e39486f405346adbd09505767af-index_v2.psd HTTP/1.1
 User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7
 NSS/3.14.0.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
 Host: storage.example.net
 Accept: */*

>>>   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
>>>                                   Dload  Upload   Total   Spent    Left  Speed
>>>   0     0    0     0    0     0       0      0 --:--:--  0:00:01 --:--:--     0
>>> * HTTP 1.0, assume close after body
>>> < HTTP/1.0 200 OK
>>> < Last-Modified: Fri, 17 May 2013 18:32:59 GMT
>>> < Accept-Ranges: bytes
>>> < Content-Type: image/vnd.adobe.photoshop
>>> < Content-Length: 635342245
>>> < x-amz-id-2:
>>> +HuykoFgicH0hUFZQIBTU1AS8OZ7bN56vmcNxHz+1bYD8QOAwFseLuMQQElW4DZX
>>> < x-amz-request-id: 63F9E75242B5C0B9
>>> < Date: Sun, 26 May 2013 18:34:32 GMT
>>> < ETag: "5b98acdf5929a2344aa9c3bbee870943"
>>> < Server: AmazonS3
>>> < Age: 0
>>> < X-Cache: HIT from svn-parent.example.lan
>>> < X-Cache-Lookup: HIT from svn-parent.example.lan:3128
>>> < Via: 1.1 svn-parent.example.lan (squid/3.3.5)
>>> < X-Cache: MISS from squid.example.lan
>>> < X-Cache-Lookup: MISS from squid.example.lan:3129
>>> < Connection: close
>>> <
>>> { [data not shown]
>>> 100  605M  100  605M    0     0  82.9M      0  0:00:07  0:00:07 --:--:--  110M
>>> * Closing connection #0
>>>
>>> And in the log I see the following lines
>>>
>>> main squid
>>> 1369593277.244   5787 192.168.210.102 TCP_MISS/200 635342835 GET
>>> http://storage.example.net/b4bf4e39486f405346adbd09505767af-index_v2.psd
>>> - FIRST_PARENT_MISS/192.168.220.2 image/vnd.adobe.photoshop
>>>
>>> parent squid (3.3.5)
>>> # cat /var/log/squid/access.log | grep b4bf4e39486f405346adbd09505767af
>>> 1369593271.465  0 192.168.220.1 UDP_MISS/000 94 ICP_QUERY
>>> http://storage.psd2html.com/b4bf4e39486f405346adbd09505767af-index_v2.psd
>>> - HIER_NONE/- -
>>> 1369593277.206   5741 192.168.220.1 TCP_REFRESH_UNMODIFIED/200
>>
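
(For context: TCP_REFRESH_UNMODIFIED means the parent still had the object
cached but considered it stale, revalidated it with the origin, and received
"304 Not Modified" rather than serving a plain HIT. A rough way to see the
same revalidation by hand, reusing the ETag from the transcript above
(hedged; which hop answers depends on the proxy chain):

curl -s -D - -o /dev/null \
  -H 'If-None-Match: "5b98acdf5929a2344aa9c3bbee870943"' \
  http://storage.example.net/b4bf4e39486f405346adbd09505767af-index_v2.psd

An unmodified object should come back as a 304 with no body.)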

Re: [squid-users] Squid and Hangout (google) problem

2013-05-28 Thread Amos Jeffries

On 28/05/2013 7:03 p.m., Carlo Filippetto wrote:

2013/5/28 Amos Jeffries:

On 28/05/2013 2:38 a.m., Carlo Filippetto wrote:

I have a Squid server with NTLM_auth. Now we want to use Google
Hangouts to make video conferences, but Squid stops the video
session with a 407 (TCP_DENIED).

Can someone help me to solve this problem?


"407 Proxy Authentication Requried" gets sent to the client when no auth
credentials are delivered by it to the proxy. NTLM is a nasty protocol which
requires several request-reply sequences involving 407 status code and
various tokens to be sent to-and-fro. This may be what you are spotting in
the logs.

This might also help ...
http://answers.awesomium.com/questions/1343/using-ntlm-to-authenticate.html


Amos


Dear Amos,
my problem is not NTLM; I have been using NTLM for more than 5 years.
The problem is the Hangouts software: if you try to have a conversation (not video)
it works, but if you try to make a video conference it doesn't work.
Maybe it is the encoding?


Yes, as you say the problem is not NTLM itself. And since you have been
using it successfully for so long, probably not Squid either.
You should therefore take this up with the Hangouts software help
channels, not the Squid ones.


And no, encoding does not matter to Squid unless you have an old Squid
which is still version 3.1 or older (HTTP/1.0 features) and Hangouts
requires HTTP/1.1 features for the stream. The solution there should be
obvious.


Amos


Re: [squid-users] Re: TPROXY

2013-05-28 Thread Amos Jeffries

On 28/05/2013 8:11 p.m., Amm wrote:



From: alvarogp 
To: squid-users@squid-cache.org
Sent: Tuesday, 28 May 2013 1:28 PM
Subject: [squid-users] Re: TPROXY


alvarogp wrote

Hello,

I have the following configuration:
- Ubuntu 12.04 with 2 interfaces eth0 (local) and eth1 (internet access)
- IPtables 1.4.12
- Squid 3.3.4 with Tproxy
   
With Iptables I have configured the proxy to forward the traffic from the
local LAN (eth0) to the outside world (eth1). The configuration is:

iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
iptables -A FORWARD -i eth1 -o eth0 -m state --state RELATED,ESTABLISHED
-j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
echo 1 > /proc/sys/net/ipv4/ip_forward

To configure and install Tproxy I have followed the tutorial described in
the wiki:

./configure --enable-linux-netfilter

net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.eth0.rp_filter = 0

iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables  -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables  -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY
--tproxy-mark 0x1/0x1 --on-port 3129

For squid.conf, I have kept my default configuration, adding to
it:

http_port 3128
http_port 3129 tproxy

If Squid is running, the packets from the local LAN are routed correctly
and the web pages are displayed perfectly. The problem I have is that these
accesses are not reflected in access.log and cache.log, so could it be
possible that Squid is not caching any cacheable content?

I have had the exact same problem when I was trying TPROXY with a similar
configuration.

Squid would route packets but not log anything in access.log.

If I stop Squid then clients can't access any website (this indicates that
packets are indeed routed through Squid).


An empty access.log would indicate that none of them are actually making it
to the Squid process.


Perhaps the Ubuntu kernel version has a bug which makes the packets
work when *some* process is listening on the required port, but the
packets are actually not getting there.


Or perhaps the TCP packets are delivering the HTTP request to Squid and
Squid is relaying it, but the response is not going back to Squid (it goes
directly back to the client). In that event Squid would wait for some time
(read/write timeouts are 15 minutes long) before logging the failed HTTP
transaction. That could be caused by some bad configuration on a router
outside of the Squid machine.
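
A rough way to tell those two cases apart (a sketch; interface names follow
the configuration quoted above) is to watch both sides of the box while one
client browses:

tcpdump -ni eth0 'tcp port 80'    # client-facing side
tcpdump -ni eth1 'tcp port 80'    # internet-facing side

If the requests go out on eth1 but the matching replies never come back
through the Squid box, the return path is asymmetric and the second scenario
applies.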


Amos


[squid-users] Re: TPROXY

2013-05-28 Thread alvarogp
Amos Jeffries-2 wrote
> On 28/05/2013 8:11 p.m., Amm wrote:
>> 
>>> From: alvarogp
>>> To: squid-users@squid-cache.org
>>> Sent: Tuesday, 28 May 2013 1:28 PM
>>> Subject: [squid-users] Re: TPROXY
>>>
>>>
>>> alvarogp wrote
 Hello,

 I have the following configuration:
 - Ubuntu 12.04 with 2 interfaces eth0 (local) and eth1 (internet
 access)
 - IPtables 1.4.12
 - Squid 3.3.4 with Tproxy

 With Iptables I have configured the proxy to forward the traffic from the
 local LAN (eth0) to the outside world (eth1). The configuration is:

 iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
 iptables -A FORWARD -i eth1 -o eth0 -m state --state
 RELATED,ESTABLISHED
 -j ACCEPT
 iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
 echo 1 > /proc/sys/net/ipv4/ip_forward

 To configure and install Tproxy I have followed the tutorial described
 in
 the wiki:

 ./configure --enable-linux-netfilter

 net.ipv4.ip_forward = 1
 net.ipv4.conf.default.rp_filter = 0
 net.ipv4.conf.all.rp_filter = 0
 net.ipv4.conf.eth0.rp_filter = 0

 iptables -t mangle -N DIVERT
 iptables -t mangle -A DIVERT -j MARK --set-mark 1
 iptables -t mangle -A DIVERT -j ACCEPT
 iptables  -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
 iptables  -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY
 --tproxy-mark 0x1/0x1 --on-port 3129

 For squid.conf, I have kept my default configuration, adding to
 it:

 http_port 3128
 http_port 3129 tproxy

 If Squid is running, the packets from the local LAN are routed correctly
 and the web pages are displayed perfectly. The problem I have is that these
 accesses are not reflected in access.log and cache.log, so could it be
 possible that Squid is not caching any cacheable content?
>> I have had the exact same problem when I was trying TPROXY with a similar
>> configuration.
>>
>> Squid would route packets but not log anything in access.log.
>>
>> If I stop Squid then clients can't access any website (this indicates
>> that packets are indeed routed through Squid).
> 
> An empty access.log would indicate that none of them are actually making
> it to the Squid process.
> 
> Perhaps the Ubuntu kernel version has a bug which makes the packets 
> work when *some* process is listening on the required port, but the 
> packets are actually not getting there.
> 
> Or perhaps the TCP packets are delivering the HTTP request to Squid and 
> Squid is relaying it, but the response is not going back to Squid (it goes 
> directly back to the client). In that event Squid would wait for some time 
> (read/write timeouts are 15 minutes long) before logging the failed HTTP 
> transaction. That could be caused by some bad configuration on a router 
> outside of the Squid machine.
> 
> Amos

Thank you Amos, I will try a different configuration in that case.

Alvaro



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/TPROXY-tp4658393p4660279.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] HTTPS intercept sent to cache_peer

2013-05-28 Thread Karl Hiramoto

Hi,

I'm trying to set up Squid as a load balancer, providing redundancy across
other anonymous proxies. Everything works fine for HTTP, but when trying to
use HTTPS, Squid falls back to HTTP. Some sites don't allow you to browse
or log in without HTTPS.


My Setup is:

                      /--->  AnonProxy1 --->  Final destination
Client ---> MyProxy --*--->  AnonProxy2 --->  Final destination
                      \--->  AnonProxy3 --->  Final destination




Ideally, between Squid (MyProxy) and the AnonProxy servers I'd like an HTTP
CONNECT (RFC 2616) tunnel to be set up. Does anyone have an example
configuration for this?
If I set up my client to connect directly to AnonProxy1, HTTP and
HTTPS work fine. I don't have any control over, or ability to change, the
configuration of the AnonProxy servers.




acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1

acl SSL_ports port 443
acl CONNECT method CONNECT

http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access allow localhost
http_access deny all
http_port 3128

hierarchy_stoplist cgi-bin ?


coredump_dir /var/spool/squid

refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?)  0    0%      0
refresh_pattern .               0       20%     4320

# anonymous proxy cache peers
cache_peer X.X.1.1 parent 8800 0 round-robin
cache_peer X.X.2.2 parent 8800 0 round-robin
cache_peer X.X.3.3 parent 8800 0 round-robin

http_port 3129 intercept
https_port 3130 intercept key=/etc/squid/squid.key cert=/etc/squid/squid.crt
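
Not an authoritative fix, but for the explicit-proxy side of the question
(clients sending CONNECT to MyProxy on 3128), a minimal sketch that forces
everything, including CONNECT tunnels, through the parents instead of going
direct would be:

# Hypothetical addition to the config above: never let Squid go direct,
# so CONNECT requests are relayed to one of the round-robin parents
never_direct allow all

Note this only covers traffic where the client itself issues CONNECT;
intercepted port-443 traffic arriving on the https_port is a separate
problem.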


Thanks,

karl


RE: [squid-users] Re: Heap Policy

2013-05-28 Thread Farooq Bhatti


> Ideally the policy would have been a cache_dir parameter. 

Agreed too.

Farooq

-Original Message-
From: Alex Rousskov [mailto:rouss...@measurement-factory.com] 
Sent: Thursday, May 23, 2013 12:38 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Re: Heap Policy

On 05/22/2013 06:10 AM, RW wrote:
> On Mon, 13 May 2013 12:47:31 +0500
> Farooq Bhatti wrote:
>> I got it; the problem is a bug in Squid, as it is required to define
>> the policy before cache_dir in squid.conf.

> IMO it's a feature rather than a bug. 

It is a bug with side effects that some may [ab]use to work around other
problems :-).


> If you take a look at the output of squidclient mgr:storedir you 
> posted, you'll see that the policy is a property of the "store 
> directory" not a global setting. Each cache_dir line uses the most 
> recently defined policy, or the default of lru.

> I think it might be a good idea to warn about cache_replacement_policy 
> lines that don't affect a cache.

Eventually, Squid will not configure cache_dirs until the entire
configuration has been parsed, removing this bug and its side effects (along
with other hidden dependencies like that). It would be better to spend
cycles implementing that than implementing the above warning IMO.
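
For reference, that ordering dependency looks like this in squid.conf (a
sketch; the policy names are the documented ones, paths and sizes are
placeholders):

# The policy must precede the cache_dir lines it should apply to
cache_replacement_policy heap LFUDA
cache_dir ufs /var/spool/squid/disk1 10000 16 256
cache_dir ufs /var/spool/squid/disk2 10000 16 256
# A cache_dir placed before the policy line silently keeps the default (lru)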


> Ideally the policy would have been a cache_dir parameter. 

Agreed.


Cheers,

Alex.









Re: [squid-users] Squid memory usage

2013-05-28 Thread Alex Rousskov
On 05/27/2013 09:59 PM, Nathan Hoad wrote:
> I'm running Squid 3.2.9, and I am seeing huge memory and CPU usage on
> busy sites. The CPU usage is expected due to the level of traffic, but
> the memory usage not so much.
> 
>   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
> 15631 squid 25   0 2073m 2.0g 5464 R 87.4 20.3 230:11.58
> /usr/sbin/squid -N -f /etc/squid/squid.conf
> 
> cache_mem is set to 1024 MB. I would not expect Squid to need an
> additional GB of memory to manage its internals.
> 
> The output of the manager's mem page can be downloaded here:
> 
> http://www.getoffmalawn.com/static/squidmgr.mem
> 
> What can I do to reduce this memory usage? It looks like a memory leak to me.


Memory leaks increase memory usage over time. Does that happen in your
environment? If you do not know, you may want to start logging Squid
memory usage every hour or so.

Please note that the increase may come in bursts corresponding to
intervals of high usage or unusual traffic. The memory consumption may
even go down between those bursts.
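
A very simple way to do that (a sketch; the cache manager access details and
the fields worth tracking depend on the local setup) is a cron job that
appends a timestamped sample:

# Hypothetical hourly cron entry: record process RSS plus Squid's own accounting
0 * * * * (date; ps -C squid -o pid=,rss=,vsz=; squidclient mgr:mem | head -40) >> /var/log/squid/mem-usage.log

A real leak shows up as a trend that keeps climbing across days of comparable
traffic, rather than a plateau that tracks the busy hours.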


Thank you,

Alex.



[squid-users] Re: Squid CPU 100% infinite loop

2013-05-28 Thread Stuart Henderson
On 2013-05-17, Alex Rousskov  wrote:
> On 05/17/2013 01:28 PM, Loïc BLOT wrote:
>
>> I have found the problem. In fact, it's the problem mentioned in my
>> last mail. Squid's FD limit was reached, but Squid doesn't mention,
>> every time the freeze appears, that it's an FD limit
>> problem, so the debugging was very difficult.
>
> Squid should warn when it runs out of FDs. If it does not, it is a
> bug. If you can reproduce this, please open a bug report in bugzilla
> and post relevant logs there.
>
> FWIW, I cannot confirm or deny whether reaching FD limit causes what
> you call an infinite loop -- there was not enough information in your
> emails to do that. However, if reaching FD limit causes high CPU
> usage, it is a [minor] bug.

I've just hit this one; ktrace shows that it's in a tight loop doing
sched_yield(). I'll try to reproduce it on a non-production system and open
a ticket if I get more details.




Re: [squid-users] Re: TPROXY

2013-05-28 Thread Amm




> From: Amos Jeffries 
>To: squid-users@squid-cache.org 
>Sent: Tuesday, 28 May 2013 4:15 PM
>Subject: Re: [squid-users] Re: TPROXY
> 
>
>On 28/05/2013 8:11 p.m., Amm wrote:


>> 
>>> From: alvarogp 
>>> To: squid-users@squid-cache.org
>>> Sent: Tuesday, 28 May 2013 1:28 PM
>>> Subject: [squid-users] Re: TPROXY
>>>
>>>
>>> alvarogp wrote:

 If Squid is running, the packets from the local LAN are routed correctly
 and the web pages are displayed perfectly. The problem I have is that these
 accesses are not reflected in access.log and cache.log, so could it be
 possible that Squid is not caching any cacheable content?




>> I have had the exact same problem when I was trying TPROXY with a similar
>> configuration.
>>
>> Squid would route packets but not log anything in access.log.
>>
>> If I stop Squid then clients can't access any website (this indicates that
>> packets are indeed routed through Squid).
>
>An empty access.log would indicate that none of them are actually making it
>to the Squid process.
>

>Perhaps the Ubuntu kernel version has a bug which makes the packets
>work when *some* process is listening on the required port, but the
>packets are actually not getting there.


Actually I had tried it on Fedora 16; the kernel version was 3.6.X.
So now this bug is in Ubuntu as well as Fedora?

I don't remember the Squid version, but it was the 3.2 series.


>Or perhaps the TCP packets are delivering the HTTP request to Squid and
>Squid is relaying it, but the response is not going back to Squid (it goes
>directly back to the client). In that event Squid would wait for some time
>(read/write timeouts are 15 minutes long) before logging the failed HTTP
>transaction. That could be caused by some bad configuration on a router
>outside of the Squid machine.


Maybe; I don't know what was happening, as I didn't give it much thought at
the time.

I will try again this weekend and report back. This time I will wait for 15
minutes.


Thanks

Amm.


Re: [squid-users] Re: Squid CPU 100% infinite loop

2013-05-28 Thread Loïc BLOT
For me the problem is resolved.
It happens when Squid reaches the maximum FD count: Squid has more and more
requests to process and then it becomes blocked and very, very slow. I have
increased the system FD limit to 16K and the Squid FD limit to 10K, and I
haven't had the problem since this modification.
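
For reference, a rough sketch of the two places that limit gets raised
(values as described above; the exact mechanism for the system-wide limit
varies by OS and init system):

# System side, e.g. /etc/security/limits.conf on Linux (hypothetical entries)
squid  soft  nofile  16384
squid  hard  nofile  16384

# Squid side, in squid.conf
max_filedescriptors 10240

cache.log reports the number Squid actually got at startup ("With N file
descriptors available").
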
-- 
Best regards,
Loïc BLOT, 
UNIX systems, security and network expert
http://www.unix-experience.fr



Le mardi 28 mai 2013 à 16:01 +, Stuart Henderson a écrit :
> On 2013-05-17, Alex Rousskov  wrote:
> > On 05/17/2013 01:28 PM, Loïc BLOT wrote:
> >
> >> I have found the problem. In fact, it's the problem mentioned in my
> >> last mail. Squid's FD limit was reached, but Squid doesn't mention,
> >> every time the freeze appears, that it's an FD limit
> >> problem, so the debugging was very difficult.
> >
> > Squid should warn when it runs out of FDs. If it does not, it is a
> > bug. If you can reproduce this, please open a bug report in bugzilla
> > and post relevant logs there.
> >
> > FWIW, I cannot confirm or deny whether reaching FD limit causes what
> > you call an infinite loop -- there was not enough information in your
> > emails to do that. However, if reaching FD limit causes high CPU
> > usage, it is a [minor] bug.
> 
> I've just hit this one; ktrace shows that it's in a tight loop doing
> sched_yield(). I'll try to reproduce it on a non-production system and open
> a ticket if I get more details.
> 
> 




Re: [squid-users] Squid memory usage

2013-05-28 Thread Nathan Hoad
On Tue, May 28, 2013 at 3:23 PM, Amos Jeffries  wrote:
> On 28/05/2013 3:59 p.m., Nathan Hoad wrote:
>
> I take it you are referring to the 2.0g resident size?

That is what I'm referring to, yes - the resident size has increased
to 2.5g since my previous mail, virtual to 2.6g.

>
> 1GB is within the reasonable use limit for a fully loaded Squid under peak
> traffic. The resident size reported is usually the biggest "ever" size of
> memory usage by the process.
>
> FWIW: The memory report shows about 324MB being tracked by Squid as
> currently in use for other things than cache_mem with 550 active clients
> doing 117 transactions at present. The client transaction related pools show
> that the current values are 1/3 of peak traffic, so 3x 360MB ==> ~1GB under
> peak traffic appears entirely possible for your Squid.

Out of interest, how did you come to the 324MB? I'd be interested in
learning how to read the output a bit better :)

>
> HTH
> Amos

On Wed, May 29, 2013 at 1:56 AM, Alex Rousskov
 wrote:
> On 05/27/2013 09:59 PM, Nathan Hoad wrote:
>
>
> Memory leaks increase memory usage over time. Does that happen in your
> environment? If you do not know, you may want to start logging Squid
> memory usage every hour or so.

I am happy to start doing this, but given that memory usage would
increase over time through general use anyway, I'm unsure how I could
differentiate between expected memory usage increases and a memory
leak.

Nathan.


[squid-users] Diffence between NTLM in 2.6 compared to 3.3.5 - Citrix ?

2013-05-28 Thread Kris Glynn
I've noticed that since upgrading from Squid 2.6 to Squid 3.3.5, the Citrix ICA
Client will no longer authenticate via NTLM to Squid 3.3.5 - the ICA client
just keeps popping up asking for NTLM auth - at no stage does it fall back to
Basic auth.

Every other NTLM-aware application, whether it be IE, Firefox, Chrome or even
curl, works fine and can authenticate via NTLM with no problems; however, the
Citrix ICA client just won't work.

If I change back to squid 2.6 it works fine. Both are using exactly the same 
squid.conf with...

# Pure NTLM Auth - fallback
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 60 startup=15 idle=10
auth_param ntlm keep_alive off

# BASIC Auth - fallback
auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic children 10
auth_param basic realm Internet Access
auth_param basic credentialsttl 1 hours

Has anyone else experienced this?
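
Not a diagnosis, but one way to rule out the helpers themselves is to drive
them by hand (ntlm_auth here is Samba's helper, as in the config above;
credentials are placeholders):

# Basic helper: type "username password" on stdin and expect OK / ERR
/usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic

# Or check a single credential pair against winbind directly
ntlm_auth --username=testuser --password=secret --domain=EXAMPLE

If the helpers behave the same under 2.6 and 3.3.5, the difference is more
likely in how the ICA client and the newer Squid negotiate the NTLM handshake
than in the helpers themselves.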













[squid-users] tproxy on squid 2.7 errors

2013-05-28 Thread neeraj kharbanda
Hi,
This is my scenario:

router (Linux, eth0) ... eth2 (Lusca) ... eth1 (WAN)

Policy routing is done for the clients to reach Lusca (clients are on
private IPs, 172.16.x.x).
Lusca can ping both the clients and the Internet.
TPROXY redirection is done as per the following:

iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -s 172.16.10.97 -m socket -j DIVERT
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY
--tproxy-mark 0x1/0x1 --on-port 3129

ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100


squid.conf


http_port 127.0.0.1:3128
http_port 0.0.0.0:3129 tproxy

but browsing gives this error:

Invalid Request

Some aspect of the HTTP Request is invalid. Possible problems:

Missing or unknown request method
Missing URL
Missing HTTP Identifier (HTTP/1.0)
Request is too large
Content-Length missing for POST or PUT requests
Illegal character in hostname; underscores are not allowed
Squid logs:
[21/Apr/2013:13:04:42 +0530] "GET error:invalid-request HTTP/0.0" 400
3334 TCP_DENIED:NONE

It works fine with iptables DNAT and the transparent directives.
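
A hedged next step would be to raise the client-side/HTTP parsing debug
levels and look at what actually arrives on the tproxy port (section numbers
are from Squid's debug-sections list and may differ slightly in Lusca):

# squid.conf: more detail from client-side (33) and HTTP (11) code
debug_options ALL,1 33,3 11,3

# and confirm what is really being delivered to the box
tcpdump -ni eth2 -A -c 20 'tcp port 80'

That usually shows whether the request line Squid sees is genuinely malformed
or whether the interception is mangling it.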

--

Nettlynx Networks