[squid-users] Squid 3.5.10 - slow upload speed

2018-04-25 Thread SaRaVanAn
We are using Squid 3.5.10 for caching. It looks like uploads are very slow
when they go through Squid; if we disable Squid, upload speed is good.
When we analyse packet captures, it appears Squid is dividing the HTTP POST
body into multiple segments of 39 bytes each, even though the link is capable
of pushing 10 Mbps. We do not see this issue in squid 3.1. Is there any
known issue in squid 3.5.10 with respect to uploads?
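
For anyone trying to reproduce this, comparing the client-side and server-side
legs of the same upload makes the segmentation visible (a sketch; the interface
names and the 3128 proxy port are assumptions for a typical setup):

tcpdump -i eth0 -s 96 -w client-side.pcap 'tcp port 3128'
tcpdump -i eth1 -s 96 -w server-side.pcap 'tcp port 80'

If the 39-byte segments appear only in the server-side capture, the chopping is
happening inside Squid rather than at the client.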


Re: [squid-users] Intercepting BITS_POST

2016-01-09 Thread Saravanan Coimbatore
Hi Amos,

MSFT uses a handshake mechanism to sync files between the enterprise and the cloud. We 
use squid with ICAP plugins to analyze the data.

The handshake is BITS_POST, which is based on HTTP 1.1. When we enabled the ICAP 
plugin, the request was not going through; we were getting an OTHER_METHOD 
response. We debugged this and fixed it by adding BITS_POST as a valid 
method/verb in Squid. We will be submitting this change to the squid team 
for review.
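
For context, a BITS upload fragment looks roughly like the sketch below (the
header names follow Microsoft's BITS upload protocol; the URL, session id and
byte ranges are illustrative only):

BITS_POST /receive/upload.bin HTTP/1.1
Host: onedrive.example.com
BITS-Packet-Type: Fragment
BITS-Session-Id: {7df0354d-249b-430f-820d-3d2a9bef4931}
Content-Range: bytes 0-65535/1048576
Content-Length: 65536

Since the method token is not one Squid knows, it is parsed as an "other"
method, which matches the OTHER_METHOD response the ICAP side saw.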

Thanks,
Saravanan

On Jan 9, 2016 11:15 PM, Amos Jeffries <squ...@treenet.co.nz> wrote:
On 6/01/2016 2:33 p.m., Saravanan Coimbatore wrote:
> All,
>
> I would like to use Squid Proxy combined with C-ICAP or any other
> mechanism to intercept and analyze files uploaded using BITS_POST in
> OneDrive for MSFT. Is it possible?

What is this "BITS_POST" thing you speak of?

Amos



Re: [squid-users] Refresh pattern issue in squid 3.1.20

2015-12-30 Thread SaRaVanAn
Hi All,
I tried the suggested refresh pattern, but I am still getting TCP_HIT/TCP_MEM_HIT.
It is not getting refreshed after 10 minutes.


*Conf*
refresh_pattern -i ^http://[a-z\-\_\.A-Z0-9]+\.wsj\.(net|com|edu)/ 10
200% 10 override-expire override-lastmod reload-into-ims ignore-reload

*Logs*


Wed Dec 30 21:31:44 2015.976   1915 172.19.131.180 TCP_MISS/200 619 GET
http://s.wsj.net/javascript/pushdownAd.js - DIRECT/184.86.240.217
application/x-javascript
Wed Dec 30 21:31:44 2015.976   1915 172.19.131.180 TCP_MISS/200 667 GET
http://s.wsj.net/static_html_files/pushdownAd.css - DIRECT/184.86.240.217
text/css
Wed Dec 30 21:52:38 2015.577  0 172.19.131.180 TCP_MEM_HIT/200 676 GET
http://s.wsj.net/static_html_files/pushdownAd.css - NONE/- text/css
Wed Dec 30 21:52:38 2015.577  0 172.19.131.180 TCP_MEM_HIT/200 628 GET
http://s.wsj.net/javascript/pushdownAd.js - NONE/- application/x-javascript


I have gone through the packet captures. It looks like the expiry time is
greater than the min time of the refresh_pattern, but I have used the
override options. I am confused whether precedence goes to the expiry time
or the min time; I am not clear on how it works.

Can you guide me on how this works and why the object is not getting
refreshed? I need expert guidance here.


*pushdownAd.css response header timings*

Last-Modified: Mon, 14 Dec 2015 04:37:00 GMT\r\n

Expires: Thu, 31 Dec 2015 01:10:17 GMT\r\n

Date: Wed, 30 Dec 2015 21:31:46 GMT\r\n
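
If the squid.conf.documented note on override-expire still holds for 3.1
("override-expire does not enforce staleness - it only extends freshness /
min"), the timings above would explain the hits. A rough worked reading, my
interpretation only:

Date:    Wed, 30 Dec 2015 21:31:46 GMT   (response entered the cache)
Expires: Thu, 31 Dec 2015 01:10:17 GMT   (~3.6 hours of server-granted freshness)
HIT at:  Wed, 30 Dec 2015 21:52:38 GMT   (age ~21 min, already past the 10-min min/max)

The object stays fresh until its Expires time because the override option
extends freshness rather than forcing staleness at min, so a MEM_HIT at
21 minutes would be the expected outcome.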

Regards,
Saravanan N

On Mon, Dec 28, 2015 at 6:13 AM, Matus UHLAR - fantomas <uh...@fantomas.sk>
wrote:

> On 28.12.15 06:50, Eliezer Croitoru wrote:
>
>> And you can tweak it a bit to something like:
>> refresh_pattern -i ^http://[a-z\-\_\.A-Z0-9]+\.wsj\.(net|com|edu)/
>> 10 200% 10 \
>> override-expire reload-into-ims
>>
>
> - I would avoid the underscore. underscore is not valid character for an
> internet hostname
> - dash at the begin or end of [] will eliminate the need for an underscore
>
> [a-zA-Z0-9.-]+ should do it.
> --
> Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
> Warning: I wish NOT to receive e-mail advertising to this address.
> Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
> "To Boot or not to Boot, that's the question." [WD1270 Caviar]
>


[squid-users] Refresh pattern issue in squid 3.1.20

2015-12-27 Thread SaRaVanAn
Hi,
We are using squid 3.1.20 on our box. We are facing issues configuring
and validating refresh patterns. It looks like squid is not honoring
the refresh patterns properly.


*configuration*
refresh_pattern -i ^http://.wsj./.* 10 200% 10 override-expire
override-lastmod reload-into-ims ignore-reload
refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 480 100% 480 override-expire
override-lastmod reload-into-ims
refresh_pattern -i \.(htm|html|js|css)$ 480 100% 480 override-expire
override-lastmod reload-into-ims

refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320


As per the above refresh pattern, squid should refresh the cache every ten
minutes for "^http://.wsj./.*". But I am always getting either
TCP_HIT or TCP_MEM_HIT even after hours. Why is that? Please find the logs
below.

*Log*

1451261741.309  0 172.19.131.180 TCP_MEM_HIT/200 7635 GET
http://si.wsj.net/public/resources/images/BN-LW172_DRO
http://si.wsj.net/public/resources/images/BN-LV628_CITIOF_BR_20151223170624.jpg
- NONE/- image/jpeg
1451261741.336  0 172.19.131.180 TCP_HIT/200 10595 GET
http://si.wsj.net/public/resources/images/MI-CN459_ELNINO_BR_20151227175637.jpg
- NONE/- image/jpeg
1451261741.343  0 172.19.131.180 TCP_MEM_HIT/200 3986 GET
http://si.wsj.net/public/resources/images/BN-LV846_Tape_0_Z120_20151224145546.jpg
- NONE/- image/jpeg
1451261741.354  0 172.19.131.180 TCP_MEM_HIT/200 3432 GET
http://si.wsj.net/public/resources/images/BN-LV849_Oileco_Z120_20151224150223.jpg
- NONE/- image/jpeg
1451261741.361  0 172.19.131.180 TCP_MEM_HIT/200 2385 GET
http://video-api.wsj.com/api-video/player/v2/css/play_btn_80.png - NONE/-
image/png
1451261741.389  0 172.19.131.180 TCP_MEM_HIT/200 1675 GET
http://video-api.wsj.com/api-video/player/v2/css/play_btn_50.png - NONE/-
image/png
1451261741.407756 172.19.131.180 TCP_HIT/200 534151 GET
http://vir.wsj.net/fp/assets/1e6e09e66457156e0903/SectionPage.js - NONE/-
application/javascript
1451261742.341 51 172.19.131.180 TCP_HIT/200 65486 GET
http://m.wsj.net/video/20151227/122715storms/122715storms_960x540.jpg -
NONE/- image/jpeg
1451261742.428132 172.19.131.180 TCP_HIT/200 53668 GET
http://m.wsj.net/video/20151223/121415barpilots/121415barpilots_960x540.jpg
- NONE/
NE1_D_20151227175102.jpg - NONE/- image/jpeg
1451261741.310  0 172.19.131.180 TCP_MEM_HIT/200 8302 GET
http://si.wsj.net/public/resources/images/BN-LW104_itarge_D_20151227070713.jpg
- NONE/- image/jpeg
1451261741.318  0 172.19.131.180 TCP_HIT/200 12217 GET
http://si.wsj.net/public/resources/images/BN-LW010_OVERST_D_20151225160015.jpg
- NONE/- image/jpeg


Regards,
Saravanan N


Re: [squid-users] Refresh pattern issue in squid 3.1.20

2015-12-27 Thread SaRaVanAn
Thanks for the prompt response.

I want to match all URLs which have a "wsj" pattern (for example *.wsj.com,
*.wsj.net, *.wsj.edu). Does a wildcard make sense in a squid
refresh pattern? Can we have something like this?

 refresh_pattern -i ^http://*\.wsj\.*/ 10 200% 10 \
override-expire reload-into-ims
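
(As an aside: in a refresh_pattern the * is a regex repetition operator, not
a shell glob, so the pattern above does not do what the wildcard suggests. A
quick way to sanity-check a pattern outside squid, assuming GNU grep:

echo 'http://si.wsj.net/x.jpg' | grep -E '^http://*\.wsj\.*/'
# no output: after "http:", "/*" consumes the slashes and the literal "." fails on "s"
echo 'http://si.wsj.net/x.jpg' | grep -E '^http://[a-zA-Z0-9.-]+\.wsj\.(net|com|edu)/'
# prints the URL: this form matches.)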


- Saravanan N

On Sun, Dec 27, 2015 at 7:15 PM, Amos Jeffries <squ...@treenet.co.nz> wrote:

> On 28/12/2015 1:30 p.m., SaRaVanAn wrote:
> > Hi,
> > We are using squid 3.1.20 in our box. We are facing issues on configuring
> > and validating the refresh patterns. It looks like squid is not honoring
> > the refresh patterns properly.
> >
> >
> > *configuration*
> > *refresh_pattern -i ^http://.wsj./.* 10 200% 10 override-expire
> > override-lastmod reload-into-ims ignore-reload*
> > refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 480 100% 480 override-expire
> > override-lastmod reload-into-ims
> > refresh_pattern -i \.(htm|html|js|css)$ 480 100% 480 override-expire
> > override-lastmod reload-into-ims
> >
> > refresh_pattern ^ftp:   144020% 10080
> > refresh_pattern ^gopher:14400%  1440
> > refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
> > refresh_pattern .   0   20% 4320
> >
> >
> > As per the above refresh pattern squid should refresh the cache every ten
> > minutes for "^http://.wsj./.*". But I am always getting either
> > TCP_HIT/TCP_MEM_HIT even after hours. Why is it so? Please find the
> > logs below
>
> Because none of the log entries match the regex pattern "^http://
> .wsj./.*".
>
> PS. the trailing ".*" is useless, and the other uses of '.' only match
> one single character.
>
>
> Try this for more correct behaviour:
>   refresh_pattern -i ^http://[a-zA-Z]+\.wsj\.net/ 10 200% 10 \
> override-expire reload-into-ims
>
> Amos


[squid-users] HTTP performance hit with Squid

2015-10-22 Thread SaRaVanAn
Hi,
We have been using the squid 3.1.20 that comes with Debian Wheezy 7. We can see
a performance hit in HTTP traffic when we use Squid.

For each HTTP GET request coming from a client to the proxy server, Squid takes
nearly 2 seconds to issue its own HTTP GET to establish a connection
with the server.

There is always a ~2 second delay between the request coming into our system
and going out of Squid. If a page has a lot of embedded URLs, it takes
much longer with squid in place; if I disable squid, the page
loads very fast in the client browser.

What could be the reason? Do I need to tweak any configuration for this?
The first page request always loads slowly with Squid.


*Configuration*
http_port 3128
http_port 3129 tproxy
http_port 80 accel defaultsite=example.com
acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
acl denied_status_404 http_status 404
deny_info  http://example.com/ denied_status_404
http_reply_access deny denied_status_404
acl denied_status_503 http_status 503
deny_info http://example.com denied_status_503
http_reply_access deny denied_status_503
acl SSL_ports port 443
acl Safe_ports port 80
acl Safe_ports port 21
acl Safe_ports port 443
acl Safe_ports port 70
acl Safe_ports port 210
acl Safe_ports port 1025-65535
acl Safe_ports port 280
acl Safe_ports port 488
acl Safe_ports port 591
acl Safe_ports port 777
acl CONNECT method CONNECT
acl PURGE method PURGE
http_access allow PURGE localhost
http_access deny PURGE
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT all
http_access allow all
icp_access allow all
tcp_outgoing_address x.y.z.5
acl VIDEO  url_regex ^http://example\.examplevideo\.com
cache allow VIDEO
cache_mem 100 mb
maximum_object_size_in_memory 10 kb
memory_replacement_policy heap LFUDA
cache_replacement_policy heap LFUDA
cache_dir aufs //var/logs/cache 6144 16 256
store_dir_select_algorithm round-robin
maximum_object_size  51200 kb
cache_swap_low 70
cache_swap_high 80
access_log //var/logs/access.log squid
cache_store_log none
logfile_rotate 1
mime_table //var/opt/abs/config/acpu/mime.conf
pid_filename /var/run/squid3.pid
 strip_query_terms off
cache_log //var/logs/cache.log
coredump_dir //var/cache
acl apache rep_header server ^apache
refresh_pattern -i ^http://.wsj./.* 10 200% 10 override-expire
override-lastmod reload-into-ims ignore-reload
refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 480 100% 480 override-expire
override-lastmod reload-into-ims
refresh_pattern -i \.(htm|html|js|css)$ 480 100% 480 override-expire
override-lastmod reload-into-ims
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320
quick_abort_min 0 kb
quick_abort_max 0 kb
negative_ttl 1 minutes
positive_dns_ttl 1800 seconds
store_objects_per_bucket 100
forward_timeout 2 minutes
shutdown_lifetime 2 seconds
visible_hostname x.y.z.3
server_persistent_connections off
dns_nameservers x.y.z.1 x.y.z.2
ipcache_size 8192
fqdncache_size 8192
memory_pools off

Regards,

Saravanan N


Re: [squid-users] HTTP performance hit with Squid

2015-10-22 Thread SaRaVanAn
I am using Squid version 3.1.20 running on an Intel i7 processor with 16GB
RAM. Even with a single client connected I am able to reproduce this
problem.

2015/10/22 20:34:23.146| ipcache_nbgethostbyname: Name 'mail.com'.
<<<<<<<<<<<<<<<<<<<<<<<<<<<< DNS start time
2015/10/22 20:34:23.146| ipcache_nbgethostbyname: MISS for 'mail.com'
2015/10/22 20:34:23.146| cbdataLock: 0x7f9c3f4ca628=2
2015/10/22 20:34:23.146| idnsALookup: buf is 26 bytes for mail.com, id =
0x7a4f
2015/10/22 20:34:23.146| cbdataLock: 0x7f9c3f46df28=1
2015/10/22 20:34:23.146| comm_udp_sendto: Attempt to send UDP packet to
8.8.8.8:53 using FD 8 using Port 46787
2015/10/22 20:34:23.146| event.cc(343) schedule: schedule: Adding
'idnsCheckQueue', in 1.00 seconds
2015/10/22 20:34:23.146| StoreEntry::unlock: key
'0F71D6DA8407C509D35DA6ADB5BD52BD' count=2

2015/10/22 20:34:24.114| idnsRead: FD 8: received 84 bytes from 8.8.8.8:53
2015/10/22 20:34:24.114| idnsGrokReply: ID 0x7a4f, 0 answers
2015/10/22 20:34:24.114| idnsGrokReply: mail.com has no AAAA records.
Looking up A record instead.
2015/10/22 20:34:24.114| comm_udp_sendto: Attempt to send UDP packet to
8.8.8.8:53 using FD 8 using Port 46787
2015/10/22 20:34:24.114| comm_udp_recvfrom: FD 8 from 8.8.8.8:53
2015/10/22 20:34:25.064| idnsRead: FD 8: received 226 bytes from 8.8.8.8:53
2015/10/22 20:34:25.064| idnsGrokReply: ID 0xe9c1, 1 answers
2015/10/22 20:34:25.065| dns_internal.cc(1152) idnsGrokReply: Sending 1 DNS
results to caller.  <<<<<<<<<<<<<<<<<<<<<<<<<<<  DNS end time

It looks like almost 2 seconds are spent resolving DNS for a URL; I guess
that is the reason. Also, squid tries IPv6 (AAAA) first even though I
configured the dns_v4_first option.
This looks bad: if a page has many embedded URLs, a ~2 second delay
is added to the total page load time for each embedded page request.
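
A quick way to check whether the resolver itself is the bottleneck is to time
the two lookups squid performs (a sketch using dig; the 8.8.8.8 server is
taken from the log above):

dig @8.8.8.8 mail.com AAAA | grep 'Query time'
dig @8.8.8.8 mail.com A | grep 'Query time'

In the log, the AAAA round trip alone takes ~1 second (23.146 -> 24.114)
before the A lookup even starts, which matches the ~2 second total. Note that
dns_v4_first only changes which answers squid tries to connect to first; it
does not stop the AAAA query from being made.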

Regards,
Saravanan N

On Thu, Oct 22, 2015 at 3:54 PM, Eliezer Croitoru <elie...@ngtech.co.il>
wrote:

> What version of squid are you using now?
> Squid 3.1.20 is very old and it is recommended to use newer versions.
> If you are having specific troubles I think you figure out the issues
> pretty fast.
> What hardware are you using for you squid? is it a VM? RAM? CPU?Disk?
> How many clients? Have you used the squid cache manager interface?
>
> My first suggestion it to try and use squid 3.4.14 just to make sure you
> are on something more current then 3.1.20.
>
> Eliezer
>
> On 22/10/2015 21:47, SaRaVanAn wrote:
>
>> Hi ,
>> we have been using squid 3.1.20 comes with debian wheezy 7. We could see
>> there is a peformance hit in http traffic when we use Squid.
>>
>> For each HTTP GET request coming from client to proxy server, Squid takes
>> nearly 2 seconds to generate HTTP GET in order to establish a connection
>> with server.
>>
>> There is always a ~2 second delay between the request coming to our system
>> and going out of Squid. Suppose if a page has lot of embedded URL's it's
>> taking more time with squid in place.Suppose If I disable squid the page
>> loads very fast in client browser.
>>
>> What could be the reason? Do I need to tweak any configuration for this?
>> The first page request always loads slow with Squid.
>>
>>
>> *Configuration*http_port 3128
>>
>> http_port 3129 tproxy
>> http_port 80 accel defaultsite=example.com
>> acl manager proto cache_object
>> acl localhost src 127.0.0.1/32 ::1
>> acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
>> acl QUERY urlpath_regex cgi-bin \?
>> cache deny QUERY
>> acl denied_status_404 http_status 404
>> deny_info  http://example.com/ denied_status_404
>> http_reply_access deny denied_status_404
>> acl denied_status_503 http_status 503
>> deny_info http://example.com denied_status_503
>> http_reply_access deny denied_status_503
>> acl SSL_ports port 443
>> acl Safe_ports port 80
>> acl Safe_ports port 21
>> acl Safe_ports port 443
>> acl Safe_ports port 70
>> acl Safe_ports port 210
>> acl Safe_ports port 1025-65535
>> acl Safe_ports port 280
>> acl Safe_ports port 488
>> acl Safe_ports port 591
>> acl Safe_ports port 777
>> acl CONNECT method CONNECT
>> acl PURGE method PURGE
>> http_access allow PURGE localhost
>> http_access deny PURGE
>> http_access allow manager localhost
>> http_access deny manager
>> http_access deny !Safe_ports
>> http_access deny CONNECT all
>> http_access allow all
>> icp_access allow all
>> tcp_outgoing_address x.y.z.5
>> acl VIDEO  url_regex ^http://example\.examplevideo\.com
>> cach

Re: [squid-users] HTTP performance hit with Squid

2015-10-22 Thread SaRaVanAn
I tried disabling the internal DNS in squid. I am still seeing the same
problem.
What else can be looked at? It really makes the user experience bad when a
URL is visited for the first time.



Regards,
Saravanan N

On Thu, Oct 22, 2015 at 7:34 PM, SaRaVanAn <saravanan.nagaraja...@gmail.com>
wrote:

> I am using Squid version 3.1.20 running on Intel I7 processor with 16GB
> RAM. Even on connecting a single client I could able to reproduce this
> problem.
>
> 2015/10/22 20:34:23.146| ipcache_nbgethostbyname: Name 'mail.com'.
> <<<<<<<<<<<<<<<<<<<<<<<<<<<< DNS start time
> 2015/10/22 20:34:23.146| ipcache_nbgethostbyname: MISS for 'mail.com'
> 2015/10/22 20:34:23.146| cbdataLock: 0x7f9c3f4ca628=2
> 2015/10/22 20:34:23.146| idnsALookup: buf is 26 bytes for mail.com, id =
> 0x7a4f
> 2015/10/22 20:34:23.146| cbdataLock: 0x7f9c3f46df28=1
> 2015/10/22 20:34:23.146| comm_udp_sendto: Attempt to send UDP packet to
> 8.8.8.8:53 using FD 8 using Port 46787
> 2015/10/22 20:34:23.146| event.cc(343) schedule: schedule: Adding
> 'idnsCheckQueue', in 1.00 seconds
> 2015/10/22 20:34:23.146| StoreEntry::unlock: key
> '0F71D6DA8407C509D35DA6ADB5BD52BD' count=2
>
> 2015/10/22 20:34:24.114| idnsRead: FD 8: received 84 bytes from 8.8.8.8:53
> 2015/10/22 20:34:24.114| idnsGrokReply: ID 0x7a4f, 0 answers
> 2015/10/22 20:34:24.114| idnsGrokReply: mail.com has no AAAA records.
> Looking up A record instead.
> 2015/10/22 20:34:24.114| comm_udp_sendto: Attempt to send UDP packet to
> 8.8.8.8:53 using FD 8 using Port 46787
> 2015/10/22 20:34:24.114| comm_udp_recvfrom: FD 8 from 8.8.8.8:53
> 2015/10/22 20:34:25.064| idnsRead: FD 8: received 226 bytes from
> 8.8.8.8:53
> 2015/10/22 20:34:25.064| idnsGrokReply: ID 0xe9c1, 1 answers
> 2015/10/22 20:34:25.065| dns_internal.cc(1152) idnsGrokReply: Sending 1
> DNS results to caller.  <<<<<<<<<<<<<<<<<<<<<<<<<<<  DNS end time
>
> It looks like almost 2 seconds spent in resolving  DNS for an URL. I guess
> it could be the reason. Also it tries for IPv6 first even i configured
> dns_v4_first option.
> It looks bad. Suppose if an URL has many embedded pages a delay of 2
> second is added to total page load time for each embedded page request.
>
> Regards,
> Saravanan N
>
> On Thu, Oct 22, 2015 at 3:54 PM, Eliezer Croitoru <elie...@ngtech.co.il>
> wrote:
>
>> What version of squid are you using now?
>> Squid 3.1.20 is very old and it is recommended to use newer versions.
>> If you are having specific troubles I think you figure out the issues
>> pretty fast.
>> What hardware are you using for you squid? is it a VM? RAM? CPU?Disk?
>> How many clients? Have you used the squid cache manager interface?
>>
>> My first suggestion it to try and use squid 3.4.14 just to make sure you
>> are on something more current then 3.1.20.
>>
>> Eliezer
>>
>> On 22/10/2015 21:47, SaRaVanAn wrote:
>>
>>> Hi ,
>>> we have been using squid 3.1.20 comes with debian wheezy 7. We could see
>>> there is a peformance hit in http traffic when we use Squid.
>>>
>>> For each HTTP GET request coming from client to proxy server, Squid takes
>>> nearly 2 seconds to generate HTTP GET in order to establish a connection
>>> with server.
>>>
>>> There is always a ~2 second delay between the request coming to our
>>> system
>>> and going out of Squid. Suppose if a page has lot of embedded URL's it's
>>> taking more time with squid in place.Suppose If I disable squid the page
>>> loads very fast in client browser.
>>>
>>> What could be the reason? Do I need to tweak any configuration for this?
>>> The first page request always loads slow with Squid.
>>>
>>>
>>> *Configuration*http_port 3128
>>>
>>> http_port 3129 tproxy
>>> http_port 80 accel defaultsite=example.com
>>> acl manager proto cache_object
>>> acl localhost src 127.0.0.1/32 ::1
>>> acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
>>> acl QUERY urlpath_regex cgi-bin \?
>>> cache deny QUERY
>>> acl denied_status_404 http_status 404
>>> deny_info  http://example.com/ denied_status_404
>>> http_reply_access deny denied_status_404
>>> acl denied_status_503 http_status 503
>>> deny_info http://example.com denied_status_503
>>> http_reply_access deny denied_status_503
>>> acl SSL_ports port 443
>>> acl Safe_ports port 80
>>> acl Safe_ports port 21
>>> acl Safe_ports port 443
>

Re: [squid-users] ERROR: NAT/TPROXY lookup failed to locate original IPs

2015-10-13 Thread SaRaVanAn
Hi Amos,
I have tested squid 3.5.10 on a Linux 3.16 kernel compiled for Debian Wheezy,
but I am still seeing the same kind of errors.
What could be the issue? Is there anything else we need to change?

*Linux version *
uname -r
3.16.7-ckt11-ram.custom-1.4


*Squid version*
/usr/sbin/squid -v
Squid Cache: Version 3.5.10
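
One thing worth double-checking is that the rebuilt kernel really has the
TPROXY pieces enabled (a sketch; the config file path varies by distro):

grep -iE 'TPROXY|XT_MATCH_SOCKET' /boot/config-$(uname -r)
lsmod | grep -iE 'tproxy|xt_socket'

CONFIG_NETFILTER_XT_TARGET_TPROXY and CONFIG_NETFILTER_XT_MATCH_SOCKET should
be =y or =m, and the modules loaded, before the iptables TPROXY rules can work.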

Regards,
Saravanan N



On Mon, Oct 12, 2015 at 4:25 AM, Amos Jeffries <squ...@treenet.co.nz> wrote:

> On 10/10/2015 12:48 p.m., SaRaVanAn wrote:
> > Hi All,
> > I have compiled squid version 3.5.10 in  debian wheezy 7.1. With the
> > updated version squid+tproxy4 is not working in debian. I am getting the
> > below error if I try to browse any webpage. Also the connection gets
> reset.
> >
>
> Wheezy kernel and system headers do not contain TPROXY support.
>
> I suggest you upgrade to one of the newer Debian releases. Or at least
> use the backports package. Those should contain all that you need to run
> Squid on the outdated Debian system.
>
> Amos
>


[squid-users] ERROR: NAT/TPROXY lookup failed to locate original IPs

2015-10-10 Thread SaRaVanAn
Hi All,
I have compiled squid version 3.5.10 on Debian Wheezy 7.1. With the
updated version, squid+tproxy4 is not working. I get the below error
if I try to browse any webpage, and the connection gets reset.






2015/10/09 18:33:24 kid1| Configuring Parent gogo.mediaroom.com/80/0
2015/10/09 18:33:38 kid1| ERROR: NAT/TPROXY lookup failed to locate original IPs on local=163.53.78.58:80 remote=172.25.141.180:29507 FD 11 flags=17
2015/10/09 18:33:38 kid1| ERROR: NAT/TPROXY lookup failed to locate original IPs on local=163.53.78.58:80 remote=172.25.141.180:29512 FD 11 flags=17
2015/10/09 18:33:39 kid1| ERROR: NAT/TPROXY lookup failed to locate original IPs on local=46.137.165.131:80 remote=172.25.141.180:29513 FD 11 flags=17
2015/10/09 18:33:39 kid1| ERROR: NAT/TPROXY lookup failed to locate original IPs on local=46.137.165.131:80 remote=172.25.141.180:29514 FD 11 flags=17

I have compiled using the below flags


./configure --prefix=/usr \
--libexecdir=/usr/lib/squid \
--srcdir=. \
--datadir=/usr/share/squid \
--sysconfdir=/etc/squid \
--with-default-user=proxy \
--with-pidfile=/var/run/squid.pid \
--enable-linux-netfilter \
--enable-removal-policies="heap,lru"

Please help me with this. The Squid 3.1 that comes with Debian 7.1 works fine,
but from the moment I moved to the latest version of squid things have been messed up.

Regards,
Saravanan N


[squid-users] Custom PAYLOAD for 404 webserver response

2015-10-05 Thread SaRaVanAn
Hi All,
With the help of Squid I want to return a custom payload for 404 responses
returned from the web server. I have configured the below ACL to achieve this:

acl denied_status_404 http_status 404
deny_info  http://errorpage.com denied_status_404

With the above configuration squid sends a 302 redirect to the client
browser, but I want squid to retain the same HTTP status code (404) when it
responds.
Basically I want to modify only the payload of the webserver response while
keeping the 404 status code.
I know this can be achieved in squid version 3.2, but our server is running
squid version 3.1.20. Is there any way we can achieve this in squid 3.1?

Does the below configuration work?

acl denied_status_404 http_status 404 rep_mime_type -i ^text/html
deny_info  http://errorpage.com denied_status_404

Right now squid is rewriting all 404 responses irrespective of
content type (HTML/JavaScript). I don't want squid to rewrite the 404
responses received from API payloads.
Does squid distinguish between a webpage and an API payload?
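
For reference, the status-preserving form discussed later in this archive
(see the deny_info documentation) uses a local error template rather than a
URL; a URL target always produces a redirect. A sketch, where ERR_404_CUSTOM
is a hypothetical template file in squid's errors directory:

acl denied_status_404 http_status 404
deny_info 404:ERR_404_CUSTOM denied_status_404
http_reply_access deny denied_status_404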

Regards,
Saravanan N


Re: [squid-users] Squid not responding during file upload.

2015-03-20 Thread Saravanan Coimbatore


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Thursday, March 19, 2015 11:13 PM
To: Saravanan Coimbatore; squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid not responding during file upload.

On 20/03/2015 6:26 a.m., Saravanan Coimbatore wrote:
 
 From: Amos Jeffries
 
 On 19/03/2015 8:11 p.m., Saravanan Coimbatore wrote:
 Hello all,

 I am using Squid 3.4 to inspect content that heads out to the cloud from 
 enterprise. I have two c-icap filter that does the content inspection. 

 Observation: 
 - Upload 3000 1M files to cloud passes through successfully. 
 - Upload 300 40M files to cloud results in multiple failures. Some of 
 errors: 400 Bad Request, Request Timed out.. 

 Tcpdump of the 40MB file upload tests indicate the following:
 - Boto client used to upload sends packet to squid proxy. 
 
  Squid on receiving requests sends them to the ICAP REQMOD service,
and waits for its response,
  then sends the ICAP REQMOD result to the origin server,
and waits for its response,
  then sends that to the ICAP RESPMOD service,
and waits for its response,
  then sends that to the client.
 
 So...
  What is the ICAP service and the origin server doing?
 
 SC The ICAP service inspects the data that passes through it, and does 
 selective filtering based on user policies. There are two ICAP services that 
 handle two different service providers. Would having two ICAP services cause 
 any delay? Does Squid send data to an ICAP service if the service has 
 returned 204 in the preview handler?
 

Good to know. Though I meant more along the lines of what data it was sent 
and what Squid got back from it etc., at each step of the above theoretical 
sequence of operations. If they had all worked properly there would be no hang 
problem.


 The Origin server is S3 or Box. We did tests without Squid in between, and 
 the success ratio is high. We are trying to isolate this on a component 
 basis, but the tcp dump shows that the squid did not respond at the TCP 
 level. We were wondering if this is because Squid is busy. 
 
 
 - Proxy does not acknowledge. 
 
 What type of acknowledge are you expecting here? HTTP or TCP level?
 SC TCP level. 
 

Aha. That only happens if the receive buffer in Squid is full, or Squid 
believes the request is fully received and is now waiting for the response data 
for it to deliver.

SC Is there a debug message that will be logged when Squid receive buffer is 
full?
SC What config parameter that we can use to increase Squid receive buffer?
Amos
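
For what it's worth, two directives that influence this behaviour (a sketch
based on squid.conf.documented; the values are illustrative and untested):

# how much of a request body Squid will buffer ahead of the server/ICAP (default 512 KB)
client_request_buffer_max_size 10 MB
# let the ICAP service see a preview and reply 204 early instead of receiving whole 40M bodies
icap_preview_enable on
icap_preview_size 1024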


[squid-users] Squid not responding during file upload.

2015-03-19 Thread Saravanan Coimbatore
Hello all, 

I am using Squid 3.4 to inspect content that heads out to the cloud from the 
enterprise. I have two c-icap filters that do the content inspection. 

Observation: 
- Uploading 3000 1M files to the cloud passes through successfully. 
- Uploading 300 40M files to the cloud results in multiple failures. Some of the 
errors: 400 Bad Request, Request Timed Out. 

A tcpdump of the 40MB file upload tests indicates the following:
- The boto client used to upload sends a packet to the squid proxy. 
- The proxy does not acknowledge. 
- The client re-sends the data at least 6 times; Squid does not respond. 
- After 20-25 seconds of this (during which Squid did not send any data to the cloud), 
the cloud storage vendor returns a Bad Request response. 

Uploading 300 files seems to be a load that should be manageable by Squid. Can 
anyone guide me on how to optimize Squid for the above scenario? Are there any 
performance parameters I can tweak so Squid handles this correctly?

Thanks, 
Saravanan
 


Re: [squid-users] TCP_DENIED_REPLY 302 for error_map

2014-07-07 Thread SaRaVanAn
Hi Amos,
   Thanks for your suggestion.

Configuration
---

Squid version: 3.1.20

acl denied_status_404 http_status 404
deny_info 404:http://example.com/ denied_status_404
http_reply_access deny denied_status_404

1) If I try the above configuration, the browser is not redirecting to the
example.com webpage. Instead I am getting "page cannot be displayed".

2) Also, access.log is still reporting a 302 status code instead of 404.

263 172.19.131.179 TCP_DENIED_REPLY/302 365 GET
http://www.google.com/index1.html. 

Am I missing anything?

On Thu, Jul 3, 2014 at 7:48 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 2014-07-03 06:10, SaRaVanAn wrote:

 Hi All,
   Recently I have migrated from squid 2.7 to squid 3.2, which forces
 me to replace an existing error_map configuration with below access
 list to achieve the same

 acl denied_status_404 http_status 404
 deny_info ERR_404.html denied_status_404
 http_reply_access deny denied_status_404

 but if i use this configuration access.log is reporting the 404 http
 status as TCP_DENIED_REPLY 302. I just want to capture the HTTP 404
 error codes for statistics. Because i could able to capture
 the HTTP 404 error codes with old error_map configuration

 Is there any configuration in squid to achieve this?


 Please read the section in http://www.squid-cache.org/Doc/config/deny_info/
 on 4xx and 5xx error codes to see what is going wrong.

 The 302 is generated when you provide a full URL to deny_info. To replace an
 error status reply retain the same status code in the deny_info format:
   deny_info 404:ERR_404.html denied_status_404

 Amos


[squid-users] TCP_DENIED_REPLY 302 for error_map

2014-07-02 Thread SaRaVanAn
Hi All,
  Recently I migrated from squid 2.7 to squid 3.2, which forced
me to replace an existing error_map configuration with the below access
list to achieve the same thing:

acl denied_status_404 http_status 404
deny_info ERR_404.html denied_status_404
http_reply_access deny denied_status_404

But with this configuration access.log reports the 404 HTTP
status as TCP_DENIED_REPLY 302. I just want to capture the HTTP 404
error codes for statistics, since I was able to capture
them with the old error_map configuration.

Is there any configuration in squid to achieve this?


[squid-users] Working of Tproxy4 with squid

2013-12-17 Thread SaRaVanAn
Hi All,
  I have some basic questions about the working of TPROXY4 with Squid.

With tproxy2, the destination port of HTTP packets is changed
to squid's port 3128, and squid handles them appropriately:

TPROXY all  --  eth0 any anywhere anywhere
   TPROXY redirect 0.0.0.0:3128

With tproxy4, I understand HTTP packets are routed to squid via the lo
interface and there is no change in the destination port.

I want to understand how these packets get hooked by squid
even though they are not destined for its port (3129).

How does tproxy4 work with squid?

Also, how is the reverse traffic handled by squid?
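
For what it's worth, here is my reading of the Features/Tproxy4 wiki setup
(the same rules posted elsewhere in this archive) with the mechanism
annotated; the comments are my interpretation, not authoritative:

# packets that already belong to a local (squid) socket are marked and accepted
iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
# new port-80 flows are marked and assigned to the squid socket listening on 3129
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129
# marked packets are routed into the local stack (hence arriving via lo) despite the foreign dst IP
ip rule add fwmark 1 lookup 100
ip route add local default dev lo table 100

The delivery to squid works because the TPROXY target assigns the packet to
squid's listening socket (squid opens it with the IP_TRANSPARENT option), so
the destination port never needs rewriting; reverse traffic from squid uses
the spoofed client address and is pulled back in the same way by the
"-m socket" match.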


Regards,
Saravanan N


Re: [squid-users] CLOSE_WAIT state in Squid leads to bandwidth drop

2013-12-04 Thread SaRaVanAn
acl PURGE method PURGE
http_access allow PURGE localhost
http_access deny PURGE
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT all
http_access allow all
icp_access allow all
tcp_outgoing_address 172.19.134.2
visible_hostname 172.19.134.2
server_persistent_connections off
logfile_rotate 1
error_map http://localhost:1000/abp/squidError.do 404
memory_pools off
store_objects_per_bucket 100
strip_query_terms off
coredump_dir //var/cache
store_dir_select_algorithm round-robin
cache_peer 172.19.134.2 parent 1000 0 no-query no-digest originserver
name=aportal
cache_peer www.abc.com parent 80 0 no-query no-digest originserver name=dotcom
cache_peer guides.abc.com parent 80 0 no-query no-digest originserver
name=travelguide
cache_peer selfcare.abc.com parent 80 0 no-query no-digest
originserver name=selfcare
cache_peer abcd.mediaroom.com parent 80 0 no-query no-digest
originserver name=mediaroom
acl webtrends url_regex ^http://statse\.webtrendslive\.com
acl the_host dstdom_regex xyz\.abc\.com
acl abp_regex url_regex ^http://xyz\.abc\.com/abp
acl gbp_regex url_regex ^http://xyz\.abc\.com/gbp
acl abcdstatic_regex url_regex ^http://xyz\.goginflight\.com/static
acl dotcom_regex url_regex ^www\.abc\.com
acl dotcomstatic_regex url_regex ^www\.abc\.com/static
acl travelguide_regex url_regex ^http://guides\.abc\.com
acl selfcare_regex url_regex ^http://selfcare\.abc\.com
acl mediaroom_regex url_regex ^http://abcd\.mediaroom\.com
never_direct allow abp_regex
cache_peer_access aportal allow abp_regex
cache_peer_access dotcom allow dotcom_regex
cache_peer_access dotcom allow dotcomstatic_regex
cache_peer_access travelguide allow travelguide_regex
cache_peer_access selfcare allow selfcare_regex
cache_peer_access mediaroom allow mediaroom_regex
cache deny webtrends

Do I need to tune squid.conf or TCP parameters in order to address this issue?
Please share your suggestions on this.
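
A few knobs that affect how long squid keeps half-dead server connections
around (a sketch against the 2.6 documentation; values are illustrative, not
a tested fix):

half_closed_clients off     # drop connections the client has half-closed instead of waiting
quick_abort_min 0 KB        # stop fetching from the server as soon as the client goes away
quick_abort_max 0 KB
read_timeout 5 minutes      # give up sooner on stalled server reads (default 15 minutes)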

Regards,
Saravanan N

On Tue, Nov 26, 2013 at 5:54 PM, SaRaVanAn
saravanan.nagaraja...@gmail.com wrote:
 On Tue, Nov 26, 2013 at 5:16 PM, Antony Stone
 antony.st...@squid.open.source.it wrote:
 On Tuesday 26 November 2013 at 11:37, SaRaVanAn wrote:

 Hi All,
   I am doing a small test for bandwidth measurement of  my test setup
 while squid is running. I am running a script to pump the traffic from
 client browser to Web-server via Squid box.

 Er, do you really mean you are sending data from the browser to the server?

 The script creates around 50 user sessions and tries to do wget of randomly
 selected dynamic URL's.

 That sounds more standard - wget will fetch data from the server to the
 browser.
=
   The script randomly picks the URL from the list of URL's
 defined in a file and tries to fetch that URL.


 What do you mean by dynamic URLs?  Where / how is the content actually 
 being
 generated?

 ==
Its a  standard list of URL's with question mark in the
 end to avoid  Squid caching.
 For example :  www.espncricinfo.com?

 After some time,

 Please define.

 ==
 After 15-20 minutes from the time of execution of script.

 I'm observing a drop in bandwidth of the link,

 Please define - what network setup are you using - what bandwidth are you
 getting at the start. what level does it drop to, does it return

[squid-users] CLOSE_WAIT state in Squid leads to bandwidth drop

2013-11-26 Thread SaRaVanAn
Hi All,
  I am doing a small test to measure the bandwidth of my test setup
while squid is running. I am running a script to pump traffic from a
client browser to a web server via the Squid box. The script creates
around 50 user sessions and does a wget of randomly selected
dynamic URLs.
After some time, I observe a drop in the bandwidth of the link
connecting to the webserver, even though there are no HITs in the squid cache.
I analyzed the netstat output during the problem scenario: the Recv-Q
piles up on squid connections in the CLOSE_WAIT TCP state, and
squid stays in CLOSE_WAIT for more than a minute. The number of
squid sessions to the webserver drops from 70 to 5, but
the TCP sessions from client to squid remain at around 80.

Without Squid, there is no drop in bandwidth under the same load.

Why does bandwidth drop when squid is running? Please provide
your suggestions on this.

Logs

Squid version : 2.6.STABLE14

2013-11-25 10:17:53 Collecting netstat  statistics...
tcp   248352  0 172.19.134.2:51439  194.50.177.163:80
 CLOSE_WAIT  5477/(squid)
tcp77229  0 172.19.134.2:41998  64.15.157.134:80
 CLOSE_WAIT  5477/(squid)
tcp15853  0 172.19.134.2:55344  64.136.20.39:80
 CLOSE_WAIT  5477/(squid)
tcp30022  0 172.19.134.2:47485  50.56.161.66:80
 CLOSE_WAIT  5477/(squid)
tcp30202  0 172.19.134.2:59213  198.90.22.194:80
 CLOSE_WAIT  5477/(squid)
tcp 9787  0 172.19.134.2:52761  184.26.136.73:80
 CLOSE_WAIT  5477/(squid)
tcp   106892  0 172.19.134.2:55109  184.26.136.115:80
 CLOSE_WAIT  5477/(squid)


2013-11-25 10:18:42 Collecting netstat  statistics...

tcp   248352  0 172.19.134.2:51439  194.50.177.163:80
 CLOSE_WAIT  5477/(squid)

tcp95558  0 172.19.134.2:42559  67.192.29.225:80
 CLOSE_WAIT  5477/(squid)

tcp77229  0 172.19.134.2:41998  64.15.157.134:80
 CLOSE_WAIT  5477/(squid)

tcp15853  0 172.19.134.2:55344  64.136.20.39:80
 CLOSE_WAIT  5477/(squid)

tcp30022  0 172.19.134.2:47485  50.56.161.66:80
 CLOSE_WAIT  5477/(squid)

tcp30202  0 172.19.134.2:59213  198.90.22.194:80
 CLOSE_WAIT  5477/(squid)

tcp 9787  0 172.19.134.2:52761  184.26.136.73:80
 CLOSE_WAIT  5477/(squid)

tcp   106892  0 172.19.134.2:55109  184.26.136.115:80
 CLOSE_WAIT  5477/(squid)


Squid info :

---

Connection information for squid:
Number of clients accessing cache:  3
Number of HTTP requests received:   257549
Number of ICP messages received:0
Number of ICP messages sent:0
Number of queued ICP replies:   0
Request failure ratio:   0.00
Average HTTP requests per minute since start:   1443.2
Average ICP messages per minute since start:0.0
Select loop called: 4924570 times, 2.174 ms avg
Cache information for squid:
Request Hit Ratios: 5min: 0.0%, 60min: 0.0%
Byte Hit Ratios:5min: -0.0%, 60min: 3.2%
Request Memory Hit Ratios:  5min: 0.0%, 60min: 0.0%
Request Disk Hit Ratios:5min: 0.0%, 60min: 0.0%
Storage Swap size:  107524 KB
Storage Mem size:   8408 KB
Mean Object Size:   20.69 KB
Requests given to unlinkd:  0


Regards,
Saravanan N


Re: [squid-users] CLOSE_WAIT state in Squid leads to bandwidth drop

2013-11-26 Thread SaRaVanAn
On Tue, Nov 26, 2013 at 5:16 PM, Antony Stone
antony.st...@squid.open.source.it wrote:
 On Tuesday 26 November 2013 at 11:37, SaRaVanAn wrote:

 Hi All,
   I am doing a small test for bandwidth measurement of  my test setup
 while squid is running. I am running a script to pump the traffic from
 client browser to Web-server via Squid box.

 Er, do you really mean you are sending data from the browser to the server?

 The script creates around 50 user sessions and tries to do wget of randomly
 selected dynamic URL's.

 That sounds more standard - wget will fetch data from the server to the
 browser.
   =
  The script randomly picks the URL from the list of URL's
defined in a file and tries to fetch that URL.


 What do you mean by dynamic URLs?  Where / how is the content actually being
 generated?

==
   It's a standard list of URLs with a question mark at the
end to avoid squid caching.
For example:  www.espncricinfo.com?

 After some time,

 Please define.

==
After 15-20 minutes from the time of execution of script.

 I'm observing a drop in bandwidth of the link,

 Please define - what network setup are you using - what bandwidth are you
 getting at the start. what level does it drop to, does it return to the
 previous level?


  eth0   eth1
Windows Laptop --- Linux machine (Squid running) --- Internet

We measure the outgoing traffic on the link (eth1) that leads
to the internet in order to calculate the bandwidth usage. The eth1 link
is capable of around 10 Mbps; we are able to utilize a maximum
of 7-8 Mbps when squid is running. After 15 minutes, there is a sudden
drop in bandwidth from 8 Mbps to 6.5 Mbps, and it comes back to 8 Mbps
after 2-3 minutes.


 Squid version : 2.6.STABLE14

 That is rather old (the last release of the 2.6 branch was STABLE23 September
 2009).  Is there any reason you have not upgraded to a current version?


=
There are some practical difficulties (on our side) in upgrading to a
newer version.

 Regards,


 Antony.

 --
 Behind the counter a boy with a shaven head stared vacantly into space,
 a dozen spikes of microsoft protruding from the socket behind his ear.

  - William Gibson, Neuromancer (1984)

 http://www.Open.Source.ITPlease reply to the list;
 The Open Source IT forum   please don't CC me.


Re: [squid-users] Re: TCP_MISS/Squid-Error: ERR_CONNECT_FAIL

2013-08-20 Thread SaRaVanAn
Hi Amos,
  I changed my configuration file as you suggested.

There is one more clarification from my side.
I am able to see TCP_HIT only when I clear the browser cache manually.
The behavior is the same for all the websites I have tried.

Is this expected behavior?
If not, what needs to be done in order to get TCP_HIT without manually
clearing the browser cache?
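
One way to take the browser's own cache out of the picture is to repeat the
request through the proxy with curl and watch Squid's X-Cache response header
(a sketch; adjust the proxy address and the test URL):

curl -s -o /dev/null -D - -x http://127.0.0.1:3128 http://www.example.com/ | grep -i '^X-Cache'

Run it twice: a cacheable object should show MISS on the first run and HIT on
the second, independent of any browser state.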

Regards,
Saravanan N

On Mon, Aug 19, 2013 at 7:48 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 19/08/2013 11:29 p.m., SaRaVanAn wrote:

 Hi Amos,
 Thanks a lot for your help. There is an issue in web-server
 connectivity which has been solved as you suggested. I could able to
 connect the webserver via squid successfully.

 But there is an issue in caching webpages . I am always getting
 TCP/MISS 200 messages from squid. I could not able to see a single
 TCP_HIT message even I try to access the same webpages from browser
 again and again.

 1376909027.627211 10.1.1.1 TCP_MISS/200 416 GET
 http://b.scorecardresearch.com/p? - DIRECT/120.29.145.65 image/gif
 [Host: b.scorecardresearch.com\r\nUser-Agent: Mozilla/5.0 (X11; Linux
 i686; rv:10.0.12) Gecko/20130109 Firefox/10.0.12\r\nAccept:
 image/png,image/*;q=0.8,*/*;q=0.5\r\nAccept-Language:
 en-us,en;q=0.5\r\nAccept-Encoding: gzip, deflate\r\nConnection:
 keep-alive\r\nReferer: http://in.yahoo.com/?p=us\r\nCookie:
 UID=6cdd678-61.213.189.48-1366091370; UIDR=1366091370\r\n] [HTTP/1.1
 200 OK\r\nContent-Length: 43\r\nContent-Type: image/gif\r\nDate: Mon,
 19 Aug 2013 10:27:24 GMT\r\nConnection: keep-alive\r\nPragma:
 no-cache\r\nExpires: Mon, 01 Jan 1990 00:00:00 GMT\r\nCache-Control:
 private, no-cache, no-cache=Set-Cookie, no-store,
 proxy-revalidate\r\n\r]


 This response object has been configured explicitly and rather emphatically
 to prevent caching.
 Expires, no less than 5 ways to force MISS or at least REFRESH behaviour
 from Cache-Control, and even the invalid Pragma header in case something
 obeys it.

 Several of these are way beyond what server frameworks add by default. So it
 is clearly an explicit admin design that this object be a MISS. Perhaps it
 would be a good idea to let it, yes?

 Amos



 Squid.conf
 ---
 acl all src all


 Please run squid -k parse. If your Squid is not at least complaining about
 the above line being redundant then your proxy is seriously outdated.


 acl manager proto cache_object
 acl localhost src 127.0.0.1/32 ::1
 acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
 acl SSL_ports port 443# https
 acl SSL_ports port 563# snews
 acl SSL_ports port 873# rsync
 acl Safe_ports port 80# http
 acl Safe_ports port 21# ftp
 acl Safe_ports port 443# https
 acl Safe_ports port 70# gopher
 acl Safe_ports port 210# wais
 acl Safe_ports port 1025-65535# unregistered ports
 acl Safe_ports port 280# http-mgmt
 acl Safe_ports port 488# gss-http
 acl Safe_ports port 591# filemaker
 acl Safe_ports port 777# multiling http
 acl Safe_ports port 631# cups
 acl Safe_ports port 873# rsync
 acl Safe_ports port 901# SWAT
 acl purge method PURGE
 acl CONNECT method CONNECT
 http_access allow manager localhost
 http_access deny manager
 http_access deny !Safe_ports
 http_reply_access allow all


 allow all is the default for http_reply_access. You can drop the above
 line entirely from your config.

 You are also missing the basic security protection for CONNECT requests:
   http_access deny CONNECT !SSL_ports


 http_port 3128
 http_port 3129 tproxy
 hierarchy_stoplist cgi-bin ?

 You can omit hierarchy_stoplist from your config.


 cache_mem 256 MB
 cache_dir ufs /var/spool/squid3 1000 16 256
 maximum_object_size 20480 KB
 access_log /var/log/squid3/access.log
 cache_log /var/log/squid3/cache.log
 mime_table /usr/share/squid3/mime.conf
 log_mime_hdrs on
 refresh_pattern ^ftp:144020%10080
 refresh_pattern ^gopher:14400%1440


 You are missing the refresh pattern instructing Squid how to safely handle
 dynamic responses without expiry information:
   refresh_pattern -i (/cgi-bin/|\?) 0 0% 0



 refresh_pattern .020%4320
 acl apache rep_header Server ^Apache
 coredump_dir /var/spool/squid3
 acl localnet src 10.1.1.0/24
 http_access allow localhost
 http_access allow localnet
 cache allow all


 allow all is the default for the cache directive. You can omit this line
 entirely from your config file.


 request_header_access Allow allow all
 request_header_access Authorization allow all
 request_header_access WWW-Authenticate allow all
 request_header_access Proxy-Authorization allow all
 request_header_access Proxy-Authenticate allow all
 request_header_access Cache-Control allow all
 request_header_access Content-Encoding allow all
 request_header_access Content-Length allow all
 request_header_access Content-Type allow all
 request_header_access Date allow all
 request_header_access

Re: [squid-users] Re: TCP_MISS/Squid-Error: ERR_CONNECT_FAIL

2013-08-19 Thread SaRaVanAn
ESTABLISHED 3096/(squid)
tcp6   0  0 72.246.188.105:80   10.1.1.1:56477
ESTABLISHED 3096/(squid)
tcp6   0  0 23.15.10.83:80  10.1.1.1:49959
ESTABLISHED 3096/(squid)
tcp6   0  0 67.195.141.200:80   10.1.1.1:51255
ESTABLISHED 3096/(squid)
tcp6   0  0 23.15.10.49:80  10.1.1.1:49727
ESTABLISHED 3096/(squid)
tcp6   0  0 106.10.192.89:8010.1.1.1:47737
ESTABLISHED 3096/(squid)
tcp6   0  0 106.10.192.89:8010.1.1.1:47736
ESTABLISHED 3096/(squid)
tcp6   0  0 216.115.100.103:80  10.1.1.1:35249
ESTABLISHED 3096/(squid)
root@debian:/etc/squid3#

Regards,
Saravanan N


Re: [squid-users] Re: TCP_MISS/Squid-Error: ERR_CONNECT_FAIL

2013-08-17 Thread SaRaVanAn
Hi All,
   In my case, the TCP connection is established between the browser and
internet IPs with tproxy.

root@debian:~# netstat -natp | grep squid
tcp0  0 0.0.0.0:31280.0.0.0:*
LISTEN  31895/(squid)
tcp0  0 0.0.0.0:31290.0.0.0:*
LISTEN  31895/(squid)
tcp0  1 172.30.11.122:57210 172.30.11.124:80
SYN_SENT31895/(squid)
tcp0  0 172.30.11.124:80172.30.11.122:35454
ESTABLISHED 31895/(squid)

There is an issue establishing the TCP connection between squid and the
internet IPs.
Squid is sending its SYN via the loopback interface to the webserver,
but the webserver is replying back to the browser directly, which in
turn triggers a RST of the connection.
I am missing something in routing the packets back to squid.

root@debian:~# ip rule list
0:  from all lookup local
32765:  from all fwmark 0x1 lookup 100
32766:  from all lookup main
32767:  from all lookup default
root@debian:~# ip route show table 100
local default dev lo  scope host
root@debian:~#
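
The rule/route output above matches the wiki page, so one thing worth checking
next is whether the mangle-table TPROXY rules are actually seeing packets (a
sketch):

iptables -t mangle -L -v -n    # the DIVERT/TPROXY packet counters should climb while browsing
netstat -natp | grep 3129      # squid should be listening on the tproxy port

If the counters stay at zero, the traffic is either arriving on a different
interface or bypassing the PREROUTING hook entirely (for example, locally
generated packets when the webserver is on the same machine).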

I followed the below link for configuring the tproxy and squid pieces:
http://wiki.squid-cache.org/Features/Tproxy4

Any help would be greatly appreciated.

Regards,
Saravanan N

On Thu, Aug 15, 2013 at 4:14 PM, SaRaVanAn
saravanan.nagaraja...@gmail.com wrote:
 Hi Amos,
 I have uploaded wire-shark dump captured in web-server in the below
 link for your reference.

 https://rapidshare.com/files/3145946000/packet_capture_Squid.pcap

 (I uploaded, since I faced some problems in Sending here).

 Please use the filter tcp to make the dump more clear since it has
 some unnecessary packets.

 As per my understanding, initial TCP connection has been established
 between client - squid.But there is a problem in establishing TCP
 connection between squid-server.

 I could not able to see the SYN sent by squid in wireshark capture.
 But I could see webserver is sending SYN+ACK in response to that. The
 SYN+ACK sent by webserver was reaching client web machine. Web Client
 machine was sending RST in response to that since it has no idea about
 the port.


 % netstat -n
 --
 Active Internet connections

 Proto Recv-Q Send-Q Local Address   Foreign Address state

 tcp4  0 0   172.30.11.122:35123   172.30.11.124:80   
 ESTABLISHED
 tcp4  0 1   172.30.11.122.22080   172.30.11.124.80   SYN_SENT
 .


 The port of local address in SYN_SENT state keep on changing.
 1) Why is it so?

 I presume, Squid has to reply for SYN+ACK sent by web-server

 2) why its reaching web client machine?

 3) what is the normal working behavior?

 4) Whether squid is not able to reach web-server since its sitting on
 the same machine ?
 Note: I am accessing web-server from client machine directly using IP
 without domain name

 The same setup and configurations are working fine in case of NAT
 redirection rules without tproxy.

 Please help me since I m new to squid. I will give you more details if you 
 want.

 Regards,
 Saravanan N

 On Thu, Aug 15, 2013 at 5:23 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 2013-08-13 05:20, SaRaVanAn wrote:

 Hi All,
I observed there is a difference in tcp state machine in both
 working(without squid) and Not working scenario.(without squid)

 State machine in working scenario (without squid)
 
 client                                Server
   SYN          --------------------->
                <---------------------  SYN + ACK
   ACK          --------------------->
   GET          --------------------->
                <---------------------  ACK
                <---------------------  TCP segment of a reassembled PDU (MTU 1514)
                <---------------------  HTTP/1.1 200 OK (MTU 293)

 then connection terminates


 State machine in Not-working scenario (with squid)


 You say "with squid". But when Squid is in the picture there are *two* TCP
 connections operating asynchronously to transfer the request and response:
 client-squid and squid-server.


 What you describe below appears to be a single TCP connection's operations,
 except that there are things happening on it which are impossible (RST
 followed by successful packet exchanges). TCP level aborts and resets on
 one connection affect the other in various ways defined by HTTP semantics
 and recovery (not TCP synchronous).

 So what we need is labeling the packets as per which TCP connection it
 occured on and how the packets on each are sequenced/interleaved across
 both.

 For example:


 

 client                                Server
   SYN          --------------------->
                <---------------------  SYN + ACK
   ACK          --------------------->
   GET          --------------------->

Re: [squid-users] Re: TCP_MISS/Squid-Error: ERR_CONNECT_FAIL

2013-08-15 Thread SaRaVanAn
Hi Amos,
I have uploaded a wireshark dump captured on the web server at the below
link for your reference.

https://rapidshare.com/files/3145946000/packet_capture_Squid.pcap

(I uploaded it there, since I faced some problems sending it here.)

Please use the filter "tcp" to make the dump clearer, since it has
some unnecessary packets.

As per my understanding, the initial TCP connection is established
between client and squid, but there is a problem establishing the TCP
connection between squid and server.

I could not see the SYN sent by squid in the wireshark capture,
but I could see the webserver sending a SYN+ACK in response to it. The
SYN+ACK sent by the webserver was reaching the client machine, and the
client machine was sending a RST in response, since it knows nothing about
that port.


% netstat -n
--
Active Internet connections

Proto Recv-Q Send-Q Local Address   Foreign Address state

tcp4  0 0   172.30.11.122:35123   172.30.11.124:80   ESTABLISHED
tcp4  0 1   172.30.11.122.22080   172.30.11.124.80   SYN_SENT
.


The port of the local address in the SYN_SENT state keeps changing.
1) Why is that?

I presume squid has to reply to the SYN+ACK sent by the web-server.

2) Why is it reaching the web client machine?

3) What is the normal working behavior?

4) Is squid unable to reach the web-server since it is sitting on
the same machine?
Note: I am accessing the web-server from the client machine directly by IP,
without a domain name.

The same setup and configuration work fine in the case of NAT
redirection rules without tproxy.

Please help me, since I am new to squid. I will give you more details if you want.

Regards,
Saravanan N

On Thu, Aug 15, 2013 at 5:23 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 2013-08-13 05:20, SaRaVanAn wrote:

 Hi All,
I observed there is a difference in tcp state machine in both
 working(without squid) and Not working scenario.(without squid)

 State machine in working scenario (without squid)
 
 client                                Server
   SYN          --------------------->
                <---------------------  SYN + ACK
   ACK          --------------------->
   GET          --------------------->
                <---------------------  ACK
                <---------------------  TCP segment of a reassembled PDU (MTU 1514)
                <---------------------  HTTP/1.1 200 OK (MTU 293)

 then connection terminates


 State machine in Not-working scenario (with squid)


 You say "with squid". But when Squid is in the picture there are *two* TCP
 connections operating asynchronously to transfer the request and response:
 client-squid and squid-server.


 What you describe below appears to be a single TCP connection's operations,
 except that there are things happening on it which are impossible (RST
 followed by successful packet exchanges). TCP level aborts and resets on
 one connection affect the other in various ways defined by HTTP semantics
 and recovery (not TCP synchronous).

 So what we need is labeling the packets as per which TCP connection it
 occured on and how the packets on each are sequenced/interleaved across
 both.

 For example:


 

 client                                Server
   SYN          --------------------->
                <---------------------  SYN + ACK
   ACK          --------------------->
   GET          --------------------->
                <---------------------  ACK
                <---------------------  SYN + ACK
   RST          --------------------->


 ... after a RST packet is received Squid runs through the connection
 shutdown code which *does not* involve delivering any more HTTP on *that*
 connection.

 I assume that this is the squid-server connection dying.


                <---------------------  TCP previous segment not captured
   RST          --------------------->
                <---------------------  TCP last segment not captured
   ...
                <---------------------  TCP segment of a reassembled PDU (MTU 1514)
                <---------------------  TCP segment of a reassembled PDU (MTU 1514)
                <---------------------  HTTP/1.0 504 Gateway timeout (MTU 1050)


 .. so this response



 then connection terminates

 In the case of squid running,
 1) Why is the web-server sending SYN+ACK instead of the 'TCP last segment
 not captured' PDU?


 Because Squid opened the second (squid-server) connection with a SYN
 packet that you missed out of the trace?
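
 (A capture filtered to bare SYNs would show whether that packet ever
 reaches the wire; a sketch, with the interface as an assumption:)

   tcpdump -ni any 'tcp dst port 80 and tcp[tcpflags] & (tcp-syn|tcp-ack) == tcp-syn'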



 2) Why is there a delay in sending the 'TCP last segment not captured' PDU?


 Unknown. What does that last segment

[squid-users] Re: TCP_MISS/Squid-Error: ERR_CONNECT_FAIL

2013-08-13 Thread SaRaVanAn
Hi All,
   I observed there is a difference in the TCP state machine between the
working (without squid) and not-working (with squid) scenarios.

State machine in working scenario (without squid)

client                                          Server
         SYN
        ----------------------------------------->
         SYN + ACK
        <-----------------------------------------
         ACK
        ----------------------------------------->
         GET
        ----------------------------------------->
         ACK
        <-----------------------------------------
         TCP segment of a reassembled PDU (MTU 1514)
        <-----------------------------------------
         HTTP/1.1 200 ok (MTU 293)
        <-----------------------------------------

then connection terminates


State machine in Not-working scenario (with squid)

client                                          Server
         SYN
        ----------------------------------------->
         SYN + ACK
        <-----------------------------------------
         ACK
        ----------------------------------------->
         GET
        ----------------------------------------->
         ACK
        <-----------------------------------------
         SYN + ACK
        <-----------------------------------------
         RST
        ----------------------------------------->
         TCP previous segment not captured
        <-----------------------------------------
         RST
        ----------------------------------------->
         TCP last segment not captured
        <-----------------------------------------
         .
         TCP segment of a reassembled PDU (MTU 1514)
        <-----------------------------------------
         TCP segment of a reassembled PDU (MTU 1514)
        <-----------------------------------------
         HTTP/1.0 504 Gateway timeout (MTU 1050)
        <-----------------------------------------

then connection terminates

In the case of squid running,
1) Why is the web-server sending SYN+ACK instead of the 'TCP last segment
not captured' PDU?

2) Why is there a delay in sending the 'TCP last segment not captured' PDU?

Moreover, I could see there is a variation in the HTTP version (1.0 and 1.1).
Please share your views on this.

Regards,
Saravanan N

On Mon, Aug 12, 2013 at 11:47 PM, SaRaVanAn
saravanan.nagaraja...@gmail.com wrote:
 Hi Team,
   I set up an apache web server and squid3 running on the same machine.
 But when I try to access the web-server pages from the client machine, I
 always end up with the ERR_CONNECT_FAIL error. I tried all the
 alternatives and configurations from Google, but none of them helped me
 solve the issue.

 Error

 1376330104.848 179954 172.30.11.122 TCP_MISS/504 3880 GET
 http://172.30.11.124/logs/access.log - DIRECT/172.30.11.124
  text/html [Host: 172.30.11.124\r\nUser-Agent: Mozilla/5.0 (X11; Linux
 i686; rv:10.0.12) Gecko/20130109 Firefox/10.0.
 12\r\nAccept: text/html,application/xhtml+
 xml,application/xml;q=0.9,*/*;q=0.8\r\nAccept-Language:
 en-us,en;q=0.5\r\nA
 ccept-Encoding: gzip, deflate\r\nConnection: keep-alive\r\n] [HTTP/1.0
 504 Gateway Time-out\r\nServer: squid/3.1.20\r
 \nMime-Version: 1.0\r\nDate: Mon, 12 Aug 2013 17:55:04
 GMT\r\nContent-Type: text/html\r\nContent-Length: 3506\r\nX-Sq
 uid-Error: ERR_CONNECT_FAIL 110\r\nVary:
 Accept-Language\r\nContent-Language: en-us\r\n\r]

 Topology
 
 172.30.11.122(client ) -- 172.30.11.124 (webserver and squid3 running)

 Squid version and OS
 
 squid3 -v
 Squid Cache: Version 3.1.20

 Debian wheezy(7.0)

 Iptable rules
 -
 iptables -t mangle -N DIVERT
 iptables -t mangle -A DIVERT -j MARK --set-mark 1
 iptables -t mangle -A DIVERT -j ACCEPT
 iptables  -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
 iptables  -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY
 --tproxy-mark 0x1/0x1 --on-port 3129

 IP rules
 --
  ip -f inet rule add fwmark 1 lookup 100
  ip -f inet route add local default dev eth0 table 100
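
  (To sanity-check that traffic actually traverses the tproxy rules and
  the mark-based routing, the rule counters and routing table can be
  inspected; a quick sketch:)

   iptables -t mangle -L PREROUTING -v -n   # packet counters on the TPROXY rule
   iptables -t mangle -L DIVERT -v -n       # counters on the DIVERT chain
   ip rule show                             # 'from all fwmark 0x1 lookup 100' should be listed
   ip route show table 100                  # 'local default dev eth0'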

 squid.conf
 --
 acl all src all
 acl manager proto cache_object
 acl localhost src 127.0.0.1/32 ::1
 acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
 acl SSL_ports port 443
 acl SSL_ports port 563
 acl SSL_ports port 873
 acl Safe_ports port 80
 acl Safe_ports port 21
 acl Safe_ports port 443
 acl Safe_ports port 70
 acl Safe_ports port 210
 acl Safe_ports port 1025-65535
 acl Safe_ports port 280
 acl Safe_ports port 488
 acl Safe_ports port 591
 acl Safe_ports port 777
 acl Safe_ports port 631
 acl Safe_ports port 873
 acl Safe_ports port 901
 acl purge method PURGE
 acl CONNECT method CONNECT
 http_access allow manager localhost
 http_access deny manager
 http_access deny !Safe_ports
 http_reply_access allow all
 http_port 3128
 http_port 3129 tproxy
 hierarchy_stoplist cgi-bin ?
 cache_mem 256 MB
 cache_dir ufs /var/spool/squid3 1000 16 256
 maximum_object_size 20480 KB
 access_log /var/log/squid3/access.log
 cache_log /var/log/squid3/cache.log
 mime_table /usr/share/squid3/mime.conf
 log_mime_hdrs on
 refresh_pattern ^ftp

[squid-users] TCP_MISS/Squid-Error: ERR_CONNECT_FAIL

2013-08-12 Thread SaRaVanAn
 ecr 6519328,nop,wscale 5], length 0
23:24:11.901913 IP 172.30.11.122.37138 > 172.30.11.124.http: Flags
[R], seq 124904408, win 0, length 0
23:24:19.917797 IP 172.30.11.124.http > 172.30.11.122.37138: Flags
[S.], seq 346617285, ack 124904408, win 14480, options [mss
1460,sackOK,TS val 6521332 ecr 6521332,nop,wscale 5], length 0
23:24:19.917920 IP 172.30.11.122.37138 > 172.30.11.124.http: Flags
[R], seq 124904408, win 0, length 0
23:24:35.965795 IP 172.30.11.124.http > 172.30.11.122.37138: Flags
[S.], seq 597367243, ack 124904408, win 14480, options [mss
1460,sackOK,TS val 6525344 ecr 6525344,nop,wscale 5], length 0
23:24:35.965906 IP 172.30.11.122.37138 > 172.30.11.124.http: Flags
[R], seq 124904408, win 0, length 0
23:25:04.848090 IP 172.30.11.124.http > 172.30.11.122.44872: Flags
[.], seq 622394574:622396022, ack 3117157865, win 486, options
[nop,nop,TS val 6532564 ecr 1130451999], length 1448
23:25:04.848123 IP 172.30.11.124.http > 172.30.11.122.44872: Flags
[.], seq 1448:2896, ack 1, win 486, options [nop,nop,TS val 6532564
ecr 1130451999], length 1448
23:25:04.848143 IP 172.30.11.124.http > 172.30.11.122.44872: Flags
[P.], seq 2896:3880, ack 1, win 486, options [nop,nop,TS val 6532564
ecr 1130451999], length 984
23:25:04.848480 IP 172.30.11.122.44872 > 172.30.11.124.http: Flags
[.], ack 1448, win 274, options [nop,nop,TS val 1130631953 ecr
6532564], length 0
23:25:04.848572 IP 172.30.11.122.44872 > 172.30.11.124.http: Flags
[.], ack 2896, win 319, options [nop,nop,TS val 1130631953 ecr
6532564], length 0
23:25:04.848667 IP 172.30.11.122.44872 > 172.30.11.124.http: Flags
[.], ack 3880, win 364, options [nop,nop,TS val 1130631953 ecr
6532564], length 0
23:26:59.848715 IP 172.30.11.122.44872 > 172.30.11.124.http: Flags
[F.], seq 1, ack 3880, win 364, options [nop,nop,TS val 1130746953 ecr
6532564], length 0
23:26:59.848866 IP 172.30.11.124.http > 172.30.11.122.44872: Flags
[F.], seq 3880, ack 2, win 486, options [nop,nop,TS val 6561314 ecr
1130746953], length 0
23:26:59.849005 IP 172.30.11.122.44872 > 172.30.11.124.http: Flags
[.], ack 3881, win 364, options [nop,nop,TS val 1130746954 ecr
6561314], length 0



Moreover, it takes a long time for the connection-failed error message
to appear in the browser. Without the tproxy rules, the webserver works
like a gem.
I really don't know what is going on or what I did wrong.
Please help me, since I'm new to squid.

Regards,
Saravanan N


Re: [squid-users] x-forwarded-for patch for squid-2.5.Stable11

2005-10-14 Thread saravanan ganapathy
  I am posting this on both dansguardian and squid
  lists so that it can help 
  anyone with the x-forwarded-for patch.
  
  Download squid-2.5.STABLE9.tar.gz and
  follow_xff-2.5.STABLE5.patch on /tmp
  Extract the squid tar file with: tar xvfz
  squid-2.5.STABLE9.tar.gz
  copy follow_xff-2.5.STABLE5.patch to
  /tmp/squid-2.5.STABLE9
  cd to /tmp/squid-2.5.STABLE9 and execute:
  patch -p0 < follow_xff-2.5.STABLE5.patch
  
  you should get the following errors:
  
  FedoraCore2[/tmp/squid-2.5.STABLE9] patch -p0 < follow_xff-2.5.STABLE5.patch
  patching file acconfig.h
  patching file bootstrap.sh
  Hunk #1 succeeded at 66 (offset 7 lines).
  patching file configure.in
  Hunk #1 succeeded at 1128 (offset 28 lines).
  patching file src/acl.c
  Hunk #1 succeeded at 2147 (offset 107 lines).
  patching file src/cf.data.pre
  Hunk #1 succeeded at 2144 (offset 29 lines).
  patching file src/client_side.c
  Hunk #2 succeeded at 185 (offset 2 lines).
  Hunk #4 succeeded at 3308 (offset 58 lines).
  patching file src/delay_pools.c
  patching file src/structs.h
  Hunk #1 FAILED at 594.
  Hunk #2 succeeded at 634 (offset 14 lines).
  Hunk #3 succeeded at 1621 (offset 2 lines).
  Hunk #4 succeeded at 1684 (offset 14 lines).
  Hunk #5 FAILED at 1697.
  2 out of 5 hunks FAILED -- saving rejects to file
  src/structs.h.rej
  
  This means that two hunks (parts) of the patch
  failed to patch src/structs.h 
  at around lines 594 and 1697.  Now look at the
  src/structs.h.rej which 
  should look like this:
  
  ***
  *** 594,599 
  int pipeline_prefetch;
  int request_entities;
  int detect_broken_server_pconns;
} onoff;
acl *aclList;
struct {
  --- 594,604 
  int pipeline_prefetch;
  int request_entities;
  int detect_broken_server_pconns;
  + #if FOLLOW_X_FORWARDED_FOR
  +int acl_uses_indirect_client;
  +int delay_pool_uses_indirect_client;
  +int log_uses_indirect_client;
  + #endif /* FOLLOW_X_FORWARDED_FOR */
} onoff;
acl *aclList;
struct {
  ***
  *** 1681,1686 
char *peer_login; /* Configured peer
  login:password */
time_t lastmod;   /* Used on
 refreshes
  */
const char *vary_headers; /* Used when
 varying
  entities are detected. 
  Chan
  ges how the store key is calculated */
};
  
struct _cachemgr_passwd {
  --- 1697,1707 
char *peer_login; /* Configured peer
  login:password */
time_t lastmod;   /* Used on
 refreshes
  */
const char *vary_headers; /* Used when
 varying
  entities are detected. 
  Chan
  ges how the store key is calculated */
  + #if FOLLOW_X_FORWARDED_FOR
  + /* XXX a list of IP addresses would be a
  better data structure
  +  * than this String */
  + String x_forwarded_for_iterator;
  + #endif /* FOLLOW_X_FORWARDED_FOR */
};
  
struct _cachemgr_passwd {
  
  As you can see, the patch has found some 'issues' on
  line 594, where it was expecting something that it did
  not find. No problem: just open src/structs.h with 'vi',
  go to line 594, and locate the line:
  
  int detect_broken_server_pconns;
  
  which should be somewhere around there.
  now insert the following as described by the .rej
  file (remove the + which 
  means ADD)
  
  #if FOLLOW_X_FORWARDED_FOR
  int acl_uses_indirect_client;
  int delay_pool_uses_indirect_client;
  int log_uses_indirect_client;
  #endif /* FOLLOW_X_FORWARDED_FOR */
  
  so around line 594 you should now have:
  
  int detect_broken_server_pconns;
  #if FOLLOW_X_FORWARDED_FOR
  int acl_uses_indirect_client;
  int delay_pool_uses_indirect_client;
  int log_uses_indirect_client;
  #endif /* FOLLOW_X_FORWARDED_FOR */
  int balance_on_multiple_ip;
  int relaxed_header_parser;
  int accel_uses_host_header;
  int accel_no_pmtu_disc;
  } onoff;
  acl *aclList;
  
  OK, let's now go to line 1697 (more or less, since we
  have just added a few lines around 594) and
  locate the line:
  
  const char *vary_headers; /* Used when varying
  entities are detected. Chan 
  ges how the store key is calculated */
  
  which should be somewhere around there.
  now insert the following as described by the .rej
  file (remove the + which 
  means ADD)
  
  #if FOLLOW_X_FORWARDED_FOR
   /* XXX a list of IP addresses would be a
 better
  data structure
* than this String */
   String x_forwarded_for_iterator;
  #endif /* FOLLOW_X_FORWARDED_FOR */
  
  so around line 1697 you should now have:
  
  char *peer_login;   /* Configured peer
  login:password */
  time_t lastmod; /* Used on
 refreshes
  */
  const char *vary_headers;   /* Used when
 varying
  entities are detected. 
  Changes how the store key is calculated */
  #if FOLLOW_X_FORWARDED_FOR
  /* 

[squid-users] block streaming audio/video

2005-04-15 Thread saravanan ganapathy
Hai,

How can I block audio/video streaming using squid?

I have only been able to block downloads of audio/video
file extensions and their mime types.

Please help me block audio/video streaming.

Sarav 








Re: [squid-users] How to stop play live audio/video files in the internet

2005-04-15 Thread saravanan ganapathy

--- D  E Radel [EMAIL PROTECTED] wrote:
 
 - Original Message - 
 From: saravanan ganapathy [EMAIL PROTECTED]
 To: squid-users@squid-cache.org
 Sent: Friday, April 15, 2005 7:58 PM
 Subject: [squid-users] How to stop play live
 audio/video files in the 
 internet
 
 
  Hai ,
 
  I have done the following configuration to block
  downloading audio/video file extensions
 
  1) acl audio-video-ext urlpath_regex -i
  \.(mp3|mpeg|avi|wmf|ogg|wav|au|mov)($|\?)
 
  2) acl audio-video rep_mime_type -i ^audio/mpeg$
 
 
  But some of the users play songs online without
  downloading. How to stop it?
  Ex,
 
 http://raaga.com/channels/tamil/movie/T672.html
 
  Please suggest me
 
  Sarav
 
 Also block these filetypes:  .wmv, .wma, .asf, .rm,
 .ram, .smil, .pls, .ra, 
 .rax, .rv., .rvx, .rmx, .rm33j, .rms, .m4a,
 .m4p.
 
 grol 


OK, I included these file types also, but audio/video streaming still
works through the proxy. How can I block audio/video streaming?
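
(A mime-type based sketch that targets the streaming replies themselves
rather than URL extensions; the type list is an assumption, so check
what access.log actually reports. Players using RTSP/MMS on non-HTTP
ports never pass through these rules at all and would need port/CONNECT
blocking instead:)

acl streaming rep_mime_type -i ^video/ ^audio/ ^application/x-mms-framed
http_reply_access deny streaming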

Sarav 



Re: [squid-users] deny_info not working

2005-03-27 Thread saravanan ganapathy

--- Henrik Nordstrom [EMAIL PROTECTED] wrote:
 
 
 On Sat, 26 Mar 2005, saravanan ganapathy wrote:
 
  Hai,
 
  My config looks like
 
  acl audio-video-ext urlpath_regex -i
  \.(mp3|mpeg|avi|wmf|ogg|wav|au|mov)($|\?)
 
  http_access deny audio-video-ext all
  deny_info ERR_NOAUDIO_VIDEO audio-video-ext
 
  Squid blocks mp3 downloads, but my custom deny
  page(ERR_NOAUDIO_VIDEO) is not coming. I have this
  file
  ERR_NOAUDIO_VIDEO in the correct path where squid
  looks.
 
 This is because your accesses are denied by the
 all acl.
 
 Just take away the all acl from your http_access
 deny line and things 
 should be fine..

If I remove 'all' from my acl, it works. It doesn't
work if I add 'worktime' to my acl, as in:

  http_access deny audio-video-ext worktime

Are there any limitations in deny_info like this?
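
(The likely explanation, as with the 'all' case above: deny_info selects
the error page from the last ACL named on the matching http_access deny
line. Putting the page's ACL last should work; a sketch with the same
acl names:)

  http_access deny worktime audio-video-ext
  deny_info ERR_NOAUDIO_VIDEO audio-video-ext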

Sarav 






RE: [squid-users] mime type based blocking on squid

2005-03-26 Thread saravanan ganapathy

--- Chris Robertson [EMAIL PROTECTED] wrote:
  -Original Message-
  From: saravanan ganapathy
 [mailto:[EMAIL PROTECTED]
  Sent: Friday, March 25, 2005 4:15 AM
  To: squid-users@squid-cache.org
  Subject: [squid-users] mime type based blocking on
 squid 
  
  
  Hai,
  
   I configured the following in my
  squid-2.5.STABLE9
  
  acl audiomime rep_mime_type -i
  ^application/audio/mpeg$
  acl audiomime1 rep_mime_type -i
 application/audio/mpeg
  
  
  http_access deny audiomime all
  http_access deny audiomime1 all
  
  http_reply_access deny audiomime all
  http_reply_access deny audiomime1 all
  
  But its not working. Still my squid allows
 audio/mpeg
  type of downloads. The squid log shows the correct
  file type (audio/mpeg). But it is not denied.
  
  What would be the problem?
  
  Sarav 
 
 Currently you are blocking a mime_type of
 application/audio/mpeg, when you
 should be blocking audio/mpeg.  As you said, the
 squid log shows the
 correct file type.
 
 Chris

Thanks Chris.

I have changed my acl to

acl audio-video rep_mime_type -i ^audio/mpeg$
acl audio-video rep_mime_type -i ^audio/x-mpeg$

http_reply_access deny audio-video all

and it's working fine. Is there any way to use
deny_info with http_reply_access?
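
(deny_info is keyed to the ACL name, so the same directive form should
apply when the deny comes from http_reply_access; a sketch, with the
caveat that whether a custom page can replace an already-started reply
may depend on the Squid version:)

acl audio-video rep_mime_type -i ^audio/mpeg$ ^audio/x-mpeg$
http_reply_access deny audio-video
deny_info ERR_NOAUDIO_VIDEO audio-video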

Sarav 



 





[squid-users] deny_info not working

2005-03-26 Thread saravanan ganapathy
Hai,

My config looks like 

acl audio-video-ext urlpath_regex -i
\.(mp3|mpeg|avi|wmf|ogg|wav|au|mov)($|\?)

http_access deny audio-video-ext all
deny_info ERR_NOAUDIO_VIDEO audio-video-ext

Squid blocks mp3 downloads, but my custom deny
page(ERR_NOAUDIO_VIDEO) is not coming. I have this
file 
ERR_NOAUDIO_VIDEO in the correct path where squid
looks. 

How to troubleshoot?

Sarav 




[squid-users] mime type based blocking on squid

2005-03-25 Thread saravanan ganapathy
Hai,

I configured the following in my squid-2.5.STABLE9

acl audiomime rep_mime_type -i
^application/audio/mpeg$
acl audiomime1 rep_mime_type -i application/audio/mpeg


http_access deny audiomime all
http_access deny audiomime1 all

http_reply_access deny audiomime all
http_reply_access deny audiomime1 all

But it's not working; squid still allows audio/mpeg
downloads. The squid log shows the correct
file type (audio/mpeg), but it is not denied.

What could be the problem?

Sarav 




Re: [squid-users] mime type based blocking on squid

2005-03-25 Thread saravanan ganapathy

--- saravanan ganapathy [EMAIL PROTECTED] wrote:
 Hai,
 
 I configured the following in my squid-2.5.STABLE9
 
 acl audiomime rep_mime_type -i
 ^application/audio/mpeg$
 acl audiomime1 rep_mime_type -i
 application/audio/mpeg
 
 
 http_access deny audiomime all
 http_access deny audiomime1 all
 
 http_reply_access deny audiomime all
 http_reply_access deny audiomime1 all
 
 But its not working. Still my squid allows
 audio/mpeg
 type of downloads. The squid log shows the correct
 file type (audio/mpeg). But it is not denied.
 
 What would be the problem?


Any help, please?

Sarav





[squid-users] Re: [dansguardian] x-forwarded-for patch install problem

2005-03-13 Thread saravanan ganapathy

--- Lucia Di Occhi [EMAIL PROTECTED] wrote:

 I am posting this on both dansguardian and squid
 lists so that it can help 
 anyone with the x-forwarded-for patch.
 
 Download squid-2.5.STABLE9.tar.gz and
 follow_xff-2.5.STABLE5.patch on /tmp
 Extract the squid tar file with: tar xvfz
 squid-2.5.STABLE9.tar.gz
 copy follow_xff-2.5.STABLE5.patch to
 /tmp/squid-2.5.STABLE9
 cd to /tmp/squid-2.5.STABLE9 and execute:
 patch -p0 < follow_xff-2.5.STABLE5.patch
 
 you should get the following errors:
 
 FedoraCore2[/tmp/squid-2.5.STABLE9] patch -p0 < follow_xff-2.5.STABLE5.patch
 patching file acconfig.h
 patching file bootstrap.sh
 Hunk #1 succeeded at 66 (offset 7 lines).
 patching file configure.in
 Hunk #1 succeeded at 1128 (offset 28 lines).
 patching file src/acl.c
 Hunk #1 succeeded at 2147 (offset 107 lines).
 patching file src/cf.data.pre
 Hunk #1 succeeded at 2144 (offset 29 lines).
 patching file src/client_side.c
 Hunk #2 succeeded at 185 (offset 2 lines).
 Hunk #4 succeeded at 3308 (offset 58 lines).
 patching file src/delay_pools.c
 patching file src/structs.h
 Hunk #1 FAILED at 594.
 Hunk #2 succeeded at 634 (offset 14 lines).
 Hunk #3 succeeded at 1621 (offset 2 lines).
 Hunk #4 succeeded at 1684 (offset 14 lines).
 Hunk #5 FAILED at 1697.
 2 out of 5 hunks FAILED -- saving rejects to file
 src/structs.h.rej
 
 This means that two hunks (parts) of the patch
 failed to patch src/structs.h 
 at around lines 594 and 1697.  Now look at the
 src/structs.h.rej which 
 should look like this:
 
 ***
 *** 594,599 
 int pipeline_prefetch;
 int request_entities;
 int detect_broken_server_pconns;
   } onoff;
   acl *aclList;
   struct {
 --- 594,604 
 int pipeline_prefetch;
 int request_entities;
 int detect_broken_server_pconns;
 + #if FOLLOW_X_FORWARDED_FOR
 +int acl_uses_indirect_client;
 +int delay_pool_uses_indirect_client;
 +int log_uses_indirect_client;
 + #endif /* FOLLOW_X_FORWARDED_FOR */
   } onoff;
   acl *aclList;
   struct {
 ***
 *** 1681,1686 
   char *peer_login; /* Configured peer
 login:password */
   time_t lastmod;   /* Used on refreshes
 */
   const char *vary_headers; /* Used when varying
 entities are detected. 
 Chan
 ges how the store key is calculated */
   };
 
   struct _cachemgr_passwd {
 --- 1697,1707 
   char *peer_login; /* Configured peer
 login:password */
   time_t lastmod;   /* Used on refreshes
 */
   const char *vary_headers; /* Used when varying
 entities are detected. 
 Chan
 ges how the store key is calculated */
 + #if FOLLOW_X_FORWARDED_FOR
 + /* XXX a list of IP addresses would be a
 better data structure
 +  * than this String */
 + String x_forwarded_for_iterator;
 + #endif /* FOLLOW_X_FORWARDED_FOR */
   };
 
   struct _cachemgr_passwd {
 
 As you can see the patch has found some 'issues' on
 line 594 where it was 
 expecting something that it did not find.  No
 problem, just open 
 src/structs.h with 'vi' and go to line 594 and
 locate the line:
 
 int detect_broken_server_pconns;
 
 which should be somewhere around there.
 now insert the following as described by the .rej
 file (remove the + which 
 means ADD)
 
 #if FOLLOW_X_FORWARDED_FOR
 int acl_uses_indirect_client;
 int delay_pool_uses_indirect_client;
 int log_uses_indirect_client;
 #endif /* FOLLOW_X_FORWARDED_FOR */
 
 so around line 594 you should now have:
 
 int detect_broken_server_pconns;
 #if FOLLOW_X_FORWARDED_FOR
 int acl_uses_indirect_client;
 int delay_pool_uses_indirect_client;
 int log_uses_indirect_client;
 #endif /* FOLLOW_X_FORWARDED_FOR */
 int balance_on_multiple_ip;
 int relaxed_header_parser;
 int accel_uses_host_header;
 int accel_no_pmtu_disc;
 } onoff;
 acl *aclList;
 
 OK, let's now go to line 1697 (more or less since we
 have just added a few 
 lines around 594)
 locate the line:
 
 const char *vary_headers; /* Used when varying
 entities are detected. Chan 
 ges how the store key is calculated */
 
 which should be somewhere around there.
 now insert the following as described by the .rej
 file (remove the + which 
 means ADD)
 
 #if FOLLOW_X_FORWARDED_FOR
  /* XXX a list of IP addresses would be a better
 data structure
   * than this String */
  String x_forwarded_for_iterator;
 #endif /* FOLLOW_X_FORWARDED_FOR */
 
 so around line 1697 you should now have:
 
 char *peer_login;   /* Configured peer
 login:password */
 time_t lastmod; /* Used on refreshes
 */
 const char *vary_headers;   /* Used when varying
 entities are detected. 
 Changes how the store key is calculated */
 #if FOLLOW_X_FORWARDED_FOR
 /* XXX a list of IP addresses would be a better
 data structure
  * than this String */
 String x_forwarded_for_iterator;
 

Re: [squid-users] x-forwarded-for patch install problem

2005-03-11 Thread saravanan ganapathy

--- saravanan ganapathy [EMAIL PROTECTED] wrote:
 
 --- Henrik Nordstrom [EMAIL PROTECTED] wrote:
  
  
  On Wed, 9 Mar 2005, saravanan ganapathy wrote:
  
   Hand edit the files, adding the changes patch
  could
   not automatically
   figure out what to do with (failed/rejected).
  
  
   What are the files to be edited? What are all
 the
   changes to be done?
  
  See the output of the patch command. There is two
  filenames mentioned...
  
  patching file src/structs.h
  2 out of 5 hunks FAILED -- saving rejects to
  file src/structs.h.rej
 
 
  Really, I don't know what needs to be changed in
  src/structs.h and src/structs.h.rej.

  Please help me.

 Sarav

I tried to find docs on the net, but couldn't.

I hope some of you have already done this configuration. Can
you please help me?

Sarav 





[squid-users] x-forwarded-for patch install problem

2005-03-09 Thread saravanan ganapathy
Hai,

When I tried to apply follow_xff-2.5.patch to
squid-2.5.STABLE9, I got the following error:

patching file src/structs.h
Hunk #1 FAILED at 592.
Hunk #2 succeeded at 634 (offset 16 lines).
Hunk #3 succeeded at 1619 (offset 7 lines).
Hunk #4 succeeded at 1679 (offset 16 lines).
Hunk #5 FAILED at 1692.
2 out of 5 hunks FAILED -- saving rejects to file
src/structs.h.rej

How to solve this problem?

PS : I am using redhat9.0

Sarav 







Re: [squid-users] x-forwarded-for patch install problem

2005-03-09 Thread saravanan ganapathy

--- Henrik Nordstrom [EMAIL PROTECTED] wrote:
 
 
 On Wed, 9 Mar 2005, saravanan ganapathy wrote:
 
  Hai
 
  When I tried to apply follow_xff-2.5.patch on
  squid-2.5.STABLE9 , I am getting the following
 error
 
  patching file src/structs.h
  Hunk #1 FAILED at 592.
  Hunk #2 succeeded at 634 (offset 16 lines).
  Hunk #3 succeeded at 1619 (offset 7 lines).
  Hunk #4 succeeded at 1679 (offset 16 lines).
  Hunk #5 FAILED at 1692.
  2 out of 5 hunks FAILED -- saving rejects to file
  src/structs.h.rej
 
  How to solve this problem?
 
 Hand edit the files, adding the changes patch could
 not automatically 
 figure out what to do with (failed/rejected).
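
 (A sketch of how to locate the spots to hand-edit: the .rej file shows
 the failed hunks, and grep finds the context lines they expect:)

   cat src/structs.h.rej
   grep -n 'detect_broken_server_pconns' src/structs.h
   grep -n 'vary_headers' src/structs.h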


Which files need to be edited, and what changes need
to be made?

Can you please help me with this?

Sarav 



[squid-users] custom acl for file upload

2005-01-03 Thread saravanan ganapathy
Hai,
  I restrict the upload file size in squid using
request_body_max_size 1 MB, but I want to increase the
limit (say, to 3 MB) for some sites only.

How do I write an acl for this?

Please help me.

Note: I am using squid-2.4.STABLE6-6.7.3. Due to some
dependency, I am not upgrading to 2.5, so I need a
solution for my current version.

Sarav





[squid-users] pac implementation

2004-09-07 Thread saravanan ganapathy
Hai,
   I have been using squid-2.4.STABLE6-6.7.3 on RH7.2 for
some time, and now I would like to implement a pac file with
it. The goal is to use the proxy for everything except the
local hosts in my domain.
 I have already configured most of my clients to bypass the
proxy for my domain. But if I use a pac file, even though the
local sites should not go through the proxy, will every
request still hit the proxy server? Am I correct? Will the
proxy server's load increase due to the pac implementation?

Please guide me.
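
(For reference, a minimal pac sketch, with the domain and proxy names as
placeholders. The browser evaluates this function locally, so requests
that return DIRECT never contact the proxy at all; local traffic adds
no proxy load:)

function FindProxyForURL(url, host) {
    // hypothetical local domain; replace with the real one
    if (isPlainHostName(host) || dnsDomainIs(host, ".mydomain.com"))
        return "DIRECT";    // local hosts bypass the proxy entirely
    return "PROXY proxy.mydomain.com:3128; DIRECT";  // everything else via squid
}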

Sarav
  

   





[squid-users] mime type based extension blocking

2004-05-23 Thread saravanan ganapathy
Hai,

I am very new to this group and couldn't find an answer
to my query in the archives.

I want to block certain extensions from being
downloaded (for example, exe).
It works fine with the following rule:

acl exe-filter urlpath_regex -i \.exe\?*
http_access deny exe-filter

But it also blocks urls which merely contain 'exe' in them,
even though they are not exe downloads. I heard that this
issue can be solved by moving to squid 2.5 and using
http_reply_access with rep_mime_type.

Can you please send me the correct syntax to use for
my case? 

Please help me
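
(A sketch of the 2.5-style approach; the mime types listed are common
for exe downloads but are assumptions, so check what access.log reports.
Anchoring the urlpath_regex also stops it matching 'exe' mid-URL:)

acl exe-url urlpath_regex -i \.exe($|\?)
acl exe-reply rep_mime_type -i ^application/octet-stream$ ^application/x-msdownload$
http_access deny exe-url
http_reply_access deny exe-reply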

Sarav



