[squid-users] Re: icp_query_timeout directive is not working in 3.3.8 for some reason

2013-08-24 Thread x-man
Hi Guys,

actually the whole problem turned out to be caused by the ICP communication
between squid and our cache peer, which also supports ICP. When we have
multiple workers (more than 1), the ICP replies were not finding their way
back to the proper worker, so that's why the TIMEOUTS were happening -
probably because this is UDP communication.

After adding the following simple config 

# different icp port per worker in order to make it work
icp_port 313${process_number}



now everything is working fine in a multi-worker environment with our
cache_peer.
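For reference, a minimal SMP sketch along these lines (the HTTP and ICP ports
on the cache_peer line are placeholders for our setup; 10.240.254.50 is the
peer address from the access.log excerpts later in this thread):

# two kids, each with its own ICP socket, so UDP replies return
# to the worker that actually sent the query
workers 2
icp_port 313${process_number}

# hypothetical peer line: HTTP on 8080, ICP on 3130
cache_peer 10.240.254.50 parent 8080 3130 proxy-only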

 



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/icp-query-timeout-directive-is-not-working-in-3-3-8-for-some-reason-tp4661324p4661754.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: squid 3.2.0.14 with TPROXY = commBind: Cannot bind socket FD 773 to xxx.xxx.xxx.xx: (98) Address

2013-08-24 Thread x-man
Hi Amos,

I have exactly the same issue as described above.

Running squid 3.3.8 in TPROXY mode. 

In my setup the squid is serving around 1 online subscribers, and this
problem happens when I redirect the whole HTTP traffic through it. If I'm
redirecting only half of the users - then it works fine.

I guess it's something related to limits imposed by the OS or by Squid
itself. Please help to identify the exact bottleneck of this issue, because
this is a scalability issue.

squidclient mgr:info |grep HTTP
HTTP/1.1 200 OK
Number of HTTP requests received:   1454792
Average HTTP requests per minute since start:   116719.5

squidclient mgr:info |grep file
Maximum number of file descriptors:   524288
Largest file desc currently in use:   132904
Number of file desc currently in use: 80893
Available number of file descriptors: 443395
Reserved number of file descriptors:   800
Store Disk files open:   0
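
Since the bind failures say "(98) Address already in use" rather than pointing
at descriptor exhaustion, it may be the local ports that run out (each TPROXY
connection spoofs a client address, so the outgoing 4-tuples add up). A quick
generic check, just a sketch, nothing squid-specific:

# distribution of socket states; a huge TIME_WAIT count points at port churn
ss -tan | awk '{print $1}' | sort | uniq -c

# the range of local ports the kernel may use for outgoing connections
cat /proc/sys/net/ipv4/ip_local_port_range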

ulimit -a from the OS

core file size  (blocks, -c) unlimited
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 386229
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files  (-n) 100
pipe size   (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) 386229
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited

Some tunings applied also, but not helping much:

echo applying specific tunings
echo "500 65535" > /proc/sys/net/ipv4/ip_local_port_range
echo 65000 > /proc/sys/net/ipv4/tcp_max_syn_backlog
echo 600 > /proc/sys/net/ipv4/tcp_keepalive_time
echo 5 > /proc/sys/net/core/netdev_max_backlog

echo 15 > /proc/sys/net/ipv4/tcp_keepalive_intvl
echo 5 > /proc/sys/net/ipv4/tcp_keepalive_probes
echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse       # it's ok
echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle     # it's ok
echo 200 > /proc/sys/net/ipv4/tcp_max_tw_buckets   # default 262144 on Ubuntu
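
The same values survive a reboot if kept in /etc/sysctl.conf - an equivalent
sketch of exactly the settings above (apply with: sysctl -p):

net.ipv4.ip_local_port_range = 500 65535
net.ipv4.tcp_max_syn_backlog = 65000
net.ipv4.tcp_keepalive_time = 600
net.core.netdev_max_backlog = 5
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_keepalive_probes = 5
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_tw_buckets = 200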


Let me know what other info might be useful for you.




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/squid-3-2-0-14-with-TPROXY-commBind-Cannot-bind-socket-FD-773-to-xxx-xxx-xxx-xx-98-Address-tp4225143p4661755.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: icp_query_timeout directive is not working in 3.3.8 for some reason

2013-08-12 Thread x-man
Hi Amos,

we were actually thinking of trying something else, which is:

Configure each squid worker to interact with a different cache peer, each
listening on a different port.

We will start multiple instances of our cache peer software listening on
different ports as well.

This way every squid kid will talk to a particular cache peer, and I hope
this way there will be no problem with the return traffic from the cache peer
to the squid kid.

Let me know your thoughts on this.

Also, what is the best way in squid config to put some specific settings for
a particular kid? I saw some examples with if clauses - is this working in
3.3.8?
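
For the record, squid.conf does support simple conditionals keyed on the kid
number, so something like the following sketch should select a per-worker
peer (the loopback peers and their ports are made up for illustration):

# hypothetical per-kid peers: kid1 -> port 8001, kid2 -> port 8002
if ${process_number} = 1
cache_peer 127.0.0.1 parent 8001 0 proxy-only no-query
endif
if ${process_number} = 2
cache_peer 127.0.0.1 parent 8002 0 proxy-only no-query
endif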

Thanks in advance.




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/icp-query-timeout-directive-is-not-working-in-3-3-8-for-some-reason-tp4661324p4661527.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: icp_query_timeout directive is not working in 3.3.8 for some reason

2013-08-08 Thread x-man
Hi Amos,

please let me know how to get your patch related to the above.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/icp-query-timeout-directive-is-not-working-in-3-3-8-for-some-reason-tp4661324p4661500.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: icp_query_timeout directive is not working in 3.3.8 for some reason

2013-08-06 Thread x-man
Hi Amos,

thanks for the valuable input from your side.

I did further testing and I finally got to the problem. 

The problem is really happening with the new version of squid, but only when
I use workers > 1. If I keep a single worker, then everything is ok. Once I
have multiple squid workers, somehow the responses from the cache peer are
not reaching back to (probably) one of the workers, OR all responses are
going back to only one worker - that's why this TIMEOUT_FIRSTUP_PARENT is
happening.

Now the question is why it's happening like this, because my firewall config
is unchanged. I'm using tproxy for both squid and my cache peer. The squid
and my cache peer are running on the same machine.

Can you provide some insights based on the above?





--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/icp-query-timeout-directive-is-not-working-in-3-3-8-for-some-reason-tp4661324p4661456.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: icp_query_timeout directive is not working in 3.3.8 for some reason

2013-08-06 Thread x-man
Hi Amos,

it's not only the ICP - this is affecting the communication to the cache
peer, so if I allocate two different ports for ICP and for the peer as well,
then I need to run 2 cache peers, each with separate storage, and they will
obviously have duplicate contents...

I would rather try your patch in this case... can you provide me with the
patch, so I can patch my 3.3.8 version, compile it and eventually test it :)

Thanks in advance



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/icp-query-timeout-directive-is-not-working-in-3-3-8-for-some-reason-tp4661324p4661478.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: icp_query_timeout directive is not working in 3.3.8 for some reason

2013-08-03 Thread x-man
Hi Amos,

I think this request time is the time needed to serve the entire request.

How is icp_query_timeout related to that? It should only be about the query
through the ICP protocol.

Otherwise, we are using our own cache peer which deals with the youtube
content and supports the ICP protocol; it's connected to squid as a cache
peer, and squid (based on an ACL) is sending youtube requests to it.

I'm comparing squid 3.1.9 and 3.3.8, and what I notice is that, without
changing any other element of the system and

with icp_query_timeout 9000 set for both test cases,

with squid 3.1 I don't get any TIMEOUT_FIRSTUP_PARENT in the access.log,
while with squid 3.3.8 I'm getting lots of them, and this is reducing our
performance.

Please suggest what the difference could be and what I can check further.




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/icp-query-timeout-directive-is-not-working-in-3-3-8-for-some-reason-tp4661324p4661417.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: icp_query_timeout directive is not working in 3.3.8 for some reason

2013-08-02 Thread x-man
Hi,

any insights on this will be more than welcome :)



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/icp-query-timeout-directive-is-not-working-in-3-3-8-for-some-reason-tp4661324p4661411.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Squid 3.3.6 FATAL: Received Segment Violation...dying.

2013-07-30 Thread x-man
I produced another binary by compiling on another platform - this time Ubuntu
12.04 - and it looks like this new binary doesn't have this problem...
strange enough.

It was compiled with all the same options/flags.





--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-3-3-6-FATAL-Received-Segment-Violation-dying-tp4661026p4661321.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] icp_query_timeout directive is not working in 3.3.8 for some reason

2013-07-30 Thread x-man
I'm setting icp_query_timeout 9000 because my cache peer responds a little
slowly, and this setting was doing a great job in squid 3.1; but after
migrating a test installation to 3.3, I again started seeing this in the
access log:

30/Jul/2013:15:12:54 +0530  28700 115.127.14.209 TCP_MISS/200 238022 GET
http://r14---sn-cvh7zn7r.c.youtube.com/videoplayback? -
TIMEOUT_FIRSTUP_PARENT/10.240.254.50 application/octet-stream
30/Jul/2013:15:12:54 +0530  23079 115.127.17.13 TCP_MISS/200 242118 GET
http://r13---sn-cvh7zn7k.c.youtube.com/videoplayback? -
TIMEOUT_FIRSTUP_PARENT/10.240.254.50 application/octet-stream
30/Jul/2013:15:12:54 +0530  22254 115.127.19.239 TCP_MISS/200 9663 GET
http://r14---sn-cvh7zn7d.c.youtube.com/videoplayback? -
TIMEOUT_FIRSTUP_PARENT/10.240.254.50 application/octet-stream
30/Jul/2013:15:12:54 +0530  22283 115.127.17.80 TCP_MISS/200 14214 GET
http://r13---sn-cvh7zn76.c.youtube.com/videoplayback? -
TIMEOUT_FIRSTUP_PARENT/10.240.254.50 application/octet-stream


This same setup was working fine with the squid 3.1 version; all the rest of
the settings are the same.

Can this icp_query_timeout directive be broken in recent versions, or can it
be related to or dependent on some compilation setting?



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/icp-query-timeout-directive-is-not-working-in-3-3-8-for-some-reason-tp4661324.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Squid 3.3.6 FATAL: Received Segment Violation...dying.

2013-07-24 Thread x-man
cache.zip:
http://squid-web-proxy-cache.1019090.n4.nabble.com/file/n4661225/cache.zip


Hi,

I've attached a zipped version of my cache.log file which clearly shows 3
occurrences of FATAL: Received Segment Violation...dying.

This session is very small - only 2-3 minutes of traffic for 1000 users - but
the same happens in lab tests with 5 users, it just takes more time...

I tested squid 3.3.7 - the same is happening.

Squid Cache: Version 3.3.7
configure options:  '--target=x86_64-pc-linux-gnu'
'--host=x86_64-pc-linux-gnu' '--build=x86_64-linux' '--prefix=/usr'
'--sysconfdir=/etc/squid3' '--localstatedir=/var'
'--libexecdir=/usr/lib/squid3' '--disable-maintainer-mode'
'--disable-dependency-tracking' '--disable-silent-rules'
'--datadir=/usr/share/squid3' '--enable-inline' '--enable-async-io=8'
'--enable-storeio=ufs,aufs,diskd' '--enable-removal-policies=lru,heap'
'--enable-delay-pools' '--enable-cache-digests' '--enable-underscores'
'--enable-icap-client' '--enable-follow-x-forwarded-for' '--enable-auth'
'--enable-auth-basic' '--enable-auth-digest' '--enable-auth-ntlm'
'--enable-auth-negotiate'
'--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SASL,SMB,YP,DB,POP3,getpwnam,squid_radius_auth,multi-domain-NTLM'
'--enable-ntlm-auth-helpers=smb_lm,'
'--enable-digest-auth-helpers=ldap,password'
'--enable-negotiate-auth-helpers=squid_kerb_auth'
'--enable-external-acl-helpers=ip_user,ldap_group,unix_group,wbinfo_group'
'--enable-arp-acl' '--enable-esi' '--enable-zph-qos' '--enable-wccpv2'
'--enable-ecap' '--disable-translation' '--with-logdir=/var/log/squid3'
'--with-pidfile=/var/run/squid3.pid' '--with-filedescriptors=65536'
'--with-large-files' '--with-default-user=proxy' '--enable-linux-netfilter'
'build_alias=x86_64-linux' 'host_alias=x86_64-pc-linux-gnu'
'target_alias=x86_64-pc-linux-gnu'
'CC=/flashos/build64/ct/x86_64-pc-linux-gnu/bin/x86_64-pc-linux-gnu-gcc'
'CFLAGS=-pipe -O2 -Wall -march=core2 -D_LARGEFILE_SOURCE
-D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -fno-stack-protector
-mno-stackrealign -I/flashos/s64/include -I/flashos/s64/usr/include -fPIE
-fstack-protector --param=ssp-buffer-size=4' 'LDFLAGS=-L/flashos/s64/lib
-L/flashos/s64/usr/lib' 'CPPFLAGS=-pipe -O2 -Wall -march=core2
-D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64
-fno-stack-protector -mno-stackrealign -I/flashos/s64/include
-I/flashos/s64/usr/include -D_FORTIFY_SOURCE=2'
'CXX=/flashos/build64/ct/x86_64-pc-linux-gnu/bin/x86_64-pc-linux-gnu-g++'
'CXXFLAGS=-pipe -O2 -Wall -march=core2 -D_LARGEFILE_SOURCE
-D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -fno-stack-protector
-mno-stackrealign -I/flashos/s64/include -I/flashos/s64/usr/include -fPIE
-fstack-protector --param=ssp-buffer-size=4'
'EXT_LIBECAP_CFLAGS=-I/flashos/s64/usr/include' 'EXT_LIBECAP_LIBS=-lecap'
--enable-ltdl-convenience




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-3-3-6-FATAL-Received-Segment-Violation-dying-tp4661026p4661225.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Squid 3.3.5 problems with Tproxy

2013-07-12 Thread x-man
Yeah, it's self-compiled... that's why I got some issues :)

Btw, any idea where to get the default squid.conf file for the 3.3.6 version?





--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-3-3-5-problems-with-Tproxy-tp4660968p4661023.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Squid 3.3.6 FATAL: Received Segment Violation...dying.

2013-07-12 Thread x-man
Hi,

I've been running squid 3.3.6 for a short time with a simple TPROXY
configuration which ran perfectly on 3.1.19 for a very long time. Now, with
3.3.6 started with workers 2, I noticed that workers are dying frequently,
and what I see in cache.log is:

FATAL: Received Segment Violation...dying.


What can be the reason for this behavior, even considering that the traffic
is very low, since I'm testing in an office environment with only a few PCs
connected?





--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-3-3-6-FATAL-Received-Segment-Violation-dying-tp4661026.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Squid 3.3.5 problems with Tproxy

2013-07-11 Thread x-man
Thanks Amos,

the issue was related to /var/run/squid 

Once I created that folder and allowed the squid user to write there, the
issue was resolved.
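
For anyone hitting the same thing, this is roughly all it took (assuming
squid runs as the 'proxy' user, per the --with-default-user=proxy build
option quoted elsewhere in this thread):

# create the runtime dir squid expects and let the squid user write there
mkdir -p /var/run/squid
chown proxy:proxy /var/run/squid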




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-3-3-5-problems-with-Tproxy-tp4660968p4660992.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Squid 3.3.5 problems with Tproxy

2013-07-10 Thread x-man
Hi,

it was a compilation error, and after fixing it, squid now works fine when
using a single process...

I got problems when trying to use workers.

Listen settings are like this: 

http_port 3128 
http_port 8080 transparent
http_port 8081 tproxy

workers 2

Is the above correct, and do I need to specify more ports with if/else
clauses for the different workers?

And now the actual issue is that I get these in my log files:

2013/07/10 18:41:15 kid2| commBind: Cannot bind socket FD 19 to [::]: (2) No
such file or directory
...
2013/07/10 18:41:12 kid1| commBind: Cannot bind socket FD 18 to [::]: (2) No
such file or directory
...

here are my options that squid is compiled with:
squid3 -v

Squid Cache: Version 3.3.6
configure options:  '--target=x86_64-pc-linux-gnu'
'--host=x86_64-pc-linux-gnu' '--build=x86_64-linux' '--prefix=/usr'
'--sysconfdir=/etc/squid3' '--localstatedir=/var'
'--libexecdir=/usr/lib/squid3' '--disable-maintainer-mode'
'--disable-dependency-tracking' '--disable-silent-rules'
'--datadir=/usr/share/squid3' '--enable-inline' '--enable-async-io=8'
'--enable-storeio=ufs,aufs,diskd' '--enable-removal-policies=lru,heap'
'--enable-delay-pools' '--enable-cache-digests' '--enable-underscores'
'--enable-icap-client' '--enable-follow-x-forwarded-for' '--enable-auth'
'--enable-auth-basic' '--enable-auth-digest' '--enable-auth-ntlm'
'--enable-auth-negotiate'
'--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SASL,SMB,YP,DB,POP3,getpwnam,squid_radius_auth,multi-domain-NTLM'
'--enable-ntlm-auth-helpers=smb_lm,'
'--enable-digest-auth-helpers=ldap,password'
'--enable-negotiate-auth-helpers=squid_kerb_auth'
'--enable-external-acl-helpers=ip_user,ldap_group,unix_group,wbinfo_group'
'--enable-arp-acl' '--enable-esi' '--enable-zph-qos' '--enable-wccpv2'
'--enable-ecap' '--disable-translation' '--with-logdir=/var/log/squid3'
'--with-pidfile=/var/run/squid3.pid' '--with-filedescriptors=65536'
'--with-large-files' '--with-default-user=proxy' '--enable-linux-netfilter'
'build_alias=x86_64-linux' 'host_alias=x86_64-pc-linux-gnu'
'target_alias=x86_64-pc-linux-gnu'
'CC=/flashos/build64/ct/x86_64-pc-linux-gnu/bin/x86_64-pc-linux-gnu-gcc'
'CFLAGS=-pipe -O2 -Wall -march=core2 -D_LARGEFILE_SOURCE
-D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -fno-stack-protector
-mno-stackrealign -I/flashos/s64/include -I/flashos/s64/usr/include -fPIE
-fstack-protector --param=ssp-buffer-size=4' 'LDFLAGS=-L/flashos/s64/lib
-L/flashos/s64/usr/lib' 'CPPFLAGS=-pipe -O2 -Wall -march=core2
-D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64
-fno-stack-protector -mno-stackrealign -I/flashos/s64/include
-I/flashos/s64/usr/include -D_FORTIFY_SOURCE=2'
'CXX=/flashos/build64/ct/x86_64-pc-linux-gnu/bin/x86_64-pc-linux-gnu-g++'
'CXXFLAGS=-pipe -O2 -Wall -march=core2 -D_LARGEFILE_SOURCE
-D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -fno-stack-protector
-mno-stackrealign -I/flashos/s64/include -I/flashos/s64/usr/include -fPIE
-fstack-protector --param=ssp-buffer-size=4'
'EXT_LIBECAP_CFLAGS=-I/flashos/s64/usr/include' 'EXT_LIBECAP_LIBS=-lecap'
--enable-ltdl-convenience

Can you give any idea what could be wrong, and where is this directory that
squid expects to find but cannot actually find?

--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-3-3-5-problems-with-Tproxy-tp4660968p4660974.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Squid 3.3.5 problems with Tproxy

2013-07-09 Thread x-man
Hello,

I recently migrated from squid 3.1.19 to 3.3.5; under 3.1.19 I had a perfect
TPROXY setup working, and now the very same config is failing in 3.3.5.

This is the log I managed to get.

91.239.13.61 is the web site I'm trying to open,

and 192.168.1.106 is my test PC's IP address.

The browser shows the error: Error 324 (net::ERR_EMPTY_RESPONSE): The server
closed the connection without sending any data.

What can be the problem, and is it somehow related to this BUG #3329:
---
2013/07/09 19:39:04.523 kid1| Connection.cc(32) ~Connection: BUG #3329:
Orphan Comm::Connection: local=91.239.13.61:80 remote=192.168.1.106:59222 FD
12 flags=17
-



2013/07/09 19:39:04.523 kid1| Intercept.cc(364) Lookup: address BEGIN:
me/client= 91.239.13.61:80, destination/me= 192.168.1.106:59222
2013/07/09 19:39:04.523 kid1| TcpAcceptor.cc(264) acceptOne: Listener:
local=[::]:8081 remote=[::] FD 15 flags=25 accepted new connection
local=91.239.13.61:80 remote=192.168.1.106:59222 FD 12 flags=17 handler
Subscription: 0x254b210*1
2013/07/09 19:39:04.523 kid1| AsyncCall.cc(18) AsyncCall: The AsyncCall
httpAccept constructed, this=0x25529e0 [call141]
2013/07/09 19:39:04.523 kid1| cbdata.cc(419) cbdataInternalLock: cbdataLock:
0x21920e8=3
2013/07/09 19:39:04.523 kid1| AsyncCall.cc(85) ScheduleCall:
TcpAcceptor.cc(292) will call httpAccept(local=91.239.13.61:80
remote=192.168.1.106:59222 FD 12 flags=17, flag=-1, data=0x21920e8)
[call141]
2013/07/09 19:39:04.523 kid1| ModEpoll.cc(139) SetSelect: FD 15, type=1,
handler=1, client_data=0x254cc58, timeout=0
2013/07/09 19:39:04.523 kid1| AsyncCallQueue.cc(51) fireNext: entering
httpAccept(local=91.239.13.61:80 remote=192.168.1.106:59222 FD 12 flags=17,
flag=-1, data=0x21920e8)
2013/07/09 19:39:04.523 kid1| AsyncCall.cc(30) make: make call httpAccept
[call141]
2013/07/09 19:39:04.523 kid1| cbdata.cc(510) cbdataReferenceValid:
cbdataReferenceValid: 0x21920e8
2013/07/09 19:39:04.523 kid1| client_side.cc(3335) httpAccept: httpAccept:
local=[::]:8081 remote=[::] FD 15 flags=25: accept failure: (0) No error.
2013/07/09 19:39:04.523 kid1| AsyncCallQueue.cc(53) fireNext: leaving
httpAccept(local=91.239.13.61:80 remote=192.168.1.106:59222 FD 12 flags=17,
flag=-1, data=0x21920e8)
2013/07/09 19:39:04.523 kid1| cbdata.cc(456) cbdataInternalUnlock:
cbdataUnlock: 0x21920e8=2
2013/07/09 19:39:04.523 kid1| Connection.cc(32) ~Connection: BUG #3329:
Orphan Comm::Connection: local=91.239.13.61:80 remote=192.168.1.106:59222 FD
12 flags=17

Please share your thoughts




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-3-3-5-problems-with-Tproxy-tp4660968.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] NULL characters in Header - how to get which sites generate this?

2012-09-13 Thread x-man
Hi,

my cache.log file is full of these kinds of messages:

Content-Type: application/x-www-form-urlencoded
2012/09/13 17:39:35| WARNING: HTTP header contains NULL characters {Accept:
*/*
Content-Type: application/x-www-form-urlencoded}
NULL
{Accept: */*
Content-Type: application/x-www-form-urlencoded
2012/09/13 17:39:35| WARNING: HTTP header contains NULL characters {Accept:
*/*
Content-Type: application/x-www-form-urlencoded}
NULL
{Accept: */*
Content-Type: application/x-www-form-urlencoded

How can I find out which site/sites are generating this, so I can bypass
them...

I'm running squid 3.1.20 as transparent tproxy for many users.
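
One generic way to find the offenders (just a sketch - the interface name and
the client address are placeholders for whatever your setup uses) is to
capture the raw traffic for a suspect client while the warnings appear, then
inspect the requests offline, e.g. in wireshark:

# full-size capture of port-80 traffic from one suspect client
tcpdump -i eth1 -s 0 -w null-headers.pcap 'tcp port 80 and host 192.0.2.1'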

Thanks in advance.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/NULL-characters-in-Header-how-to-get-which-sites-generate-this-tp4656561.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: error:invalid-request

2012-09-13 Thread x-man
Hi,

I have a similar issue.

The question is how to identify the sites that are using such protocols,
unknown to squid, so I can bypass them at the firewall level.

From the log I cannot get the dst IP or dst domain - so how can I get this
info?
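
If I read the logformat codes right (an untested sketch; the log path is a
placeholder), %la/%lp record the local address of the client's TCP
connection, which for intercepted traffic is the original destination - and
that is available even when the request itself cannot be parsed:

# extra access log that records the original dst of intercepted connections
logformat dstlog %ts.%03tu %>a %la:%lp %Ss/%03>Hs %rm %ru
access_log /var/log/squid/dst.log dstlog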



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/error-invalid-request-tp4656552p4656562.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] strange problem in squid with certain sites and strange log

2012-07-05 Thread x-man
Hello,

I'm running squid 3.1.19 in tproxy mode, and I have some customers
complaining that they have problems with stock exchange (share market) sites
and software after squid was implemented. What I saw in the access.log for
one of the customers is:

10.21.8.59 - - [04/Jul/2012:12:01:56 +0530] GET
http://login.angeltrade.com/AngeltradeDiet/DietOdinSingleInterface.asp?
HTTP/1.1 200 3557 TCP_MISS:DIRECT
10.21.8.59 - - [04/Jul/2012:12:01:57 +0530] NONE error:invalid-request
HTTP/0.0 400 4021 NONE:NONE
10.21.8.59 - - [04/Jul/2012:12:01:59 +0530] NONE error:invalid-request
HTTP/0.0 400 4021 NONE:NONE
10.21.8.59 - - [04/Jul/2012:12:02:00 +0530] NONE error:invalid-request
HTTP/0.0 400 4021 NONE:NONE
10.21.8.59 - - [04/Jul/2012:12:02:01 +0530] NONE error:invalid-request
HTTP/0.0 400 4021 NONE:NONE
10.21.8.59 - - [04/Jul/2012:12:02:02 +0530] NONE error:invalid-request
HTTP/0.0 400 4021 NONE:NONE
10.21.8.59 - - [04/Jul/2012:12:02:03 +0530] NONE error:invalid-request
HTTP/0.0 400 4021 NONE:NONE
10.21.8.59 - - [04/Jul/2012:12:02:04 +0530] NONE error:invalid-request
HTTP/0.0 400 4021 NONE:NONE
10.21.8.59 - - [04/Jul/2012:12:02:06 +0530] NONE error:invalid-request
HTTP/0.0 400 4021 NONE:NONE
10.21.8.59 - - [04/Jul/2012:12:02:07 +0530] NONE error:invalid-request
HTTP/0.0 400 4021 NONE:NONE
10.21.8.59 - - [04/Jul/2012:12:02:08 +0530] NONE error:invalid-request
HTTP/0.0 400 4021 NONE:NONE
10.21.8.59 - - [04/Jul/2012:12:02:09 +0530] NONE error:invalid-request
HTTP/0.0 400 4021 NONE:NONE
10.21.8.59 - - [04/Jul/2012:12:02:10 +0530] NONE error:invalid-request
HTTP/0.0 400 4021 NONE:NONE
10.21.8.59 - - [04/Jul/2012:12:02:11 +0530] NONE error:invalid-request
HTTP/0.0 400 4021 NONE:NONE
10.21.8.59 - - [04/Jul/2012:12:02:12 +0530] NONE error:invalid-request
HTTP/0.0 400 4021 NONE:NONE

What would this mean?

What can I do to further investigate the case and probably resolve it...

Any help would be appreciated.

--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/strange-problem-in-squid-with-certain-sites-and-strange-log-tp4655663.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: cache peer communication about HIT/MISS between squid and and non-squid peer

2012-05-17 Thread x-man
bump.

--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/cache-peer-communication-about-HIT-MISS-between-squid-and-and-non-squid-peer-tp4600931p4642629.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: cache peer communication about HIT/MISS between squid and and non-squid peer

2012-05-11 Thread x-man
Hi Amos,

please help?

--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/cache-peer-communication-about-HIT-MISS-between-squid-and-and-non-squid-peer-tp4600931p4625770.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: cache peer communication about HIT/MISS between squid and and non-squid peer

2012-05-08 Thread x-man
I also found this posting from the mailing list:

http://www1.ro.squid-cache.org/mail-archive/squid-dev/201004/0015.html

It looks like the guy there has the same request as I do.

What do you think about this, Amos?


Actually, I wonder about this directive: qos_flows parent-hit=0x30

It looks like squid will mark the traffic if it is a HIT from the parent
cache peer, BUT:

How is the parent HIT determined? Based on what - the X-Cache header, the ICP
protocol, or...?

Regards,
Plamen

--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/cache-peer-communication-about-HIT-MISS-between-squid-and-and-non-squid-peer-tp4600931p4616728.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: cache peer communication about HIT/MISS between squid and and non-squid peer

2012-05-07 Thread x-man

Amos Jeffries-2 wrote
> Waking up;
> fully internally to Squid we have the X-Cache: header at present
> being emitted with HIT/MISS details. You may be able to setup a
> rep_header regex ACL that scans for HIT and your upstream cache name,
> used on tcp_outgoing_tos to set the TOS value. This is not a completely
> reliable header though; there will be some false matches or failures
> using it at present, and future plans are to remove it from most traffic.
>
> Amos


Hi Amos, I think you must be talking about clientside_tos, because this is
what I need (marking the packets going from squid to the users).

I had to patch my squid and recompile it, because I was using the 3.1.19
ubuntu version which had the bug of clientside_tos not working (thanks to the
mailing list I found info about it) - so I did, and the clientside_tos
feature as such is now working.

Now I'm trying to set up a rep_header ACL, but I got some errors like:

2012/05/07 13:13:20.795| ACL::checklistMatches WARNING: 'hit' ACL is used
but there is no HTTP reply -- not matching.

So my packets are not marked...

here is part of the config:

acl hit rep_header X-Cache HIT
clientside_tos 0x30 hit

What does this error mean?

Is there any way I can solve this issue?

Thanks in advance for your valuable help.


--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/cache-peer-communication-about-HIT-MISS-between-squid-and-and-non-squid-peer-tp4600931p4614214.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: cache peer communication about HIT/MISS between squid and and non-squid peer

2012-05-06 Thread x-man

Amos Jeffries-2 wrote
> On 02.05.2012 03:12, x-man wrote:
>> Hello,
>>
>> I have this question:
>>
>> If you have squid, that is working as intercept proxy for customers, has
>> parent cache_peer (non-squid), who is not talking ICP or HTCP protocols,
>> and is only dealing with part of the content.
>>
>> How this parent cache_peer will communicate to the main SQUID about if the
>> request is MISS or HIT, as I want the main squid to apply packet mark for
>> the HIT traffic.
>>
>> Or the marking should be done on the cache peer level?
>
> It is best done at the node which has all the information. Things are
> not really as simple as HIT/MISS.
>
> You will also need TOS pass-thru features enabled in the kernel to
> propagate the TOS from the squid-upstream packets to the client(s)-squid
> packets. Although you could also map the upstream TOS to a netfilter
> MARK value and pass it through Squid, then map it to a TOS value on the
> other connection(s).
>
> Amos



If I mark the packets from the cache_peer with a netfilter mark, will squid
preserve it? And in this case, do I need some kernel modifications?

I wonder if there is any simple way for the cache_peer to tell the MAIN squid
that this is a HIT, so that the main squid (which is replying to the
customers) will MARK it with the proper TOS - any way at all, be it with a
protocol or not.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/cache-peer-communication-about-HIT-MISS-between-squid-and-and-non-squid-peer-tp4600931p4613345.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: cache peer communication about HIT/MISS between squid and and non-squid peer

2012-05-02 Thread x-man
Thanks Amos,

what will change if the cache peer talks the ICP or HTCP protocol? Can I use
one of these protocols to say whether the answer is a HIT or a MISS, so that
the main squid, which is in intercept mode towards the customers, will mark
the traffic?



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/cache-peer-communication-about-HIT-MISS-between-squid-and-and-non-squid-peer-tp4600931p4602656.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: anyone knows some info about youtube range parameter?

2012-05-01 Thread x-man
I like the option of using nginx as a cache_peer that does the youtube
handling, and I'm keen on using it.

The only thing I don't know in this case is how nginx will mark the traffic
as CACHE HIT or CACHE MISS, because I want the CACHE HIT traffic marked with
DSCP so I can use the zero-penalty hit in the NAS and give high speed to
users for the cached videos.

Does anyone have an idea about that?
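
One idea (an untested sketch; $upstream_cache_status is a standard nginx
variable when proxy_cache is in use) would be to have nginx expose its cache
status as a response header:

# in the nginx location that serves the cached videos:
add_header X-Cache-Status $upstream_cache_status;

Squid could then match that header with a rep_header ACL, much like the
X-Cache approach discussed elsewhere in this thread.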

--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/anyone-knows-some-info-about-youtube-range-parameter-tp4584388p4600792.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] cache peer communication about HIT/MISS between squid and and non-squid peer

2012-05-01 Thread x-man
Hello, 

I have this question:

If you have a squid that is working as an intercept proxy for customers, and
it has a parent cache_peer (non-squid) which is not talking the ICP or HTCP
protocols and is only dealing with part of the content:

How will this parent cache_peer communicate to the main SQUID whether the
request is a MISS or a HIT? I want the main squid to apply a packet mark to
the HIT traffic.

Or should the marking be done at the cache peer level?

--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/cache-peer-communication-about-HIT-MISS-between-squid-and-and-non-squid-peer-tp4600931.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: how the big guys are doing it - Caching dynamic content with squid

2012-04-26 Thread x-man
Hi Marcus, thanks for reply.

I just came to know from you about the ICAP solution.

As far as I get it, it will adapt the content so the SQUID can cache it
itself, by making the dynamic stuff in appropriate way so that squid can
consume it as non dynamic? Is that right?

I was thinking about a solution that will stay aside the squid, and will
deal with dynamic content, that's why I was thinking about connecting this
with cache peer to a squid. The squid will deal with the static content and
whatever is good for.

I'm also expecting someone from the squid team to also suggest some way of
proper doing it


--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/how-the-big-guys-are-doing-it-Caching-dynamic-content-with-squid-tp4586044p4590265.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] how the big guys are doing it - Caching dynamic content with squid

2012-04-25 Thread x-man
Hello,

I was bugging my head recently with different approaches to caching dynamic
content with squid - especially youtube content, but also other similar video
sites.

The solutions I found on the internet and tried ranged from url_rewriters
with php scripts running on apache, to cache_peer solutions using the nginx
proxy module via a cache_peer directive in squid...

Some of them work, and some of them don't work very well...

I was wondering how the commercial solutions that are also based on squid do
this, and what the proper way to do it is - if I had to start writing such
software right now, where should I start?

This question is mostly directed at the Squid people here, who can advise on
what they consider the best approach for making a proper addon for squid that
deals with dynamic content sites like youtube and similar.

Of course, anyone else who can contribute an idea is very welcome in this
discussion.




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/how-the-big-guys-are-doing-it-Caching-dynamic-content-with-squid-tp4586044p4586044.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Youtube, storeurl and 302 redirection

2012-04-22 Thread x-man
I think youtube changed something in the player behavior recently; even
http://code.google.com/p/youtube-cache/ is now not working for me with squid
2.7.

Is it working for you now?

--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Youtube-storeurl-and-302-redirection-tp4541941p4577847.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] how to use parent cache_peer with url_rewriter working on it

2012-04-20 Thread x-man
Hello there,

I am planning a squid implementation which consists of one main squid that
will serve all the web except the video sites, and a second squid box that
will only deal with the video content.

As I understand it, I have to use the cache_peer directive to tell the main
squid that it has to ask the video squid about content (based on ACLs).

The problem I see is that the second squid, which uses a url_rewriter and a
local apache script to cache and deliver the video content, will always reply
to the main squid with a cache miss, because to squid this is not cached
content - it is maintained by the url_rewriter and the apache php script -
and then the main squid will deliver the content from the internet.

Can someone suggest a workaround for this?

--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/how-to-use-parent-cache-peer-with-url-rewriter-working-on-it-tp4573061p4573061.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: squid + tproxy is not working properly when using url_rewriter and local apache script for youtube caching

2012-04-19 Thread x-man
Hi,

thanks for the reply.

Initially in the url_rewriter I was pointing to
http://localhost/somescript.php, but after implementing tproxy I saw the
requests coming from the customer's real IP to 127.0.0.1, so I replaced that
line in the url_rewriter with an IP address like 192.168.2.2, which is on one
of the interfaces of the squid box. Now the destination address is a routable
IP, but the problem is the same.

What I saw with tcpdump on the squid box is something like this:

tcpdump -i lo -n port 80

USER-IP (10.10.10.10) -> APACHE IP (192.168.2.2)

tcpdump -i eth0 -n port 80

APACHE IP (192.168.2.2) -> USER-IP (10.10.10.10)

So the request from the user is redirected through squid to the apache php
script, and then I think the reply goes directly to the user - but I think it
must go through squid. Can this be the problem?
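
If the replies really are bypassing squid, one possible workaround - an
untested guess, reusing the fwmark-1 rule and routing table 100 from my
tproxy setup quoted below, with 192.168.2.2:80 being the local Apache - would
be to mark Apache's replies in the mangle OUTPUT chain so they are delivered
locally to squid's spoofed tproxy socket instead of being routed straight out
to the user:

# loop Apache's replies back through routing table 100 (local),
# where squid's spoofed tproxy socket can pick them up
iptables -t mangle -A OUTPUT -p tcp -s 192.168.2.2 --sport 80 -j MARK --set-mark 1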



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/squid-tproxy-is-not-working-properly-when-using-url-rewriter-and-local-apache-script-for-youtube-cacg-tp4568053p4570044.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] squid + tproxy is not working properly when using url_rewriter and local apache script for youtube caching

2012-04-18 Thread x-man
Hello there, 

I'm using squid as a transparent proxy for caching, and I also have youtube
caching done with a url_rewriter and an apache script running on the same
machine as squid.

It was all working fine until I decided to go with TPROXY, as it has many
benefits. When I implemented the tproxy rules in iptables, everything
continued to work except the playing of youtube videos - where the
url_rewriter and the apache script come into play. The url_rewriter redirects
the youtube requests to a local .php script running on the Apache (on the
same machine).

I think it has something to do with how squid communicates with the local
apache process (where the software for youtube caching runs), and somehow
tproxy is screwing this up, because after implementing tproxy the requests to
Apache are sent with the USER-IP (previously, without tproxy, it was the
SQUID-IP), and the reply from the apache script probably goes directly to the
user instead of returning back through the squid process.

Also, the apache script should be able to freely communicate with the real
youtube servers, to fetch the video from there.

Also the apache script should be able to freely communicate with the real
youtube servers, to fetch the video from there.

Here are my rules for tproxy, but I think they are pretty standard:

iptables -t mangle -N DIVERT 
iptables -t mangle -A DIVERT -j MARK --set-mark 1 
iptables -t mangle -A DIVERT -j ACCEPT 
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT 
iptables -t mangle -A PREROUTING -i eth1 -p tcp --dport 80 -j TPROXY
--tproxy-mark 0x1/0x1 --on-port 8081 

echo 1 > /proc/sys/net/ipv4/ip_forward
ip rule add fwmark 1 lookup 100 
ip route add local 0.0.0.0/0 dev lo table 100 


The system is UBUNTU 12.04 with squid version 3.1.19. 

Has anyone experienced the same problem and eventually found a workaround
with the squid options or with iptables rules?


--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/squid-tproxy-is-not-working-properly-when-using-url-rewriter-and-local-apache-script-for-youtube-cacg-tp4568053p4568053.html
Sent from the Squid - Users mailing list archive at Nabble.com.