[squid-users] Re: TPROXY surf as client

2014-06-21 Thread Omid Kosari
I want to generate fake traffic to a website from 1000 different IPs within a few
minutes, something like telling 1000 different clients/IPs to surf that
site from 11:00 to 11:15. I want to achieve this with the help of Squid TPROXY
and without needing to disconnect users.

Squid already does something similar with TPROXY, because user requests are routed
through it, so a script running on the Squid box should be able to do the job. I just
don't know how to spoof the request's source IP in that script.
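For what it's worth, a box that is already on the forwarding path (as a TPROXY proxy is) can usually bind outgoing sockets to non-local client addresses, because the replies route back through it anyway. A minimal sketch under that assumption; the client IP and URL are placeholders, and as the replies in this thread stress, doing this with addresses you do not control is abusive:

# allow binding sockets to addresses not configured on this host
sysctl -w net.ipv4.ip_nonlocal_bind=1
# issue one request using a routed client address as the source
curl --interface 192.0.2.55 http://example.com/

A script could loop this over the 1000 client addresses; an off-path host cannot do the same, since the SYN-ACK would never reach it.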





[squid-users] Re: TPROXY surf as client

2014-06-21 Thread Omid Kosari
Eliezer Croitoru-2 wrote
> On 06/21/2014 06:12 PM, Amos Jeffries wrote:
>> TCP does not permit that. The SYN-ACK will fail.
>>
>> Amos
> Unless it will come from the proxy server but still it's not recommended 
> and in many cases is even illegal and can be considered as a real serious 
> crime and abusive use of IP address.
> 
> Eliezer

Thanks. Please give more detail. I want to run the script on the proxy server
itself; it could use the same iptables rules that Squid uses for its TPROXY job.
Please guide me.





[squid-users] Re: TPROXY surf as client

2014-06-21 Thread Omid Kosari
Amos Jeffries wrote
> User and IP address are not the same thing. TPROXY only deals with IP
> addresses, not users.

I mean exactly the IP address. Is there a way to send a request with a user's
source IP while that user is online?






[squid-users] TPROXY surf as client

2014-06-21 Thread Omid Kosari
We have full TPROXY in our network. Is there a way to surf an address using our
clients' IP addresses?
Say we have 1000 IP addresses; I want Squid to open google.com with
those 1000 IPs, something like fake traffic from different users.
I know I could use squidclient or a script on the Squid box, but those use Squid's
own IP rather than the client IPs. Also, please suggest a way to do this without
disrupting currently online users.
Thanks.





[squid-users] Transparent proxy cache on BGP multihome

2014-06-20 Thread Omid Kosari
I asked this question at
http://serverfault.com/questions/606373/transparent-proxy-cache-on-bgp-multihome
Please answer here or there.

Provider A has transparent caching with Squid.

Consider the situation where a client is BGP multihomed to provider A and
provider B: the client does not send its outgoing traffic (upload) to
provider A, but its incoming traffic (download) still comes to/from provider A.

What happens in that situation? Will clients have problems loading pages?
Does the cache work fine?





[squid-users] Re: Automatic StoreID ?

2014-05-19 Thread Omid Kosari
Alex Rousskov wrote
> It is possible to avoid caching duplicate content, but that allows you
> to handle cache hits more efficiently. It does not help with cache
> misses (when the URL requested by the client has not been seen before).
> 
> If content publishers start publishing content checksums and browsers
> automatically add those checksums to requests, then you would have the
> Utopia you dream about :-). This will not happen while content
> publishers benefit from getting client requests more than they suffer
> from serving those requests.

I mean content that Squid is already aware of, i.e. content that Squid
has accessed up to now.






[squid-users] Re: FileSystem mount options and other parameters

2014-03-18 Thread Omid Kosari
Thanks for the reply.

Was the first part of my post ignored? Any suggestions about my configs?








[squid-users] FileSystem mount options and other parameters

2014-03-18 Thread Omid Kosari
AFAIK there is no complete guide to filesystem choices for Squid. Historically
I have been using ReiserFS 3.6 on Ubuntu 12.10 64-bit.

Here is my /etc/fstab

/dev/sda1  /cache1  reiserfs  notail,noatime,nodiratime,data=writeback,barrier=none,async,commit=10  0  0
/dev/sdb1  /cache2  reiserfs  notail,noatime,nodiratime,data=writeback,barrier=none,async,commit=10  0  0
/dev/sdc1  /cache3  reiserfs  notail,noatime,nodiratime,data=writeback,barrier=none,async,commit=10  0  0


and the I/O scheduler:

root@cache:~# cat /sys/block/sd*/queue/scheduler
noop [deadline] cfq


And some references:
https://reiser4.wiki.kernel.org/index.php/Mount
http://doc.opensuse.org/products/draft/SLES/SLES-tuning_sd_draft/cha.tuning.io.html

sda is an SSD and sdb/sdc are 19k RPM SCSI disks, and I think they should not
use the same settings (a sketch of per-device tuning follows).
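A minimal sketch of matching the elevator to the device type, under the assumption stated above (SSD on sda, spinning disks on sdb and sdc):

# SSD: no seek penalty, so the simple noop elevator is a common choice
echo noop > /sys/block/sda/queue/scheduler
# spinning disks: deadline often behaves better than cfq under cache I/O
echo deadline > /sys/block/sdb/queue/scheduler
echo deadline > /sys/block/sdc/queue/scheduler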

Note: for people who are not aware of these settings, I suggest investigating
them, because they are very important for the performance tuning of a cache
server.

Does anybody have suggestions?


Two more questions:
1 - Why isn't Squid going to implement its own filesystem? It could even be based
on existing filesystems.
2 - Why don't Squid experts share this kind of config and customization
on the wiki?





[squid-users] Re: Automatic StoreID ?

2014-03-14 Thread Omid Kosari
Amos Jeffries-2 wrote
> You just described how Store-ID feature works today.
> 
> The map of urlA == urlB == urlC is inside the helper. You can make it a 
> static list of regex patterns like the original Squid-2 helpers, a DB 
> text file of patterns like the bundled Squid-3 helper, or anything else 
> you like inside the helper.
>   Squid learns the mappings by asking the helper about each URL. There is 
> a helper response cache on these lookups same as other helpers and 
> prevent complex/slow mappings having much impact on hot objects.
> 
> Amos

Really? Squid has its own learning mechanism, with no human hand needed?
Can it also GUESS new URLs that it has not seen until now?

One more question: will Squid delete the duplicate objects already in the cache?
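To illustrate the helper mechanism Amos describes, here is a minimal Store-ID helper sketch. It assumes the Squid 3.4+ helper protocol (one URL per input line, an "OK store-id=..." or "ERR" reply per line) and an invented CDN hostname pattern; none of these names come from the thread:

#!/bin/sh
# read one lookup per line: the requested URL plus optional extras
while read url extras; do
  case "$url" in
    # hypothetical CDN: collapse every edge hostname onto one store key
    http://cdn*.example.com/*)
      echo "OK store-id=http://cdn.example.com.squid.internal/${url#http://*/}" ;;
    *)
      echo "ERR" ;;
  esac
done

It would be wired in with something like "store_id_program /usr/local/bin/storeid.sh" (path hypothetical); the static regex lists and DB text files Amos mentions follow the same request/reply loop.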





[squid-users] Re: Automatic StoreID ?

2014-03-13 Thread Omid Kosari
What about a learning mechanism? For example, ObjectX is at urlA, urlB and
urlC. It is fine if Squid downloads ObjectX from all of them ONE time, but after
that it should deduplicate the copies in cache storage and serve all three URLs
from one file. Squid should then never download them again until they change
(based on the mechanisms currently used to decide whether an object should be
re-fetched).

It would be even more powerful if Squid could learn the relation between
urlA, urlB and urlC, so that when ObjectY is requested from one of them, Squid
GUESSES the same behavior for the others.

I know I am dreaming of Utopia, but discussion is better than silence.





[squid-users] Automatic StoreID ?

2014-03-11 Thread Omid Kosari
Is it possible for Squid to automatically find every identical object, based on
something like an MD5 checksum of the object, and serve it to clients without
needing a custom DB?
I know it is a complicated task, but I think the Utopia of a cache is that we
keep just one instance of each object across the whole Squid farm
(automatically) and serve it under different URLs.





[squid-users] Re: IpIntercept.cc(137) NetfilterInterception: NF getsockopt(SO_ORIGINAL_DST) failed on FD 4125: (2) No such file or directory

2013-11-19 Thread Omid Kosari
Also, I have not had any success with ACLs other than url_regex. Are other ACL
types like dst and dstdomain not matched by deny_info?




[squid-users] Re: IpIntercept.cc(137) NetfilterInterception: NF getsockopt(SO_ORIGINAL_DST) failed on FD 4125: (2) No such file or directory

2013-11-19 Thread Omid Kosari
And what happens if we delete that line?





[squid-users] Re: IpIntercept.cc(137) NetfilterInterception: NF getsockopt(SO_ORIGINAL_DST) failed on FD 4125: (2) No such file or directory

2013-11-19 Thread Omid Kosari
Amos Jeffries-2 wrote
> This is how you do the exact same thing with only Squid instead of using 
> jesred:
> 
> acl domains_to_redirect dstdomain
> "/etc/squid3/to_redirect_program.acl"
> acl netshar_regex url_regex "/etc/squid3/netshar_regex.acl"
> deny_info 302:http://www.netshahr.com/website-unavailable/
> netshar_regex
> adapted_http_access deny domains_to_redirect redirect_regex

One more question: what is the job of the last line, i.e.
"adapted_http_access deny domains_to_redirect redirect_regex"? It seems to have
no effect!
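One possible explanation: the quoted example defines an ACL named netshar_regex, while the access rule references redirect_regex. Assuming both names were meant to point at the same regex list, a consistent version of the pair would be (a sketch, not a confirmed fix from the thread):

acl netshar_regex url_regex "/etc/squid3/netshar_regex.acl"
deny_info 302:http://www.netshahr.com/website-unavailable/ netshar_regex
adapted_http_access deny domains_to_redirect netshar_regex

The deny line only triggers when every ACL listed on it matches, and deny_info then serves the 302 because netshar_regex is the last ACL on that line.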






[squid-users] Re: squidaio_queue_request: WARNING - Queue congestion

2013-11-15 Thread Omid Kosari
Off topic, but may I ask you to write some examples for (b)? I am not sure
what counts as "flows that do not need to be tracked" (an illustrative sketch
follows below). It may be useful for others as well.
Thanks
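As one illustration of flows that arguably need no conntrack state, the HTCP traffic between two sibling proxies could be exempted in the raw table. This is a sketch only, assuming the HTCP port 4827 and the peer address 1.1.1.12 that appear elsewhere in this archive:

# skip connection tracking for inter-proxy HTCP (stateless UDP query/reply)
iptables -t raw -A PREROUTING -p udp -s 1.1.1.12 -m multiport --ports 4827 -j NOTRACK
iptables -t raw -A OUTPUT -p udp -d 1.1.1.12 -m multiport --ports 4827 -j NOTRACK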





[squid-users] Re: Squid naps each 3600 seconds !

2013-10-27 Thread Omid Kosari
The following was grabbed from cachemgr.cgi while digests were enabled. Can I be
sure that the digest is never chosen by Squid itself, and is it safe for me to
set "digest_generation off"?



Peer Selection Algorithms wrote
> no guess stats for all peers available
> 
> Per-peer statistics:
> 
> peer digest from 1.1.1.12
> no guess stats for 1.1.1.12 available
> 
> event timestamp   secs from now   secs from init
> initialized   1382745606  -119863 +0
> needed1382745843  -119626 +237
> requested 1382822346  -43123  +76740
> received  1382822386  -43083  +76780
> next_check1382899186  +33717  +153580
> 
> peer digest state:
>   needed: yes, usable:  no, requested:  no
> 
>   last retry delay: 76800 secs
>   last request response time: 40 secs
>   last request result: Forbidden
> 
> peer digest traffic:
>   requests sent: 9, volume: 0 KB
>   replies recv:  9, volume: 2 KB
> 
> peer digest structure:
>   no in-memory copy
> 
> 
> No peer digest from 1.1.1.12
> 
> 
> Algorithm usage:
> Cache Digest:   0 (  0%)
> Icp:29887 (100%)
> Total:  29887 (100%)







[squid-users] Re: 3x cpu usage after upgrade 3.1.20 to 3.3.8

2013-10-27 Thread Omid Kosari


Amos Jeffries-2 wrote
> Is traffic speed 2-3 times faster or higher as well?

No .



Amos Jeffries-2 wrote
> Is disk I/O processing higher? (a large rock store swapping data to/from 
> disk would cause both CPU and disk increases)

No .

Everything is the same as before. I have 2 Squid boxes; one of them has a rock
store, the other doesn't even have rock. Both were just upgraded, and both
boxes show the increased CPU usage.








[squid-users] Re: 3x cpu usage after upgrade 3.1.20 to 3.3.8

2013-10-26 Thread Omid Kosari
No, SMP is not enabled. I tried to change the config as little as possible, to
make debugging easier.
I forgot to say that this graph is taken from Squid's SNMP CPU usage counter.

I have been using the rock store for the past 4-5 days, but the CPU usage grew
right after upgrading to the new version. The CPU usage is also higher even
several hours after Squid starts. Everything in Squid looks normal, but Squid's
CPU usage is three times higher.





[squid-users] 3x cpu usage after upgrade 3.1.20 to 3.3.8

2013-10-26 Thread Omid Kosari
After upgrading from 3.1.20 to 3.3.8, Squid's CPU usage tripled with no change
to the config.
Please look at the attached image of the Cacti graph.

My configs are available in the following posts:
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-naps-each-3600-seconds-tp4662811.html
http://squid-web-proxy-cache.1019090.n4.nabble.com/rock-questions-tp4662816.html
http://squid-web-proxy-cache.1019090.n4.nabble.com/rock-problem-on-each-squid-restart-tp4662864.html







[squid-users] Re: Squid naps each 3600 seconds !

2013-10-25 Thread Omid Kosari
Alex Rousskov wrote
> On 10/24/2013 07:43 AM, Omid Kosari wrote:
> 
>> "digest_generation off" temporary solved problem but needs restart . I
>> have
>> tested with reload before .
> 
> Sounds like you have detected the source of the blocking Squid problem
> and confirmed it! The fact that digest generation makes your Squid
> unresponsive is essentially a Squid bug, but you might be able to help
> Squid by adjusting its configuration. If you want to try it, I suggest
> re-enabling digest generation and setting
> digest_rebuild_chunk_percentage to 1.
> 
> If that works around your problem, great. Unfortunately, I suspect Squid
> may still be unresponsive during digest generation time because of how
> regeneration steps are scheduled and/or because of an old bug in the
> low-level scheduling code of "heavy" events. However, using the minimum
> digest_rebuild_chunk_percentage value (1) is still worth trying.

Tried it. Unfortunately the problem persists.
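For reference, the workaround Alex describes amounts to these two squid.conf lines (values as given in his reply):

digest_generation on
digest_rebuild_chunk_percentage 1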



Alex Rousskov wrote
>> But i could not use digest benefits anymore . Is  there big penalty if
>> both
>> caches are in same gigabit switch ?
> 
> The digest optimization can be significant if the two Squids share a lot
> of popular content _and_ it takes a long time for each Squid to get that
> content from the internet during a miss.
> 
> 
> HTH,
> 
> Alex.

I mean: if I disable digests and just use HTCP.









[squid-users] Re: rock problem on each squid restart

2013-10-25 Thread Omid Kosari
Eliezer Croitoru-2 wrote
> Just wondering?
> Why do you need to restart squid?

Because some Squid changes need a restart. I am investigating some Squid
bugs and need to restart.



Eliezer Croitoru-2 wrote
> What version of squid? "squid -v"

Squid Cache: Version 3.3.8
configure options:  '--build=x86_64-linux-gnu' '--prefix=/usr'
'--includedir=${prefix}/include' '--mandir=${prefix}/share/man'
'--infodir=${prefix}/share/info' '--sysconfdir=/etc' '--localstatedir=/var'
'--libexecdir=${prefix}/lib/squid3' '--srcdir=.' '--disable-maintainer-mode'
'--disable-dependency-tracking' '--disable-silent-rules'
'--datadir=/usr/share/squid3' '--sysconfdir=/etc/squid3'
'--mandir=/usr/share/man' '--enable-inline' '--enable-async-io=8'
'--enable-storeio=ufs,aufs,diskd,rock' '--enable-removal-policies=lru,heap'
'--enable-delay-pools' '--enable-cache-digests' '--enable-underscores'
'--enable-icap-client' '--enable-follow-x-forwarded-for'
'--enable-auth-basic=DB,fake,getpwnam,LDAP,MSNT,MSNT-multi-domain,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB'
'--enable-auth-digest=file,LDAP' '--enable-auth-negotiate=kerberos,wrapper'
'--enable-auth-ntlm=fake,smb_lm'
'--enable-external-acl-helpers=file_userip,kerberos_ldap_group,LDAP_group,session,SQL_session,unix_group,wbinfo_group'
'--enable-url-rewrite-helpers=fake' '--enable-eui' '--enable-esi'
'--enable-icmp' '--enable-zph-qos' '--enable-ecap' '--disable-translation'
'--with-swapdir=/var/spool/squid3' '--with-logdir=/var/log/squid3'
'--with-pidfile=/var/run/squid3.pid' '--with-filedescriptors=65536'
'--with-large-files' '--with-default-user=proxy' '--enable-linux-netfilter'
'build_alias=x86_64-linux-gnu' 'CFLAGS=-g -O2 -fPIE -fstack-protector
--param=ssp-buffer-size=4 -Wformat -Werror=format-security -Wall'
'LDFLAGS=-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now'
'CPPFLAGS=-D_FORTIFY_SOURCE=2' 'CXXFLAGS=-g -O2 -fPIE -fstack-protector
--param=ssp-buffer-size=4 -Wformat -Werror=format-security'



Eliezer Croitoru-2 wrote
> how long do the restart takes?

The restart itself takes a normal amount of time, less than 1 minute, but the
part I mentioned takes about 20 minutes.


Eliezer Croitoru-2 wrote
> What the cache.log output on startup and shutdown?

2013/10/25 17:05:01| Preparing for shutdown after 930469 requests
2013/10/25 17:05:01| Waiting 30 seconds for active connections to finish
2013/10/25 17:05:02| Closing HTTP port [::]:3128
2013/10/25 17:05:02| Closing HTTP port 0.0.0.0:3127
2013/10/25 17:05:02| Closing HTTP port [::]:3129
2013/10/25 17:05:02| Stop accepting HTCP on [::]:4827
2013/10/25 17:05:02| Closing Pinger socket on FD 20
2013/10/25 17:05:02| Closing SNMP receiving port [::]:3444
2013/10/25 17:05:02| Shutdown: NTLM authentication.
2013/10/25 17:05:02| Shutdown: Negotiate authentication.
2013/10/25 17:05:02| Shutdown: Digest authentication.
2013/10/25 17:05:02| Shutdown: Basic authentication.
2013/10/25 17:05:08 kid3| Creating missing swap directories
2013/10/25 17:05:08 kid1| Creating missing swap directories
2013/10/25 17:05:08 kid1| /cache3 exists
2013/10/25 17:05:08 kid1| /cache3/00 exists
2013/10/25 17:05:08 kid1| Making directories in /cache3/00
2013/10/25 17:05:08 kid2| Creating missing swap directories
2013/10/25 17:05:08 kid2| Skipping existing Rock db: /cache2/rock
2013/10/25 17:05:08 kid1| /cache3/01 exists
2013/10/25 17:05:08 kid1| Making directories in /cache3/01
2013/10/25 17:05:08 kid1| /cache3/02 exists
2013/10/25 17:05:08 kid1| Making directories in /cache3/02
2013/10/25 17:05:08 kid1| /cache3/03 exists
2013/10/25 17:05:08 kid1| Making directories in /cache3/03
2013/10/25 17:05:08 kid1| /cache3/04 exists
2013/10/25 17:05:08 kid1| Making directories in /cache3/04
2013/10/25 17:05:08 kid1| /cache3/05 exists
2013/10/25 17:05:08 kid1| Making directories in /cache3/05
2013/10/25 17:05:08 kid1| /cache3/06 exists
2013/10/25 17:05:08 kid1| Making directories in /cache3/06
2013/10/25 17:05:08 kid1| /cache3/07 exists
2013/10/25 17:05:08 kid1| Making directories in /cache3/07
2013/10/25 17:05:08 kid1| /cache3/08 exists
2013/10/25 17:05:08 kid1| Making directories in /cache3/08
2013/10/25 17:05:08 kid1| /cache3/09 exists
2013/10/25 17:05:08 kid1| Making directories in /cache3/09
2013/10/25 17:05:08 kid1| /cache3/0A exists
2013/10/25 17:05:08 kid1| Making directories in /cache3/0A
2013/10/25 17:05:08 kid1| /cache3/0B exists
2013/10/25 17:05:08 kid1| Making directories in /cache3/0B
2013/10/25 17:05:08 kid1| /cache3/0C exists
2013/10/25 17:05:08 kid1| Making directories in /cache3/0C
2013/10/25 17:05:08 kid1| /cache3/0D exists
2013/10/25 17:05:08 kid1| Making directories in /cache3/0D
2013/10/25 17:05:08 kid1| /cache3/0E exists
2013/10/25 17:05:08 kid1| Making directories in /cache3/0E
2013/10/25 17:05:08 kid1| /cache3/0F exists
2013/10/25 17:05:08 kid1| Making directories in /cache3/0F
2013/10/25 17:05:08 kid1| /cache3/10 exists
2013/10/25 17:05:08 kid1| Making directories in /cache3/10
2013/10/25 17:05:08 kid1| /cache3/11 exists
2013/10/25 17:05:08 kid1| Making directories in /ca

[squid-users] rock problem on each squid restart

2013-10-25 Thread Omid Kosari
On each Squid restart, the drive that holds the rock store has problems: it
stays very busy for several minutes afterwards. The following line shows iostat
after 18 minutes:

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sdb            3004.00     41324.00         0.00      41324          0


cache.log includes a lot of lines like the following:
WARNING: cache_dir[0]: Ignoring malformed cache entry meta data at
51174042469

If I restart Squid again, it takes the same amount of time. During that time
Squid's performance decreases and sometimes it cannot serve HTTP at all. Is this
normal behavior for rock?

Here is my squid.conf:
cache_dir rock /cache2 101000 min-size=0 max-size=32767

fstab (the filesystem was created with "mkfs.ext4 /dev/sdb"):
/dev/sdb  /cache2  ext4  noatime,nodiratime,discard,errors=remount-ro,data=writeback,barrier=0,async  0  0


The drive is an OCZ-VERTEX3 120GB.


Store Directory #0 (rock): /cache2
FS Block Size 1024 Bytes

Maximum Size: 103424000 KB
Current Size: 43675803.08 KB 42.23%
Maximum entries:   3232098
Current entries:   1364910 42.23%
Pending operations: 127 out of 0
Flags: SELECTED

My config is available in my earlier post.





[squid-users] Re: Squid naps each 3600 seconds !

2013-10-24 Thread Omid Kosari
Correction:

"digest_generation off" temporarily solved the problem, but it needs a restart;
I had only tested with a reload before.

But now I cannot use the digest benefits anymore. Is there a big penalty if both
caches are on the same gigabit switch?
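For what it's worth, the digest-less sibling setup being weighed here would look something like the line below, assuming the peer address 1.1.1.12 and HTCP port 4827 that appear elsewhere in this archive:

# query the sibling per-request over HTCP instead of fetching its cache digest
cache_peer 1.1.1.12 sibling 3128 4827 htcp no-digest

On a shared gigabit switch the extra per-request round trip is typically a fraction of a millisecond, which is the penalty being asked about.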





[squid-users] Re: Squid naps each 3600 seconds !

2013-10-23 Thread Omid Kosari
Thanks a lot.
Is there a way other than compiling? I prefer using
http://packages.ubuntu.com/saucy/squid3
Unfortunately "digest_generation off" does not solve it.





[squid-users] Re: Squid naps each 3600 seconds !

2013-10-23 Thread Omid Kosari
This problem did not occur in 3.1.20; nothing in my setup has changed since that version.





[squid-users] Re: Squid naps each 3600 seconds !

2013-10-23 Thread Omid Kosari
Unfortunately, even cachemgr.cgi cannot be fetched at the time of the outage.
More investigation shows the disks are completely idle, but Squid uses 100%
CPU.

Also, I was able to refresh the event list 1 second before the outage, and it
showed that "storeDigestRebuildStart" and "storeDigestRewriteStart" would start
1.438 seconds later.

The "info" page output from Squid is included here, because I could not find
anything in it myself.

before


after







[squid-users] rock questions ?

2013-10-23 Thread Omid Kosari
I am using rock on one of my SSD drives to check its performance.

Before choosing rock, the filesystem was ReiserFS, because it shows good
performance with huge numbers of little files; but I read somewhere that rock
uses one big file, so I chose ext4 with discard to get the benefits of
discard/TRIM.

Rock creates one file named "rock" in the /cache2 directory, 105906 MB in size
from the beginning, and it still has that same size; but "df" shows 27% used,
and that grows each day.

1. Is max-size=31000 the maximum object size rock may store? If not, what is the
maximum safe value? (See the sketch after the dumpe2fs output below.)
2. Is my fstab line, shown below, optimal for rock? I created the filesystem
with "mkfs.ext4 /dev/sdb" on an OCZ-VERTEX3 120GB.

3. It seems the disk is more idle since switching to rock. How can I force it
to use the disk more aggressively?

4. The following is the dumpe2fs output for that drive. Is it better to
resize the block size to 32KB, which is the rock store's slot size? Any
other suggestions for the filesystem?

dumpe2fs /dev/sdb | more
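On question 1: in this Squid generation (3.3.x) a rock cache_dir stores one object per fixed 32 KB slot, so objects larger than that cannot go into rock at all. Under that assumption, the max-size ceiling is just under 32 KB, as in the companion post's directive:

# 101000 MB rock store; the 32 KB slot size caps the largest storable object
cache_dir rock /cache2 101000 min-size=0 max-size=32767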








[squid-users] Squid naps each 3600 seconds !

2013-10-23 Thread Omid Kosari
I have 2 Squid boxes named cache1 and cache2; the config is available in this
post:



The problem is that each Squid stops working EXACTLY every 1 hour. "Stops
working" means it stops serving content, but the process keeps running.

cache.log from cache1 shows:


and cache.log from cache2 shows:


Between DEAD and REVIVED, Squid does not serve anything and the whole
bandwidth drops.
I have searched the "Current Squid Configuration" from cachemgr.cgi and found
the following occurrences of 3600.

I suspect the digest_* settings. How can I see what Squid is doing?
Any help?





[squid-users] Re: question in "WARNING: Closing client x.x.78.133 connection due to lifetime timeout"

2013-10-17 Thread Omid Kosari
How can we tell Squid not to log this specific message?
Currently I am using grep -v 'Closing client connection due to lifetime
timeout' /var/log/squid3/cache.log
but it does not work for this kind of entry, because it is logged on 2 lines and
the second line is the URL (http://nameofwebsite.com/...), and grep -v 'http://'
may also hide useful log lines.
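One way to drop both lines of each such entry is to match the WARNING line and consume the line that follows it; a minimal sketch (the real message carries the client IP between "client" and "connection", hence the wildcard):

# skip the WARNING line and the URL line that follows it
awk '/Closing client .* connection due to lifetime timeout/ { getline; next } { print }' /var/log/squid3/cache.log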







[squid-users] Re: IpIntercept.cc(137) NetfilterInterception: NF getsockopt(SO_ORIGINAL_DST) failed on FD 4125: (2) No such file or directory

2013-10-13 Thread Omid Kosari
I see some log entries right when the hiccup occurs.

In cache.log on cache1 (at that moment nothing is logged on cache2):


and vice versa in cache.log on cache2 (at that moment nothing is logged on cache1):










[squid-users] Apache Traffic Server vs Squid

2013-10-11 Thread Omid Kosari
I love Squid and have been working with it for several years. For many years I
have been betting on Squid, at least as a forward proxy cache.
But recently a new competitor has come to the open source market:
http://trafficserver.apache.org/
I have not even tested it, but I am preparing myself to defend Squid
against an ATS fan :)
His main line is that Squid is like the Apache HTTP server (many features, but
heavyweight) and ATS is like nginx (fewer features, but lightweight).

I reviewed ATS. On paper it has most of the critical features that I
need, features that even some commercial caches don't have, like TPROXY. It even
supports cacheurl, a long-standing Squid feature request:
http://trafficserver.apache.org/docs/trunk/admin/plugins/cacheurl/index.en.html

They have also created a wiki section for translating a Squid configuration
into an ATS config:
https://cwiki.apache.org/confluence/display/TS/SquidConfigTranslation

A simple Google search shows some articles comparing "apache
traffic server vs squid", such as:
http://static.usenix.org/events/lisa11/tech/slides/hedstrom.pdf
http://archive.apachecon.com/na2013/presentations/27-Wednesday/A_Patchy_Web/16:15-Apache_Traffic_Server.pdf

Now I am trying to get the Squid community to create some comparisons and
articles on the web to reinforce Squid.

I would really love to see good, informative comments.

Thanks





[squid-users] Re: IpIntercept.cc(137) NetfilterInterception: NF getsockopt(SO_ORIGINAL_DST) failed on FD 4125: (2) No such file or directory

2013-10-11 Thread Omid Kosari
Amos Jeffries-2 wrote
> Would your proxy happen to be receiving the inbound traffic to 
> www.netshahr.com port 80 ?

Let me answer like this: netshahr.com is one of our customers. Customers' traffic
to destination port 80 is routed to Squid, except when the destination address is
another customer. So if netshahr.com wants to access yahoo.com, that goes through
Squid; but if clients want to open netshahr.com, that does not go through Squid,
and vice versa.
Another thing I have not investigated much: when I removed the 302: from
jesred.rules, the redirection stopped working and the browser waited several
minutes for a response.



Amos Jeffries-2 wrote
> I mean a new line above them:
> http_port 12345
> 
> or whatever you like for the port value. It does not have to be used,
> but will help prevent traffic going to the interception ports when it
> was not intercepted. 

OK, got it. I changed it to the following lines:
http_port 3127 intercept
http_port 3128
http_port 3129 tproxy

After that, the following appears in the response headers:

X-Cache: MISS from cache.xx.com
X-Cache-Lookup: MISS from cache.xx.com:3127
Via: 1.0 cache.xx.com (squid)

Is the X-Cache-Lookup line OK? Should it show 3127?!


Amos Jeffries-2 wrote
> Okay. The ORIGINAL_DST security checks are not present in 3.1, so the
> NAT error is a non-fatal event for you at the moment. If it is
> encountered by a 3.2 or later proxy it is a transaction blocking event.
> In 3.1 the NAT lookup is rather strangely done after parsing each HTTP
> request, even on persistent connections, so it may just be something
> related to NAT table entries expiring while buffered requests are
> processed. Or the NAT system being overloaded with useless lookups on a
> heavily loaded machine - both those should be kind of rare though. 

But it is a fatal event for my network :)
root@cache:~# echo $( cat /proc/sys/net/netfilter/nf_conntrack_count ) / $( cat /proc/sys/net/netfilter/nf_conntrack_max )
351452 / 524288
root@cache:~# grep conntrack /proc/slabinfo | awk '{ SUM += $3 * $4 } END { print SUM / 1024 / 1024 " MB" }'
109.316 MB
Can you guide me on where NOTRACK is useful for reducing conntrack usage? For
example, may I safely NOTRACK the HTCP traffic between the 2 Squid boxes? What
other kinds of traffic? I hate trial and error in production.

As I said in the first post, this problem appeared after those 3 changes.
Problems with NAT existed before, but this particular problem appeared recently.
BTW, I need help clearing unused conntrack entries.

If you advise it, I can try upgrading my Squid package from
http://packages.ubuntu.com/saucy/squid3 .


Amos Jeffries-2 wrote
> It would be worth it for testing this problem at least. If requests were
> being looped through the proxy twice having it on will produce a warning
> message. 

Via is turned on, sir. But one question: how could a loop occur?





[squid-users] Re: IpIntercept.cc(137) NetfilterInterception: NF getsockopt(SO_ORIGINAL_DST) failed on FD 4125: (2) No such file or directory

2013-10-11 Thread Omid Kosari
First of all, thanks for the professional comments about the configs; I was
looking for that.


Amos Jeffries-2 wrote
> Possibly the URL-rewriter. Depending on whether it is rewriting URLs to 
> point anywhere back at this proxy.

my jesred.rules contains:

regexi  ^http://(.+\.||)server.cn/.*  302:http://www.netshahr.com/website-unavailable/
regexi  ^http://cpe.management/.*  302:http://www.netshahr.com/website-unavailable/
regexi  ^http://wpad.domain.name/.*  302:http://www.netshahr.com/website-unavailable/
regexi  ^http://isatap.home/.*  302:http://www.netshahr.com/website-unavailable/
regexi  ^http://(.+\.||)scorecardresearch.com/.*  302:http://www.netshahr.com/website-unavailable/



Amos Jeffries-2 wrote
> Also, Squid serves some content directly. Such as embeded objects in
> error pages, icons on FTP listing pages, cachemgr reports, cache peer
> communications. These require a regular forward-proxy http_port without
> intercept/tproxy options. Requests for these are being rejected by your
> config (to_mysef ACL) but will also get these NAT failures first. 

But these rules existed before, and the problem did not occur then. BTW, I
commented out those 2 lines to see what happens.


Amos Jeffries-2 wrote
> What version of Squid are you using? 3.2 and later will silence the
> above problem most of the time but it is still corrupting your logs.

Sorry, I forgot to say:
Ubuntu Linux 12.10 x86_64, squid 3.1.20-1ubuntu1.1. The packages are the default
Ubuntu packages.


Amos Jeffries-2 wrote
> Please run "squid -k parse" over this config and fix anything it
> highlights. 

Highlights?! You mean warnings? Only the following warnings appear after
applying your comments. Please explain a bit:

2013/10/11 13:46:12| WARNING: use of 'ignore-reload' in 'refresh_pattern'
violates HTTP
2013/10/11 13:46:12| WARNING: use of 'ignore-no-cache' in 'refresh_pattern'
violates HTTP
2013/10/11 13:46:12| WARNING: use of 'ignore-no-store' in 'refresh_pattern'
violates HTTP
2013/10/11 13:46:12| WARNING: use of 'ignore-private' in 'refresh_pattern'
violates HTTP
2013/10/11 13:46:12| WARNING: HTTP requires the use of Via


Amos Jeffries-2 wrote
> So what is the objection to via?
> 
>   Note that the special access controls you have to use to avoid the
> probems removing it is causing will not prevent relay loops which happen
> as 2-hop loops via the peer and will break the URLs being served up
> directly by this proxy. 

I tried to hide the proxy as much as possible. Do you suggest turning it on?






[squid-users] IpIntercept.cc(137) NetfilterInterception: NF getsockopt(SO_ORIGINAL_DST) failed on FD 4125: (2) No such file or directory

2013-10-10 Thread Omid Kosari
I have 2 Squid boxes that worked fine for a long time. Recently I changed the
configs a little, and after that I see hiccups in the realtime graph and HTTP
hangups exactly when the following error appears in cache.log on one of the
Squid boxes:

IpIntercept.cc(137) NetfilterInterception:  NF getsockopt(SO_ORIGINAL_DST)
failed on FD xx: (2) No such file or directory

Changes I made a few days ago:
1. enabled access_log /var/log/squid3/access.log
2. added (.+\.||) at the start of the refresh_pattern rules
3. started to use jesred; there was no url_rewrite_program before

Which one could create the problem?

My squid.conf

acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl trustedwebserver src xxx.xxx.160.170
acl trustednetworks src xxx.xxx.160.0/24
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access allow manager trustedwebserver
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
#Don't forget firewall to allow also
acl allowed_hosts src xxx.xxx.160.0/19
acl allowed_hosts src 1.1.1.0/24
acl allowed_hosts src xxx:xxx::/32
#bottom two lines are because of
http://bugs.squid-cache.org/show_bug.cgi?id=2798
acl to_myself dst 127.0.0.0/8 xxx.xxx.160.171 10.234.56.12 1.1.1.12
http_access deny to_myself
#up two lines are because of
http://bugs.squid-cache.org/show_bug.cgi?id=2798
http_access allow allowed_hosts
http_access deny all
http_port 3128 intercept
http_port 3129 tproxy
coredump_dir /var/spool/squid3
cache_mem 3 GB
maximum_object_size 150 MB
cache_replacement_policy heap LFUDA
memory_replacement_policy heap GDSF
cache_dir aufs /cache2 101000 36 256
cache_dir aufs /cache3 101000 36 256
cache_dir aufs /cache4 101000 36 256
dns_nameservers xxx.xxx.160.172 208.67.222.222 208.67.220.220
refresh_pattern -i (.+\.||)microsoft.com/.*\.(cab|exe|dll|ms[i|u|f]|asf|wm[v|a]|dat|zip|iso|psf) 10080 100% 172800 ignore-no-cache ignore-no-store ignore-reload ignore-private
refresh_pattern -i (.+\.||)windowsupdate.com/.*\.(cab|exe|dll|ms[i|u|f]|asf|wm[v|a]|dat|zip|iso|psf) 10080 100% 172800 ignore-no-cache ignore-no-store ignore-reload ignore-private
refresh_pattern -i (.+\.||)eset.com/.*\.(cab|exe|dll|ms[i|u|f]|asf|wm[v|a]|dat|zip|ver|nup) 10080 100% 172800 ignore-no-cache ignore-no-store ignore-reload ignore-private
refresh_pattern -i (.+\.||)avg.com/.*\.(cab|exe|dll|ms[i|u|f]|asf|wm[v|a]|dat|zip|ctf|bin|gz) 10080 100% 172800 ignore-no-cache ignore-no-store ignore-reload ignore-private
refresh_pattern -i (.+\.||)grisoft.com/.*\.(cab|exe|dll|ms[i|u|f]|asf|wm[v|a]|dat|zip|ctf|bin|gz) 10080 100% 172800 ignore-no-cache ignore-no-store ignore-reload ignore-private
refresh_pattern -i (.+\.||)grisoft.cz/.*\.(cab|exe|dll|ms[i|u|f]|asf|wm[v|a]|dat|zip|ctf|bin|gz) 10080 100% 172800 ignore-no-cache ignore-no-store ignore-reload ignore-private
refresh_pattern -i (.+\.||)avast.com/.*\.(cab|exe|dll|ms[i|u|f]|asf|wm[v|a]|dat|zip|vpx|vpu|vpa|vpaa|def|stamp) 10080 100% 172800 ignore-no-cache ignore-no-store ignore-reload ignore-private
refresh_pattern -i (.+\.||)kaspersky-labs.com/.*\.(cab|zip|exe|msi|msp|bz2|avc|kdc|klz|dif|dat|kdz|kdl|kfb) 10080 100% 172800 ignore-no-cache ignore-no-store ignore-reload ignore-private
refresh_pattern -i (.+\.||)kaspersky.com/.*\.(cab|zip|exe|msi|msp|bz2|avc|kdc|klz|dif|dat|kdz|kdl|kfb) 10080 100% 172800 ignore-no-cache ignore-no-store ignore-reload ignore-private
refresh_pattern -i (.+\.||)nai.com/.*\.(gem|zip|mcs|tar|exe|) 10080 100% 172800 ignore-no-cache ignore-no-store ignore-reload ignore-private
refresh_pattern -i (.+\.||)adobe.com/.*\.(cab|aup|exe|msi|upd|msp) 10080 100% 172800 ignore-no-cache ignore-no-store ignore-reload ignore-private
refresh_pattern -i (.+\.||)symantecliveupdate.com/.*\.(zip|exe|msi) 10080 100% 172800 ignore-no-cache ignore-no-store ignore-reload ignore-private

refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320
tcp_outgoing_address xxx.xxx.160.171
cache_mgr ad...@xx.com
httpd_suppress_version_string on
visible_hostname cache.xx.com
unique_hostname cache.xx.com
hostname_aliases ns2.xx.com

cachemgr_passwd xx all
store_avg_object_size 80 KB
uri_whitespace allow
strip_query_terms off
ignore_unknown_nameservers off
#memory_pools should be off http://bugs.squid-cache.org/show_bug.cgi?id=1956
memory_pools off
memory_pools_limit 0
#error_directory /usr/share/squid3/errors/en-us

[squid-users] Re: Squid transparent proxy connection fails on specific sites?

2013-03-07 Thread Omid Kosari
Unfortunately this problem has forced me to completely disable Squid until I
find a way to solve it. Before disabling it, my job was just listening to user
reports and putting problematic websites into the router's bypass list, which
wastes a lot of time and makes users angry :(





[squid-users] Re: Squid transparent proxy connection fails on specific sites?

2013-03-05 Thread Omid Kosari
Sorry, I did not understand the idea from your post; maybe that is my bad English.
How can I trace TPROXY? Are any more details needed? I can even give you (Amos)
full access to help solve the problem, for me and for others.





[squid-users] Re: Squid transparent proxy connection fails on specific sites?

2013-03-04 Thread Omid Kosari
A new finding from my other topic at Server Fault:
http://serverfault.com/questions/483038/squid-transparent-proxy-connection-fails-on-specific-sites

The problem is caused by TPROXY. When using REDIRECT the problem disappears,
and when switching back to TPROXY it occurs again. But that is not a solution.





[squid-users] Re: Squid transparent proxy connection fails on specific sites?

2013-03-03 Thread Omid Kosari
No, I have not tried IPv6.

I don't know whether it is related or not, but I also have the following settings:

echo 1025 65000 > /proc/sys/net/ipv4/ip_local_port_range
echo 0 > /proc/sys/net/ipv4/tcp_syncookies
echo 131072 > /proc/sys/net/ipv4/tcp_max_syn_backlog
echo 1 > /proc/sys/net/ipv4/ip_forward
echo 0 > /proc/sys/net/ipv4/conf/lo/rp_filter
echo 1 > /proc/sys/net/ipv4/ip_forward
echo 2 > /proc/sys/net/ipv4/conf/default/rp_filter
echo 2 > /proc/sys/net/ipv4/conf/all/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/p2p1/rp_filter
echo 524288 > /proc/sys/net/netfilter/nf_conntrack_max

and have enabled/disabled the following:

#echo 0 > /proc/sys/net/ipv4/tcp_window_scaling
#echo 0 > /proc/sys/net/ipv4/tcp_ecn

Here are the routing rules:

/sbin/iptables -t mangle -N DIVERT
/sbin/iptables -t mangle -A DIVERT -j MARK --set-mark 1
/sbin/iptables -t mangle -A DIVERT -j ACCEPT
/sbin/iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
/sbin/iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY
--tproxy-mark 0x1/0x1 --on-port 3129
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100


*The connection to the providers is over a GRE tunnel.*






[squid-users] Re: Squid transparent proxy connection fails on specific sites?

2013-03-03 Thread Omid Kosari
Here is the tcpdump when I use TPROXY and the problem appears:
http://paste.ubuntu.com/5581724/

And here is the tcpdump when I manually set the proxy in my browser and
everything works fine:
http://paste.ubuntu.com/5581726/





[squid-users] Re: tproxy and "disable-pmtu-discovery=always"

2013-03-03 Thread Omid Kosari
I have kernel 3.5.0-25, so there is no need for "disable-pmtu-discovery=always"
on the tproxy port?
Is it 100% safe? Please explain more.





[squid-users] Re: Squid transparent proxy connection fails on specific sites?

2013-03-02 Thread Omid Kosari
When Squid is set up as a transparent proxy, it returns ERR_CONNECT_FAIL on
some sites.
The server Squid runs on is able to open those sites with lynx, wget, curl, etc.
Even if we manually set the proxy in the browser, those sites open.

Squid Cache: Version 3.1.20
Linux cache 3.5.0-22-generic #34-Ubuntu SMP Tue Jan 8 21:47:00 UTC 2013
x86_64 x86_64 x86_64 GNU/Linux

Here is a tcpdump from one of the clients when it tries to open one of those sites:

GET / HTTP/1.0
Host: 80.75.1.4
Accept: text/html, text/plain, text/css, text/sgml, */*;q=0.01
Accept-Encoding: gzip, compress, bzip2
Accept-Language: en
User-Agent: Lynx/2.8.8dev.2 libwww-FM/2.14 SSL-MM/1.4.1

HTTP/1.0 504 Gateway Time-out
Server: squid
Mime-Version: 1.0
Date: Wed, 27 Feb 2013 15:39:03 GMT
Content-Type: text/html
Content-Length: 376353
X-Squid-Error: ERR_CONNECT_FAIL 110
X-Cache: MISS from cache.mysquid.com
X-Cache-Lookup: MISS from cache.mysquid.com:3128
Connection: close

I also tried:
http://wiki.squid-cache.org/SquidFaq/SystemWeirdnesses#Can.27t_connect_to_some_sites_through_Squid

From this page, http://wiki.squid-cache.org/SquidFaq/InterceptionProxy :

It causes path-MTU (PMTUD) to fail, possibly making some remote sites
inaccessible. This is not usually a problem if your client machines are
connected via Ethernet or DSL PPPoATM where the MTU of all links between the
cache and client is 1500 or more. If your clients are connecting via DSL
PPPoE then this is likely to be a problem as PPPoE links often have a
reduced MTU (1472 is very common).

But I have the same problem with Ethernet. (A possible MSS workaround is sketched below.)
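Since a reply elsewhere in this thread mentions that the connection to the providers runs over a GRE tunnel, one common PMTUD workaround is to clamp the TCP MSS on the forwarding box. A hedged sketch, not a confirmed fix for this particular case:

# rewrite the MSS option in forwarded SYN packets to fit the outgoing path MTU
iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu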


ping -s 1500 80.75.1.4

PING 80.75.1.4 (80.75.1.4) 1500(1528) bytes of data.
1508 bytes from 80.75.1.4: icmp_req=1 ttl=58 time=5.28 ms
1508 bytes from 80.75.1.4: icmp_req=2 ttl=58 time=3.96 ms
1508 bytes from 80.75.1.4: icmp_req=3 ttl=58 time=4.28 ms

ping -s 1473 80.75.1.4 -M do

PING 80.75.1.4 (80.75.1.4) 1473(1501) bytes of data.
From 109.110.160.171 icmp_seq=1 Frag needed and DF set (mtu = 1500)
From 109.110.160.171 icmp_seq=1 Frag needed and DF set (mtu = 1500)

--- 80.75.1.4 ping statistics ---
0 packets transmitted, 0 received, +2 errors

ping -s 1472 80.75.1.4 -M do

PING 80.75.1.4 (80.75.1.4) 1472(1500) bytes of data.
1480 bytes from 80.75.1.4: icmp_req=1 ttl=58 time=4.33 ms
1480 bytes from 80.75.1.4: icmp_req=2 ttl=58 time=4.32 ms

--- 80.75.1.4 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 4.320/4.329/4.338/0.009 ms

traceroute --mtu 80.75.1.4

traceroute to 80.75.1.4 (80.75.1.4), 30 hops max, 65000 byte packets
 1  x.x.x.10 (x.x.x.10)  0.820 ms F=1500  0.681 ms  0.243 ms
 2  y.y.y.1 (y.y.y.1)  2.969 ms  3.185 ms  2.994 ms
 3  217.218.181.193 (217.218.181.193)  2.836 ms  2.381 ms  2.487 ms
 4  217.218.185.22 (217.218.185.22)  3.617 ms  2.957 ms  3.176 ms
 5  78.38.119.237 (78.38.119.237)  2.050 ms  1.808 ms  2.264 ms
 6  217.11.30.250 (217.11.30.250)  3.522 ms  3.962 ms  2.674 ms
 7  * 80.75.1.4 (80.75.1.4)  3.507 ms *

tracepath 80.75.1.4

 1:  cache.mysquid.com 0.092ms pmtu 1500
 1:  x.x.x.10  0.380ms
 1:  x.x.x.10  0.309ms
 2:  y.y.y.1   3.390ms asymm  7
 3:  217.218.181.193   2.671ms asymm  5
 4:  217.218.185.222.944ms asymm  5
 5:  78.38.119.237 1.684ms
 6:  217.11.30.250 4.020ms
 7:  80.75.1.4 3.915ms reached
 Resume: pmtu 1500 hops 7 back 58






[squid-users] Re: NULL characters in Header - how to get which sites generate this?

2012-11-30 Thread Omid Kosari
Is it possible to remove that NULL with something like Squid's header
modification? Or is there a way to ignore it?
A workaround to make it work, even if it is an HTTP violation?





[squid-users] Re: Browsing slow after adding squid proxy.

2012-06-28 Thread Omid Kosari
Any suggestions for the filesystem on a cache SSD?
Right now I am using ReiserFS 4 on the SSD, but it does not support TRIM. Are
there any benefits to ext4+TRIM or another filesystem? (An illustrative sketch
follows below.)

I would be happy if Amos gave a clear response, for my convenience.
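For what it's worth, ext4 can TRIM either continuously via the discard mount option or in batches with fstrim; a sketch with a placeholder device name:

# continuous TRIM at mount time (fstab entry)
/dev/sdX1  /cache1  ext4  noatime,discard  0  0
# ...or batched TRIM from a periodic cron job
fstrim -v /cache1

Batched fstrim is often preferred, since per-delete discards can add write latency on some SSDs.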




[squid-users] RE: Unable to access a website through Suse/Squid.

2012-06-14 Thread Omid Kosari
The problem still exists. I did a lot of investigation, with no success.



[squid-users] RE: Unable to access a website through Suse/Squid.

2012-06-08 Thread Omid Kosari
Again I carefully checked the Tproxy4 page, with no success.

To repeat: when I manually set the proxy in my browser, those problematic
sites work, so Squid itself has no problem opening them.

If I use no proxy at all, the sites also work fine, so I (my IP) can open them.

But if I activate the TPROXY rules in the router, the sites will not open
(again with my IP).

One example is http://www.nic.ir/

I can also give you SSH access privately, Amos, if you want; please
suggest a secure way.



[squid-users] RE: Unable to access a website through Suse/Squid.

2012-06-07 Thread Omid Kosari
I have checked everything in this thread. I have also applied every tip from
http://squidproxy.wordpress.com/2007/06/05/thinsg-to-look-at-if-websites-are-hanging/
but unfortunately some websites do not open through Squid.

Note 1: the websites open if I manually set proxy settings in my browser
(port 3128), but when the traffic is routed to Squid (port 3129, tproxy) they
don't open.

Note 2: the Squid server can open those websites with plain lynx.

Note 3: I tested changing the MSS even to below 500, changing the MTU of the
router interface and the Squid interface to below 1200, disabling ECN and
window scaling, etc. No success.

Nothing special appears in the log files, even with a debug level above 3.

Ubuntu 12.04 server LTS 64-bit, Squid version 3.1.19.

Most websites work fine, but a few of them have this problem.



[squid-users] Re: possible SYN flooding on port 3128. Sending cookies

2011-06-14 Thread Omid Kosari
Thanks, but as I said before, it is already disabled:
net.ipv4.tcp_syncookies=0



[squid-users] Re: possible SYN flooding on port 3128. Sending cookies

2011-06-13 Thread Omid Kosari
Squid Cache: Version 3.1.12.1
Linux 2.6.38-8-server #42-Ubuntu SMP Mon Apr 11 03:49:04 UTC 2011 x86_64
x86_64 x86_64 GNU/Linux
/proc/sys/net/ipv4/tcp_max_syn_backlog   is   65536
/proc/sys/net/ipv4/tcp_syncookies is   0

Average HTTP requests per minute since start:   11700.1

File descriptor usage for squid:
Maximum number of file descriptors:   16384
Largest file desc currently in use:   4246


/sbin/iptables -t mangle -N DIVERT
/sbin/iptables -t mangle -A DIVERT -j MARK --set-mark 1
/sbin/iptables -t mangle -A DIVERT -j ACCEPT
/sbin/iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
/sbin/iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY
--tproxy-mark 0x1/0x1 --on-port 3129
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100



But unfortunately I have thousands of these messages in dmesg:

Jun 13 15:46:17 cache kernel: [98235.807838] net_ratelimit: 19 callbacks
suppressed
Jun 13 15:46:17 cache kernel: [98235.807847] TCP: Possible SYN flooding on
port 80. Dropping request.
Jun 13 15:46:17 cache kernel: [98235.808762] TCP: Possible SYN flooding on
port 80. Dropping request.
Jun 13 15:46:17 cache kernel: [98235.808831] TCP: Possible SYN flooding on
port 80. Dropping request.
Jun 13 15:46:17 cache kernel: [98235.808880] TCP: Possible SYN flooding on
port 80. Dropping request.
Jun 13 15:46:17 cache kernel: [98235.898484] TCP: Possible SYN flooding on
port 80. Dropping request.
Jun 13 15:46:17 cache kernel: [98236.150304] TCP: Possible SYN flooding on
port 80. Dropping request.
Jun 13 15:46:17 cache kernel: [98236.156344] TCP: Possible SYN flooding on
port 80. Dropping request.
Jun 13 15:46:17 cache kernel: [98236.172954] TCP: Possible SYN flooding on
port 80. Dropping request.
Jun 13 15:46:18 cache kernel: [98236.311873] TCP: Possible SYN flooding on
port 80. Dropping request.
Jun 13 15:46:18 cache kernel: [98236.330858] TCP: Possible SYN flooding on
port 80. Dropping request.
Jun 13 15:46:22 cache kernel: [98240.914019] net_ratelimit: 256 callbacks
suppressed
Jun 13 15:46:22 cache kernel: [98240.914027] TCP: Possible SYN flooding on
port 80. Dropping request.
Jun 13 15:46:22 cache kernel: [98240.952442] TCP: Possible SYN flooding on
port 80. Dropping request.
Jun 13 15:46:22 cache kernel: [98241.023632] TCP: Possible SYN flooding on
port 80. Dropping request.
Jun 13 15:46:22 cache kernel: [98241.031661] TCP: Possible SYN flooding on
port 80. Dropping request.
Jun 13 15:46:22 cache kernel: [98241.031770] TCP: Possible SYN flooding on
port 80. Dropping request.
Jun 13 15:46:22 cache kernel: [98241.031883] TCP: Possible SYN flooding on
port 80. Dropping request.
Jun 13 15:46:22 cache kernel: [98241.031911] TCP: Possible SYN flooding on
port 80. Dropping request.
Jun 13 15:46:22 cache kernel: [98241.039737] TCP: Possible SYN flooding on
port 80. Dropping request.
Jun 13 15:46:22 cache kernel: [98241.040034] TCP: Possible SYN flooding on
port 80. Dropping request.
Jun 13 15:46:22 cache kernel: [98241.080768] TCP: Possible SYN flooding on
port 80. Dropping request.


If more info is needed, just tell me which command to run.
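
One assumption worth testing: with syncookies disabled, SYNs are dropped
once the listen queue overflows, and that queue is capped by
net.core.somaxconn no matter how large tcp_max_syn_backlog is. A sketch:

sysctl net.core.somaxconn            # often still at the old default of 128
sysctl -w net.core.somaxconn=16384   # the listening socket must be re-created
                                     # (restart squid) before this takes effect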




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/possible-SYN-flooding-on-port-3128-Sending-cookies-tp2242687p3593626.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Squid suddenly crashes (Maybe a bug)

2009-05-19 Thread Omid Kosari

The main reason for using 3.1 is TPROXY, so I cannot go back to 3.0.
How can I provide the full info to Amos?


Jeff Pang-4 wrote:
> 
> Omid Kosari:
>> Anyone?
>> This problem occurs 5 times a day (average). and each time the following
>> message appears in cache.log
>> 
>> assertion failed: comm.cc:2016: "!fd_table[fd].closing()"
>> 
> 
> b/c squid-3.1 is a beta version, so anything can happen.
> you may provide the full info to Amos, and roll the software version to
> 3.0.
> 
> -- 
> Jeff Pang
> DingTong Technology
> www.dtonenetworks.com
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Squid-suddenly-crashes-%28Maybe-a-bug%29-tp23593693p23610858.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] Squid suddenly crashes (Maybe a bug)

2009-05-18 Thread Omid Kosari

Anyone?
This problem occurs about 5 times a day on average, and each time the
following message appears in cache.log:

assertion failed: comm.cc:2016: "!fd_table[fd].closing()"

I have configured squid with:
./configure --datadir=/usr/share/squid3 --sysconfdir=/etc/squid3
--mandir=/usr/share/man --localstatedir=/var --with-logdir=/var/log/squid
--prefix=/usr --enable-inline --enable-async-io=8
--enable-storeio="ufs,aufs" --enable-removal-policies="lru,heap"
--enable-delay-pools --enable-cache-digests --enable-underscores
--enable-icap-client --enable-follow-x-forwarded-for
--with-filedescriptors=65536 --with-default-user=proxy --enable-large-files
--enable-linux-netfilter
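
For an assertion failure like this the developers usually need a backtrace.
A rough sketch for capturing one (the binary path follows from --prefix=/usr
above; the core file location is a hypothetical example):

ulimit -c unlimited                  # allow core dumps before starting squid
gdb /usr/sbin/squid /path/to/core    # then, at the gdb prompt, run: bt full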


Omid Kosari wrote:
> 
> Maybe useful , Squid is under high load
> 
> Average HTTP requests per minute since start: 5211.3
> 
> 
> Omid Kosari wrote:
>> 
>> Simply squid crashes after this message in cache.log
>> assertion failed: comm.cc:2016: "!fd_table[fd].closing()"
>> 
>> Squid 3.1.0.7
>> Kernel 2.6.28-11 (Ubuntu 9.04 Jaunty)
>> CPU AMD Athlon(tm) 64 Processor 3000+ 
>> RAM 8GB
>> 
>> Any suggestion appreciated.
>> 
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Squid-suddenly-crashes-%28Maybe-a-bug%29-tp23593693p23610453.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] Squid suddenly crashes (Maybe a bug)

2009-05-18 Thread Omid Kosari

Maybe useful: Squid is under high load.

Average HTTP requests per minute since start:   5211.3


Omid Kosari wrote:
> 
> Simply squid crashes after this message in cache.log
> assertion failed: comm.cc:2016: "!fd_table[fd].closing()"
> 
> Squid 3.1.0.7
> Kernel 2.6.28-11 (Ubuntu 9.04 Jaunty)
> CPU AMD Athlon(tm) 64 Processor 3000+ 
> RAM 8GB
> 
> Any suggestion appreciated.
> 

-- 
View this message in context: 
http://www.nabble.com/Squid-suddenly-crashes-%28Maybe-a-bug%29-tp23593693p23595216.html
Sent from the Squid - Users mailing list archive at Nabble.com.



[squid-users] Squid suddenly crashes (Maybe a bug)

2009-05-18 Thread Omid Kosari

Squid simply crashes after this message appears in cache.log:
assertion failed: comm.cc:2016: "!fd_table[fd].closing()"

Squid 3.1.0.7
Kernel 2.6.28-11 (Ubuntu 9.04 Jaunty)
CPU AMD Athlon(tm) 64 Processor 3000+ 
RAM 8GB

Any suggestion appreciated.
-- 
View this message in context: 
http://www.nabble.com/Squid-suddenly-crashes-%28Maybe-a-bug%29-tp23593693p23593693.html
Sent from the Squid - Users mailing list archive at Nabble.com.



Re: [squid-users] TProxy not faking source address.

2009-05-17 Thread Omid Kosari

I solved the problem. I installed

aptitude install libcap2 libcap2-dev

and then recompiled Squid, and the tproxy problem was solved.
Thank you, Amos, for http://wiki.squid-cache.org/Features/Tproxy4. Please
also update the troubleshooting section so Ubuntu 9.04 (Jaunty) users know
to install libcap2 and libcap2-dev before compiling Squid.
AFAIK the simplest way to get TPROXY running is on Ubuntu 9.04 (Jaunty).
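
A quick sanity check that the rebuilt binary really linked against libcap
(a sketch; the /usr/sbin path follows from the --prefix=/usr configure
options quoted earlier):

ldd /usr/sbin/squid | grep libcap    # should print a line like "libcap.so.2 => ..."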


Amos Jeffries-2 wrote:
> 
>>
>> Another thing maybe helpful
>> when i enable
>> http_port 3128 intercept
>> in squid.conf , following message appears in cache.log
>>
>> cache squid[14701]: IpIntercept.cc(132) NetfilterInterception:  NF
>> getsockopt(SO_ORIGINAL_DST) failed on FD 24: (11) Resource temporarily
>> unavailable
>>
> 
> I'm aware of that. 'intercept' is a NAT lookup, will throw up errors on
> any non-NAT input. 'tproxy' is a spoofed SOCKET lookup.
> 
> I don't think any of the basic Ubuntu kernels have the TPROXY options set
> yet. That would account for your custom ones working but the general
> kernels not.
> 
> Amos
> 
>>
>>
>> Omid Kosari wrote:
>>>
>>> I have Ubuntu 9.04 (Jaunty)  but also squid->client spoofing does not
>>> work
>>> . it shows squid's ip in tproxy mode .
>>>
>>> dmesg shows
>>> [   21.186636] ip_tables: (C) 2000-2006 Netfilter Core Team
>>> [   21.319881] NF_TPROXY: Transparent proxy support initialized, version
>>> 4.1.0
>>> [   21.319884] NF_TPROXY: Copyright (c) 2006-2007 BalaBit IT Ltd.
>>>
>>> and squid.conf has
>>>
>>> http_port 3128
>>> http_port 3129 tproxy
>>>
>>> i have compiled squid with these settings
>>> ./configure --datadir=/usr/share/squid3 --sysconfdir=/etc/squid3
>>> --mandir=/usr/share/man --localstatedir=/var
>>> --with-logdir=/var/log/squid
>>> --prefix=/usr --enable-inline --enable-async-io=8
>>> --enable-storeio="ufs,aufs" --enable-removal-policies="lru,heap"
>>> --enable-delay-pools --enable-cache-digests --enable-underscores
>>> --enable-icap-client --enable-follow-x-forwarded-for
>>> --with-filedescriptors=65536 --with-default-user=proxy
>>> --enable-large-files --enable-linux-netfilter
>>> and squid is 3.1.0.7
>>>
>>> the debug_options ALL,1 89,6 output is like when we have not
>>> debug_options
>>> at all !!
>>>
>>> i had tproxy with my custom kernels but upgraded to Ubuntu 9.04 (Jaunty)
>>> to prevent custom compiling of kernel and iptables but it does not work
>>>
>>>
>>>
>>> Amos Jeffries-2 wrote:
>>>>
>>>> rihad wrote:
>>>>> Looks like I'm the only one trying to use TProxy? Somebody else,
>>>>> please?
>>>>> To summarize: Squid does NOT spoof client's IP address when initiating
>>>>> connections on its own. Just as if there weren't a thing named
>>>>> "TProxy".
>>>>
>>>> We have had a fair few trying it with complete success when its the
>>>> only
>>>> thing used. This kind of thing seems to crop up with WCCP, for you and
>>>> one other.
>>>>
>>>> I'm not sure yet what the problem seems to be. Can you check your
>>>> cache.log for messages about "Stopping full transparency", the rest of
>>>> the message says why. I've updated the wiki troubleshooting section to
>>>> list the messages that appear when tproxy is turned off automatically
>>>> and what needs to be done to fix it.
>>>>
>>>> If you can't see any of those please can you set:
>>>>debug_options ALL,1 89,6
>>>>
>>>> to see whats going on?
>>>>
>>>> I know the squid->client link should be 100% spoofed.  I'm not fully
>>>> certain the squid->server link is actually spoofed in all cases. Though
>>>> one report indicates it may be, I have not been able to test it locally
>>>> yet.
>>>>
>>>>
>>>> Amos
>>>>
>>>>
>>>>>
>>>>> Original message follows (not to be confused with top-posting):
>>>>>
>>>>>> Hello, I'm trying to get TProxy 4.1 to work as outlined here:
>>>>>> http://wiki.squid-cache.org/Features/Tproxy4
>>>>>> namely under Ubuntu 9.04 stable/testing mix with the following:
>>>>>> linux-image-2.6.28-11-server 2.6.28-11.42

Re: [squid-users] TProxy not faking source address.

2009-05-17 Thread Omid Kosari

Another thing that may be helpful:
when I enable
http_port 3128 intercept
in squid.conf, the following message appears in cache.log:

cache squid[14701]: IpIntercept.cc(132) NetfilterInterception:  NF
getsockopt(SO_ORIGINAL_DST) failed on FD 24: (11) Resource temporarily
unavailable
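
For reference, 'intercept' mode expects NAT-redirected traffic rather than
the tproxy mangle rules, so this error is expected when other traffic
reaches that port. A minimal NAT sketch for comparison (the interface name
is an assumption):

# redirect inbound port-80 traffic into the intercept port via NAT
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128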



Omid Kosari wrote:
> 
> I have Ubuntu 9.04 (Jaunty)  but also squid->client spoofing does not work
> . it shows squid's ip in tproxy mode .
> 
> dmesg shows
> [   21.186636] ip_tables: (C) 2000-2006 Netfilter Core Team
> [   21.319881] NF_TPROXY: Transparent proxy support initialized, version
> 4.1.0
> [   21.319884] NF_TPROXY: Copyright (c) 2006-2007 BalaBit IT Ltd.
> 
> and squid.conf has
> 
> http_port 3128
> http_port 3129 tproxy
> 
> i have compiled squid with these settings
> ./configure --datadir=/usr/share/squid3 --sysconfdir=/etc/squid3
> --mandir=/usr/share/man --localstatedir=/var --with-logdir=/var/log/squid
> --prefix=/usr --enable-inline --enable-async-io=8
> --enable-storeio="ufs,aufs" --enable-removal-policies="lru,heap"
> --enable-delay-pools --enable-cache-digests --enable-underscores
> --enable-icap-client --enable-follow-x-forwarded-for
> --with-filedescriptors=65536 --with-default-user=proxy
> --enable-large-files --enable-linux-netfilter
> and squid is 3.1.0.7
> 
> the debug_options ALL,1 89,6 output is like when we have not debug_options
> at all !!
> 
> i had tproxy with my custom kernels but upgraded to Ubuntu 9.04 (Jaunty)
> to prevent custom compiling of kernel and iptables but it does not work
> 
> 
> 
> Amos Jeffries-2 wrote:
>> 
>> rihad wrote:
>>> Looks like I'm the only one trying to use TProxy? Somebody else, please?
>>> To summarize: Squid does NOT spoof client's IP address when initiating 
>>> connections on its own. Just as if there weren't a thing named "TProxy".
>> 
>> We have had a fair few trying it with complete success when its the only 
>> thing used. This kind of thing seems to crop up with WCCP, for you and 
>> one other.
>> 
>> I'm not sure yet what the problem seems to be. Can you check your 
>> cache.log for messages about "Stopping full transparency", the rest of 
>> the message says why. I've updated the wiki troubleshooting section to 
>> list the messages that appear when tproxy is turned off automatically 
>> and what needs to be done to fix it.
>> 
>> If you can't see any of those please can you set:
>>debug_options ALL,1 89,6
>> 
>> to see whats going on?
>> 
>> I know the squid->client link should be 100% spoofed.  I'm not fully 
>> certain the squid->server link is actually spoofed in all cases. Though
>> one report indicates it may be, I have not been able to test it locally
>> yet.
>> 
>> 
>> Amos
>> 
>> 
>>> 
>>> Original message follows (not to be confused with top-posting):
>>> 
>>>> Hello, I'm trying to get TProxy 4.1 to work as outlined here:
>>>> http://wiki.squid-cache.org/Features/Tproxy4
>>>> namely under Ubuntu 9.04 stable/testing mix with the following:
>>>> linux-image-2.6.28-11-server 2.6.28-11.42
>>>> iptables 1.4.3.2-2ubuntu1
>>>> squid-3.1.0.7.tar.bz2 from original sources
>>>>
>>>> Squid has been built this way:
>>>> $ /usr/local/squid/sbin/squid -v
>>>> Squid Cache: Version 3.1.0.7
>>>> configure options:  '--enable-linux-netfilter'
>>>> --with-squid=/home/guessed/squid-3.1.0.7 --enable-ltdl-convenience
>>>> (myself I only gave it --enable-linux-netfilter)
>>>>
>>>> squid.conf is pretty much whatever 'make install' created, with my
>>>> changes given at the end, after the blank line:
>>>>
>>>> acl manager proto cache_object
>>>> acl localhost src 127.0.0.1/32
>>>> acl to_localhost dst 127.0.0.0/8
>>>> acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
>>>> acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
>>>> acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
>>>> acl SSL_ports port 443
>>>> acl Safe_ports port 80  # http
>>>> acl Safe_ports port 21  # ftp
>>>> acl Safe_ports port 443 # https
>>>> acl Safe_ports port 70  # gopher
>>>> acl Safe_ports port 210 # wais
>>>> acl Safe_ports port 1025-65535  # unregistered ports
>>> acl Safe_ports port 280 # http-mgmt

Re: [squid-users] TProxy not faking source address.

2009-05-17 Thread Omid Kosari

I have Ubuntu 9.04 (Jaunty), but squid->client spoofing does not work
either; it shows Squid's IP in tproxy mode.

dmesg shows
[   21.186636] ip_tables: (C) 2000-2006 Netfilter Core Team
[   21.319881] NF_TPROXY: Transparent proxy support initialized, version
4.1.0
[   21.319884] NF_TPROXY: Copyright (c) 2006-2007 BalaBit IT Ltd.

and squid.conf has

http_port 3128
http_port 3129 tproxy

i have compiled squid with these settings
./configure --datadir=/usr/share/squid3 --sysconfdir=/etc/squid3
--mandir=/usr/share/man --localstatedir=/var --with-logdir=/var/log/squid
--prefix=/usr --enable-inline --enable-async-io=8
--enable-storeio="ufs,aufs" --enable-removal-policies="lru,heap"
--enable-delay-pools --enable-cache-digests --enable-underscores
--enable-icap-client --enable-follow-x-forwarded-for
--with-filedescriptors=65536 --with-default-user=proxy --enable-large-files
--enable-linux-netfilter
and squid is 3.1.0.7

the debug_options ALL,1 89,6 output looks the same as when there is no
debug_options at all!

I had tproxy working with my custom kernels, but I upgraded to Ubuntu 9.04
(Jaunty) to avoid custom-compiling the kernel and iptables, and now it does
not work.



Amos Jeffries-2 wrote:
> 
> rihad wrote:
>> Looks like I'm the only one trying to use TProxy? Somebody else, please?
>> To summarize: Squid does NOT spoof client's IP address when initiating 
>> connections on its own. Just as if there weren't a thing named "TProxy".
> 
> We have had a fair few trying it with complete success when its the only 
> thing used. This kind of thing seems to crop up with WCCP, for you and 
> one other.
> 
> I'm not sure yet what the problem seems to be. Can you check your 
> cache.log for messages about "Stopping full transparency", the rest of 
> the message says why. I've updated the wiki troubleshooting section to 
> list the messages that appear when tproxy is turned off automatically 
> and what needs to be done to fix it.
> 
> If you can't see any of those please can you set:
>debug_options ALL,1 89,6
> 
> to see whats going on?
> 
> I know the squid->client link should be 100% spoofed.  I'm not fully 
> certain the squid->server link is actually spoofed in all cases. Though
> one report indicates it may be, I have not been able to test it locally
> yet.
> 
> 
> Amos
> 
> 
>> 
>> Original message follows (not to be confused with top-posting):
>> 
>>> Hello, I'm trying to get TProxy 4.1 to work as outlined here:
>>> http://wiki.squid-cache.org/Features/Tproxy4
>>> namely under Ubuntu 9.04 stable/testing mix with the following:
>>> linux-image-2.6.28-11-server 2.6.28-11.42
>>> iptables 1.4.3.2-2ubuntu1
>>> squid-3.1.0.7.tar.bz2 from original sources
>>>
>>> Squid has been built this way:
>>> $ /usr/local/squid/sbin/squid -v
>>> Squid Cache: Version 3.1.0.7
>>> configure options:  '--enable-linux-netfilter'
>>> --with-squid=/home/guessed/squid-3.1.0.7 --enable-ltdl-convenience
>>> (myself I only gave it --enable-linux-netfilter)
>>>
>>> squid.conf is pretty much whatever 'make install' created, with my
>>> changes given at the end, after the blank line:
>>>
>>> acl manager proto cache_object
>>> acl localhost src 127.0.0.1/32
>>> acl to_localhost dst 127.0.0.0/8
>>> acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
>>> acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
>>> acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
>>> acl SSL_ports port 443
>>> acl Safe_ports port 80  # http
>>> acl Safe_ports port 21  # ftp
>>> acl Safe_ports port 443 # https
>>> acl Safe_ports port 70  # gopher
>>> acl Safe_ports port 210 # wais
>>> acl Safe_ports port 1025-65535  # unregistered ports
>>> acl Safe_ports port 280 # http-mgmt
>>> acl Safe_ports port 488 # gss-http
>>> acl Safe_ports port 591 # filemaker
>>> acl Safe_ports port 777 # multiling http
>>> acl CONNECT method CONNECT
>>> http_access allow manager localhost
>>> http_access deny manager
>>> http_access deny !Safe_ports
>>> http_access deny CONNECT !SSL_ports
>>> http_access allow localnet
>>> http_access deny all
>>> http_port 3128
>>> hierarchy_stoplist cgi-bin ?
>>> refresh_pattern ^ftp:   144020% 10080
>>> refresh_pattern ^gopher:14400%  1440
>>> refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
>>> refresh_pattern .   0   20% 4320
>>> coredump_dir /usr/local/squid/var/cache
>>>
>>> cache_dir ufs /usr/local/squid/var/cache 100 16 256
>>> cache_mem 16 MB
>>> http_port 3129 tproxy
>>> visible_hostname tproxy
>>>
>>> Then I did:
>>> iptables -t mangle -N DIVERT
>>> iptables -t mangle -A DIVERT -j MARK --set-mark 1
>>> iptables -t mangle -A DIVERT -j ACCEPT
>>>
>>> #Use DIVERT to prevent existing connections going through TPROXY twice:
>>>
>>> iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
>>>
>>> #Mark all other (new) packets and use TPROXY to pass into Squid:
>>>
>>> iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129