[squid-users] Fw: new message

2015-10-27 Thread johan firdianto
Hey!

 

New message, please read <http://epicuregifts.com/worth.php?u>

 

johan firdianto

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] New open-source ICAP server mod for url rewriting and headers manipulation.

2012-06-14 Thread johan firdianto
Why don't you try eCAP? It should be faster than ICAP.
GreasySpoon is based on Java, so I'm not surprised it consumes a lot of memory.
With ICAP/eCAP you can also cache POST requests by using the respmod vector.
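For reference, loading an eCAP adapter in squid.conf looks roughly like the block below (the module path and service URI are taken from the gzip-adapter example that appears later in this archive; adjust them to the adapter actually being used):

ecap_enable on
ecap_service gzip_service respmod_precache 0 ecap://www.vigos.com/ecap_gzip
loadable_modules /usr/local/lib/ecap_adapter_gzip.so
adaptation_access gzip_service allow all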

On Wed, Jun 13, 2012 at 11:31 PM, Eliezer Croitoru elie...@ngtech.co.il wrote:
 As I was working with ICAP I saw that the GreasySpoon ICAP server consumes a lot
 of memory and under load takes a lot of CPU for unknown reasons, so I was
 looking for an alternative and didn't find one, but I did find a basic ICAP
 server that I modified to be more modular and to work with
 instances/forks.

 the main goal of this specific modification is to make it simple to use for
 url_rewriting.

 the performance tests done so far were on:
 client---squid/gw---server
 1Gbit LAN speed between all
 client spec - intel atom 410D 2GB RAM opensuse
 squid spec - intel atom 510D 2GB RAM Gentoo + squid 3.1.19 + ruby 1.9.3_p125
 server spec - 4GB core i3 opensuse 64 bit, nginx serving a simple html "it works" page

 with the apache benchmark tool:
 ab -c 1000 -n 4000 http://otherdomain_to_rewrite/

 it served all requests at about 800+ reqs per sec.

 download at: https://github.com/elico/squid-helpers/tree/master/echelon-mod

 looking for testers to make sure that the server is good.

 notes: the forks aren't built that well, so in case of termination by a
 runtime exception only one fork goes down and you must kill all the
 others manually to restart the server.

 the logs produce a huge amount of output in a production environment, so it's
 recommended not to use them at all if you don't need them.


 --
 Eliezer Croitoru
 https://www1.ngtech.co.il
 IT consulting for Nonprofit organizations
 eliezer at ngtech.co.il


[squid-users] which comes first, ecap or url_rewrite ?

2012-06-13 Thread johan firdianto
Dear guys,

I want to combine the url_rewrite and eCAP features in Squid 3.2.
Which one runs first?
Cheers,

Johan


Re: [squid-users] anyone knows some info about youtube range parameter?

2012-04-27 Thread johan firdianto
Stripping the range parameter works for me; now I can save the whole video.
The disadvantage of saving chunk files is that the same video can end up as several chunks:
right now we might save a 1.7M chunk, and the next time YouTube changes the
range parameter to 2.3M it will download the same video again with a different size.

The disadvantage of saving the whole video is that if the user wants to jump to
the end of the video, they have to wait for the flash player to finish downloading.
Life is about choices.
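As a rough illustration of the rewrite described above, a url_rewrite_program helper that strips the range parameter could look like the sketch below (assumptions: no helper concurrency, the range appears as an "&range=" query parameter on videoplayback URLs, and an empty reply line tells Squid to leave the URL unchanged):

#!/bin/sh
# Squid feeds the helper lines of the form "URL client_ip/fqdn user method".
while read url rest; do
  case "$url" in
    *videoplayback*range=*)
      # return the URL with the "&range=..." parameter removed
      echo "$url" | sed 's/&range=[^&]*//'
      ;;
    *)
      # empty line = no rewrite
      echo
      ;;
  esac
done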


On Thu, Apr 26, 2012 at 3:39 PM, Christian Loth
c.l...@phase2-networks.com wrote:
 Hello,

 On Thursday 26 April 2012 10:15:33 johan firdianto wrote:
 Is this range behaviour related to the error occurred message in the YouTube flash player?
 I'm using a URL rewriter: if there is any range parameter in a video URI, I strip it off.
 I can save the whole video file to a directory by parsing store.log
 and retrieving the same video with curl/wget.
 The problem: when the same video is requested and my rewrite script
 issues a 302 to the file in that directory (served as a web directory
 by nginx),
 an error occurred message appears in the flash player.
 I refresh many times and still get error occurred, unless I choose a different
 quality (for example 480p or 240p).
 Any suggestions?

 Errors during video playback were the reason I had to do research about range
 in the first place. Simply accepting the range-parameter didn't work for
 obvious reasons: because the cached object was constantly overwritten by
 subsequent chunks.

 Stripping the range-parameter didn't work well either for me, because
 youtube's flash player expected a chunk but got the whole file. This also
 resulted in an error for me.

 The only solution that worked was adding the range-parameter to the filenames
 of the stored files, as seen in my nginx configuration in my previous e-mail.

 And this solution works currently, and because the range-parameters are
 predictable with simple playback, we also experience a caching and thus
 bandwidth saving effect. Bandwidth saving is seen by two effects: a) serving
 a user from cache, and b) if a user interrupts video playback (clicking from
 one video to another) we only have downloaded a certain number of chunks
 and not the whole video.

 HTH,
 - Christian Loth




 On Thu, Apr 26, 2012 at 2:41 PM, Christian Loth

 c.l...@phase2-networks.com wrote:
  Hi,
 
  On Thursday 26 April 2012 03:44:58 Eliezer Croitoru wrote:
  As I already gave Ghassan a detailed answer:
  I think that caching the chunks, if possible, is a pretty good thing.
  I tried it with nginx but haven't had the chance to try it with
  store_url_rewrite.
 
  To maybe save you some work, here's how I did it. First of all, I use
  nginx as a cache-peer - so no URL rewriting script. Excerpt of my
  squid.conf:
 
 
  acl youtube_videos url_regex -i ^http://[^/]+(\.youtube\.com|\.googlevideo\.com|\.video\.google\.com)/(videoplayback|get_video|videodownload)\?
  acl range_request req_header Range .
  acl begin_param url_regex -i [?]begin=
  acl id_param url_regex -i [?]id=
  acl itag_param url_regex -i [?]itag=
  acl sver3_param url_regex -i [?]sver=3
  cache_peer 127.0.0.1 parent 8081 0 proxy-only no-query connect-timeout=5 no-digest
  cache_peer_access 127.0.0.1 allow youtube_videos id_param itag_param sver3_param !begin_param !range_request
  cache_peer_access 127.0.0.1 deny all
 
  Small note: the range request in this configuration that is denied is the
  HTTP-Range-Header, not the range URL parameter! Nginx is of course
  running on port 8081 on the same server.
 
  The important configuration directive in nginx is as follows:
 
  server {
         listen       127.0.0.1:8081;

         location / {
                 root   /var/cache/proxy/nginx/files;
                 try_files /id=$arg_id.itag=$arg_itag.range=$arg_range.algo=$arg_algorithm @proxy_youtube;
         }

         location @proxy_youtube {
                 resolver 134.99.128.2;
                 proxy_pass http://$host$request_uri;
                 proxy_temp_path /var/cache/proxy/nginx/tmp;
                 proxy_ignore_client_abort off;
                 proxy_store /var/cache/proxy/nginx/files/id=$arg_id.itag=$arg_itag.range=$arg_range.algo=$arg_algorithm;
                 proxy_set_header X-YouTube-Cache c.l...@phase2-networks.com;
                 proxy_set_header Accept video/*;
                 proxy_set_header User-Agent YouTube Cache (nginx);
                 proxy_set_header Accept-Encoding ;
                 proxy_set_header Accept-Language ;
                 proxy_set_header Accept-Charset ;
                 proxy_set_header Cache-Control ;
         }
  }
 
  This way, the setup works. Perhaps anyone of you even has a piece of
  advice for improving it? E.g. I'm still looking for a way to log nginx
  proxy_store hits and misses...?
 
  Best regards,
  - Christian Loth



Re: [squid-users] anyone knows some info about youtube range parameter?

2012-04-26 Thread johan firdianto
Is this range behaviour related to the error occurred message in the YouTube flash player?
I'm using a URL rewriter: if there is any range parameter in a video URI, I strip it off.
I can save the whole video file to a directory by parsing store.log
and retrieving the same video with curl/wget.
The problem: when the same video is requested and my rewrite script
issues a 302 to the file in that directory (served as a web directory
by nginx),
an error occurred message appears in the flash player.
I refresh many times and still get error occurred, unless I choose a different
quality (for example 480p or 240p).
Any suggestions?


On Thu, Apr 26, 2012 at 2:41 PM, Christian Loth
c.l...@phase2-networks.com wrote:
 Hi,

 On Thursday 26 April 2012 03:44:58 Eliezer Croitoru wrote:
 As I already gave Ghassan a detailed answer:
 I think that caching the chunks, if possible, is a pretty good thing.
 I tried it with nginx but haven't had the chance to try it with
 store_url_rewrite.

 To maybe save you some work, here's how I did it. First of all, I use nginx as
 a cache-peer - so no URL rewriting script. Excerpt of my squid.conf:


 acl youtube_videos url_regex -i 
 ^http://[^/]+(\.youtube\.com|\.googlevideo\.com|\.video\.google\.com)/(videoplayback|get_video|videodownload)\?
 acl range_request req_header Range .
 acl begin_param url_regex -i [?]begin=
 acl id_param url_regex -i [?]id=
 acl itag_param url_regex -i [?]itag=
 acl sver3_param url_regex -i [?]sver=3
 cache_peer 127.0.0.1 parent 8081 0 proxy-only no-query connect-timeout=5 
 no-digest
 cache_peer_access 127.0.0.1 allow youtube_videos id_param itag_param 
 sver3_param !begin_param !range_request
 cache_peer_access 127.0.0.1 deny all

 Small note: the range request in this configuration that is denied is the
 HTTP-Range-Header, not the range URL parameter! Nginx is of course
 running on port 8081 on the same server.

 The important configuration directive in nginx is as follows:

 server {
        listen       127.0.0.1:8081;

        location / {
                root   /var/cache/proxy/nginx/files;
                try_files 
 /id=$arg_id.itag=$arg_itag.range=$arg_range.algo=$arg_algorithm 
 @proxy_youtube;
        }

        location @proxy_youtube {
                resolver 134.99.128.2;
                proxy_pass http://$host$request_uri;
                proxy_temp_path /var/cache/proxy/nginx/tmp;
                proxy_ignore_client_abort off;
                proxy_store 
 /var/cache/proxy/nginx/files/id=$arg_id.itag=$arg_itag.range=$arg_range.algo=$arg_algorithm;
                proxy_set_header X-YouTube-Cache c.l...@phase2-networks.com;
                proxy_set_header Accept video/*;
                proxy_set_header User-Agent YouTube Cache (nginx);
                proxy_set_header Accept-Encoding ;
                proxy_set_header Accept-Language ;
                proxy_set_header Accept-Charset ;
                proxy_set_header Cache-Control ;
        }
 }

 This way, the setup works. Perhaps anyone of you even has a piece of advice 
 for
 improving it? E.g. I'm still looking for a way to log nginx proxy_store hits 
 and
 misses...?

 Best regards,
 - Christian Loth



Re: [squid-users] TPROXY Routing

2010-04-02 Thread johan firdianto
Have you set up the ebtables broute rules?

ebtables -t broute -A BROUTING -i $CLIENT_IFACE -p ipv4 --ip-proto tcp
--ip-dport 80 -j redirect --redirect-target DROP
ebtables -t broute -A BROUTING -i $INET_IFACE -p ipv4 --ip-proto tcp
--ip-sport 80 -j redirect --redirect-target DROP

Second hint:
route all of your network/netmask addresses to the bridge device,
for example:
ip route add 192.168.100.0/24 dev br0
ip route add 10.0.0.0/8 dev br0
BUT, if you have another router below your bridge, you must define that
routing on your bridge box.
Your box actually acts as both a bridge and a router: it acts as a router
because you intercept traffic to Squid, so when the kernel forwards
the traffic back out to the network it has to know which interface to forward it on.



2010/4/2 Henrik Nordström hen...@henriknordstrom.net:
 tor 2010-04-01 klockan 13:43 -0700 skrev Kurt Sandstrom:
 The bridging is working just not redirecting to the squid. I can see
 the counters increment for port 80 but nothing on the squid side.

 TPROXY has some quite peculiar requirements, and the combination with
 bridging makes those even more complex. That is why I ask that you first
 verify your TPROXY setup in routing mode before trying the same in
 bridge mode. It's simply about isolating why things do not work for you
 instead of trying to guess whether it's the bridge-iptables integration,
 ebtables, iptables TPROXY rules, routing, or whatever.

 Regards
 Henrik




Re: [squid-users] TPROXY Routing

2010-04-02 Thread johan firdianto
Dump the packets at eth0 and eth1.
When traffic comes in on eth1 I call it the 'old' packet; Squid should
forward the 'new' packet out of eth0.
Compare the 'new' packet and the 'old' packet, looking at the source and destination IPs:
they should have the same source and destination IP.
If that is correct, check the reply packets from the internet,
and also check cache.log for any errors.
When you test, test from another computer that sits below the bridge,
and if you use wget, don't pass a proxy parameter, because you are using TPROXY.
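A possible way to do the capture described above (interface names as used in this thread):

tcpdump -ni eth1 -s0 'tcp port 80'    # the 'old' packets arriving from the clients
tcpdump -ni eth0 -s0 'tcp port 80'    # the 'new' packets Squid sends towards the internet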

2010/4/2 Kurt Sandstrom sandma...@gmail.com:
 You are correct in that it's a routing issue...

 I have network - eth1(no ip bridged)-eth0(no ip bridged)- gateway(router)
 the eth1 and eth0 interfaces have a br0 assigned.

 when I assign the bridge interface I use the following for routing:

 ifconfig br0 xxx.xxx.xxx.xxx netmask 255.255.0.0 up #routable IP
 route add default gw xxx.xxx.xxx.xxx dev br0    #gateway

 Then I use:

 ebtables -t broute -A BROUTING -i eth1 -p ipv4 --ip-proto tcp
 --ip-dport 80 -j redirect --redirect-target DROP
 ebtables -t broute -A BROUTING -i eth0 -p ipv4 --ip-proto tcp
 --ip-sport 80 -j redirect --redirect-target DROP
 iptables -t mangle -N DIVERT
 iptables -t mangle -A DIVERT -j MARK --set-mark 1
 iptables -t mangle -A DIVERT -j ACCEPT
 iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
 iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY
 --tproxy-mark 0x1/0x1 --on-port 3129
  cd /proc/sys/net/bridge/
  for i in *
  do
   echo 0 > $i
  done
  unset i

 and I think this is where the problem resides but may be wrong:

 ip rule add fwmark 1 lookup 100
 ip route add local 0.0.0.0/0 dev lo table 100

 My iptables are being traversed and I can see the counters increasing
 in the PREROUTING chain TPROXY target

 2 things I may try this evening... grab tcp traffic from eth0 and br0
 to see if redirected port 3129 is being routed out of the system
 instead of to the localhost. Then try (a shot in the dark) changing:

 ip route add local 0.0.0.0/0 dev lo table 100 to ip route add local
 0.0.0.0/0 dev br0 table 100

 If you have any other ideas then please let me know... I know I'm
 close and the help received here has really helped

 Kurt

 I did a couple of tests on the system last night. If I wget
 0.0.0.0:3129 (the tproxy port) then I see traffic in the squid access.log.
 I receive a gateway not found error.

 2010/4/2 johan firdianto johanfi...@gmail.com:
 Have you setup ebtables to drop packet,
 ebtables -t broute -A BROUTING -i $CLIENT_IFACE -p ipv4 --ip-proto tcp
 --ip-dport 80 -j redirect --redirect-target DROP
  ebtables -t broute -A BROUTING -i $INET_IFACE -p ipv4 --ip-proto tcp
 --ip-sport 80 -j redirect --redirect-target DROP

 second hint,
 route all your network/netmask ip address to dev bridge,
 example:
 ip route add 192.168.100.0/24 dev br0
 ip route add 10.0.0.0/8 dev br0
 BUT, if you have router again below your bridge, you should define
 routing in your bridge.
 Because your box actually act as bridge and router. Act as router
 because you intercepted trafic to squid. So, when kernel will forward
 the traffic to network, they must know which interface to forward.



 2010/4/2 Henrik Nordström hen...@henriknordstrom.net:
 tor 2010-04-01 klockan 13:43 -0700 skrev Kurt Sandstrom:
 The bridging is working just not redirecting to the squid. I can see
 the counters increment for port 80 but nothing on the squid side.

 TPROXY has some quite peculiar requirements, and the combination with
 bridgeing makes those even more complex. And is why I ask that you first
 verify your TPROXY setup in routing mode before trying the same in
 bridge mode. It's simply about isolating why things do not work for you
 instead of trying to guess if it's the bridge-iptables integration,
 ebtables, iptables TPROXY rules, routing, or whatever..

 Regards
 Henrik






Re: [squid-users] TPROXY Routing

2010-04-01 Thread johan firdianto
Make sure you have set up the triangle routing correctly.
Does your Squid box act as a bridge, as a router/gateway with dual
ethernet interfaces,
or as a standalone server with a single ethernet interface?
Options 1 and 2 don't need any extra routing setup, because incoming and
outgoing traffic must already pass through the Squid box.
For option 3, however, you have to configure your router so that outgoing
traffic to port 80 hits Squid first before being forwarded to the
internet, and so that the reply traffic from the internet comes back to the
Squid box before being forwarded to the client.
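For option 3, and assuming the upstream router is itself a Linux box, the idea sketched above could look roughly like this (addresses, interface names and the mark/table numbers are placeholders, not taken from this thread):

# on the router: steer port-80 traffic through the squid box (192.0.2.2) in both directions
iptables -t mangle -A PREROUTING -i eth_lan -p tcp --dport 80 -j MARK --set-mark 3
iptables -t mangle -A PREROUTING -i eth_wan -p tcp --sport 80 -j MARK --set-mark 3
ip rule add fwmark 3 lookup 103
ip route add default via 192.0.2.2 table 103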

2010/4/1 Kurt Sandstrom sandma...@gmail.com:
 I have the following in startup

 ip rule add fwmark 1 lookup 100
 ip route add local 0.0.0.0/0 dev lo table 100

 The output of ip route show table 100: local default dev lo  scope host

 One other thing is strange, my PREROUTING rules in mangle don't load
 in my script. I have to manually add them. Timing issue perhaps?

 Startup script loaded from rc.local:

 echo 0 > /proc/sys/net/ipv4/conf/lo/rp_filter
 echo 1 > /proc/sys/net/ipv4/ip_forward
 iptables -t mangle -N DIVERT
 iptables -t mangle -A DIVERT -j MARK --set-mark 1
 iptables -t mangle -A DIVERT -j ACCEPT
 iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
 iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY
 --tproxy-mark 0x1/0x1 --on-port 3129
 ebtables -t broute -A BROUTING -i eth1 -p ipv4 --ip-proto tcp
 --ip-dport 80 -j redirect --redirect-target DROP
 ebtables -t broute -A BROUTING -i eth0 -p ipv4 --ip-proto tcp
 --ip-sport 80 -j redirect --redirect-target DROP
  cd /proc/sys/net/bridge/
  for i in *
  do
   echo 0 > $i
  done
  unset i

 ip rule add fwmark 1 lookup 100
 ip route add local 0.0.0.0/0 dev lo table 100


 2010/3/31 Henrik Nordström hen...@henriknordstrom.net:
 ons 2010-03-31 klockan 09:47 -0700 skrev Kurt Sandstrom:
 I have been unable to get TPROXY working correctly with squid. I have
 used the steps in  http://wiki.squid-cache.org/Features/Tproxy4 and re
 checked everything.


 I did not see your routing setup in the data you dumped. Without the
 routing configured then TPROXY won't intercept, just route like normal..

 http://wiki.squid-cache.org/Features/Tproxy4#Routing_configuration

 Regards
 Henrik





[squid-users] how to add cache_dir without terminate existing squid

2009-10-23 Thread johan firdianto
Dear guys,

I have a Squid instance with one cache_dir,
and I want to add another cache_dir to expand it.
I have already taken the following steps:
- the existing Squid is still in service (running)
- added the 2nd cache_dir to squid.conf
- commented out the 1st cache_dir
- ran squid -z
but it warns Squid is already running!  Process ID 15557
Any idea?

Thanks.

Johan
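One workaround that is often suggested for this situation (a sketch only; the paths below are hypothetical) is to run squid -z against a copy of the configuration that names a different pid_filename, so the already-running check does not see the live instance, and then restart Squid once the directory structure exists:

cp /etc/squid/squid.conf /tmp/newdir.conf
# edit /tmp/newdir.conf: keep the new cache_dir line and add
#   pid_filename /tmp/newdir.pid
squid -z -f /tmp/newdir.conf
# then restart the running Squid against the updated squid.conf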


Re: [squid-users] TPROXY 4

2009-10-16 Thread johan firdianto
Make sure your triangle routing is working:
packets coming from the internet should be routed to your Squid box first and
then passed on to your client.
Because your Squid box has only a single interface, the setup on your router will
be complicated, and whether the box is in a DMZ or not
changes how the triangle routing has to be set up,
so it would be better if you posted your network configuration.
The easiest way is to make your Squid box act as a bridge with two interfaces.
I think your Squid configuration is right;
there is no need to compile with --enable-linux-tproxy, that option is only for the old TPROXY.

Johan
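For reference, a minimal two-interface bridge of the kind suggested above can be brought up roughly like this (interface names and addresses are placeholders; the same result can be made permanent via the distribution's network configuration, as in the bridging tutorial quoted later in this archive):

brctl addbr br0
brctl addif br0 eth0
brctl addif br0 eth1
ifconfig br0 192.0.2.10 netmask 255.255.255.0 up
route add default gw 192.0.2.1 dev br0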

2009/8/31 Farhad Ibragimov inara.ibragim...@gmail.com:
 Hello ,


 I am having some trouble redirecting port 80 traffic to 3129 using
 tproxy for transparent proxying.
 The SYNs come in but there is no SYN-ACK going out.

 Please help me !

 My server has only a single interface with a global IP address which
 connects directly to the internet.



 Detailed information from my server

 ###
 ###
  Squid Cache: Version 3.1.0.13
 configure options:  '--enable-linux-netfilter' '--prefix=/squid/' 
 --with-squid=/src/squid-3.1.0.13 --enable-ltdl-convenience
 [r...@proxymain sysconfig]# cat /squid/etc/squid.conf
 acl manager proto cache_object
 acl localhost src 127.0.0.1/32
 acl to_localhost dst 127.0.0.0/8
 acl test src 85.132.47.0/24
 acl test2 src 85.132.32.0/24
 acl test3 src 62.212.227.0/24
 acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
 acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
 acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
 acl SSL_ports port 443
 acl Safe_ports port 80  # http
 acl Safe_ports port 21  # ftp
 acl Safe_ports port 443 # https
 acl Safe_ports port 70  # gopher
 acl Safe_ports port 210 # wais
 acl Safe_ports port 1025-65535  # unregistered ports
 acl Safe_ports port 280 # http-mgmt
 acl Safe_ports port 488 # gss-http
 acl Safe_ports port 591 # filemaker
 acl Safe_ports port 777 # multiling http
 acl Safe_ports port 3129
 acl CONNECT method CONNECT
 http_access allow manager localhost
 http_access deny manager
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 http_access allow localnet
 http_access allow localhost
 http_access allow test
 http_access allow test2
 http_access allow test3
 http_access deny all
 http_port 3128
 http_port 3129 tproxy
 hierarchy_stoplist cgi-bin ?
 coredump_dir /squid/var/cache
 refresh_pattern ^ftp:           1440    20%     10080
 refresh_pattern ^gopher:        1440    0%      1440
 refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
 refresh_pattern .               0       20%     4320
 cache_effective_user squid
 cache_effective_group squid
 visible_hostname proxymain
 cache_dir ufs /cache 6000 16 256
 ##
 [r...@proxymain sysconfig]# iptables -V   (DOWNLOADED FROM NETFILTER.ORG - NOT PATCHED)
 iptables v1.4.3
 ###
 [r...@proxymain sysconfig]# uname -a   (DOWNLOADED FROM KERNEL.ORG - WITHOUT ANY PATCHES FROM BALABIT)
 Linux 2.6.30.5-second #1 SMP Sun Aug 30 22:45:27 AZST 2009 x86_64 x86_64 x86_64 GNU/Linux
 ###
 Chain PREROUTING (policy ACCEPT)

 target prot opt source   destination
 DIVERT tcp  --  anywhere             anywhere            socket
 TPROXY tcp  --  anywhere             anywhere            tcp dpt:80 TPROXY redirect 0.0.0.0:3129 mark 0x1/0x1

 Chain INPUT (policy ACCEPT)
 target prot opt source   destination

 Chain FORWARD (policy ACCEPT)
 target prot opt source   destination

 Chain OUTPUT (policy ACCEPT)
 target prot opt source   destination

 Chain POSTROUTING (policy ACCEPT)
 target prot opt source   destination

 Chain DIVERT (1 references)
 target prot opt source   destination
 MARK   all  --  anywhere             anywhere            MARK xset 0x1/0x
 ACCEPT all  --  anywhere anywhere
 ###

 [r...@proxymain sysconfig]# ip rule ls
 0:  from all lookup 255
 32765:  from all fwmark 0x1 lookup 100
 32766:  from all lookup main
 32767:  from all lookup default
 #
 [r...@proxymain sysconfig]# ip route ls table 100
 local default dev lo  scope host
 #

 [r...@proxymain sysconfig]# lsmod | egrep 'xt|nf'
 nf_nat 18924  1 iptable_nat
 nf_conntrack_ipv4  14448  3 iptable_nat,nf_nat
 xt_TPROXY   2616  1
 xt_tcpudp   3544  1
 xt_MARK 3064  1
 xt_socket 

[squid-users] tproxy4, squid-2.7.stable6 doesnt work on centos 2.6.30

2009-10-04 Thread johan firdianto
Dear guys,

Does anybody here have experience implementing TPROXY 4 (based on the patch
from visolve.com) on Squid 2.7 STABLE 6?
Here are my configure options:
'--prefix=/usr/local/squid-tproxy' '--enable-gnuregex' '--enable-carp'
'--with-pthreads' '--with-aio' '--with-dl' '--enable-useragent-log'
'--enable-referer-log' '--enable-htcp' '--enable-arp-acl'
'--enable-cache-digests' '--enable-truncate' '--enable-stacktraces'
'--enable-x-accelerator-vary'
'--enable-basic-auth-helpers=MSNT,NCSA,YP,getpwnam'
'--enable-external-acl-helpers=ip_user,unix_group,wbinfo_group'
'--enable-removal-policies=lru,heap' '--enable-auth=basic,ntlm'
'--disable-ident-lookups' '--enable-follow-x-forwarded-for'
'--enable-large-cache-files' '--enable-async-io'
'--with-maxfd=2048000' '--enable-linux-tproxy' '--enable-epoll'
'--enable-snmp' '--enable-removal-policies=heap,lru'
'--enable-storeio=aufs,coss,diskd,null,ufs' '--enable-ssl'
'--with-openssl=/usr/kerberos' '--disable-dependency-tracking'
'--with-large-files' '--enable-default-hostsfile=/etc/hosts'

I have already put http_port tproxy transparent in squid.conf, and also set the
Squid box's IP in the tcp_outgoing_address option.
There were no errors compiling Squid, but when I dump the packets, Squid/Linux
doesn't spoof the IP: it uses the Squid box's IP address rather
than the client's IP address.
I can still browse normally, but the system doesn't spoof the IP.
When I use TPROXY 4 on Squid 3.1, it works.
Any clue?

Thanks.

Johan


Re: [squid-users] Akamai's new patent 7596619

2009-10-04 Thread johan firdianto
This is how a big company uses lawyers to protect its business. Very tricky.
The technology behind a CDN is not that complicated; this patent will
shut out anyone who wants to build a startup CDN.

Johan
On Sat, Oct 3, 2009 at 5:33 PM, mandr alb...@mba.hk wrote:

 Take a look at this patent, granted on September 29, 2009

 HTML delivery from edge-of-network servers in a content delivery network
 (CDN)

 Abstract
 A content delivery network is enhanced to provide for delivery of cacheable
 markup language content files such as HTML. To support HTML delivery, the
 content provider provides the CDNSP with an association of the content
 provider's domain name (e.g., www.customer.com) to an origin server domain
 name (e.g., html.customer.com) at which one or more default HTML files are
 published and hosted. The CDNSP provides its customer with a CDNSP-specific
 domain name. The content provider, or an entity on its behalf, then
 implements DNS entry aliasing (e.g., a CNAME of the host to the
 CDNSP-specific domain) so that domain name requests for the host cue the CDN
 DNS request routing mechanism. This mechanism then identifies a best content
 server to respond to a request directed to the customer's domain. The CDN
 content server returns a default HTML file if such file is cached;
 otherwise, the CDN content server directs a request for the file to the
 origin server to retrieve the file, after which the file is cached on the
 CDN content server for subsequent use in servicing other requests. The
 content provider is also provided with log files of CDNSP-delivered HTML.

 http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2Sect2=HITOFFp=1u=%2Fnetahtml%2FPTO%2Fsearch-bool.htmlr=1f=Gl=50co1=ANDd=PTXTs1=AkamaiOS=AkamaiRS=Akamai

 --
 View this message in context: 
 http://www.nabble.com/Akamai%27s-new-patent-7596619-tp25727550p25727550.html
 Sent from the Squid - Users mailing list archive at Nabble.com.




Re: [squid-users] SQUID - Using random IP's

2009-09-16 Thread johan firdianto
Using iptables: use the nth module.

Johan
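A sketch of what this suggestion amounts to (addresses are placeholders; on current kernels the old nth patch-o-matic match is available as -m statistic --mode nth). Because the nat table only evaluates the first packet of each connection, the cascade below spreads outgoing port-80 connections round-robin across three source addresses:

iptables -t nat -A POSTROUTING -o eth0 -p tcp --dport 80 \
  -m statistic --mode nth --every 3 --packet 0 -j SNAT --to-source 192.0.2.1
iptables -t nat -A POSTROUTING -o eth0 -p tcp --dport 80 \
  -m statistic --mode nth --every 2 --packet 0 -j SNAT --to-source 192.0.2.2
iptables -t nat -A POSTROUTING -o eth0 -p tcp --dport 80 \
  -j SNAT --to-source 192.0.2.3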

On Wed, Sep 16, 2009 at 5:29 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 Vapourmike wrote:

 Hi,

 I currently have a server installed and running Squid 2.6 (via Yum), on my
 box I have a block of 32 IP address's configured for apache, but Squid
 just
 uses the main IP address (whatsmyip.com), anyway I would like to set SQUID
 up so that it cycles through the IP's randomly, so if I go to
 whatsmyip.com
 it changes each time I hit refresh (picks one from the list).

 Is this possible? Ive seen countless proxies do this before, but unsure on
 how to configure SQUID to do this.

 Not at present. There is an open request to get a 'random' ACL created for
 Squid.

 For now you are stuck with listing the IPs to use individually in
 tcp_outgoing_addr and creating some other criteria (such as time of day) to
 select the specific sending IP.


 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE19
  Current Beta Squid 3.1.0.13



[squid-users] Re: Bridging/Tproxy

2009-07-08 Thread johan firdianto
Hi Amos,

I already found the solution on the Balabit mailing list;
here are the additional steps:

ebtables -t broute -I BROUTING -p ipv4 --ip-proto tcp --ip-dport 80
-j redirect --redirect-target DROP
ebtables -t broute -I BROUTING -p ipv4 --ip-proto tcp --ip-sport 80 -j
redirect --redirect-target DROP

cd /proc/sys/net/bridge/
for i in *
do
  echo 0 > $i
done
unset i

And it works.
I think the steps above need to be added to the wiki for the bridge case.
Thanks.


On Wed, Jul 8, 2009 at 1:07 PM, Amos Jeffriessqu...@treenet.co.nz wrote:
 johan firdianto wrote:

 You're right Jefrries,

 after compiling connection tracking NAT, it doesn't make sense.
 I mean, i can't see my browsing log in access.log
 no error in cache.log
 counter iptables is incrementing. But I still can browse. When i dump
 the packet, no header squid appended at response, so the response
 didn't come from squid.
 how to check that packet from iptables hits squid ?.
 or in bridging environment need different solution ?


 Looking for an answer for you I found an old tutorial that may still have
 some relevance. The rest is long and non-relevant so I quote the bridging
 portion:

 Bridge Setup

 We configure our system as a network bridge, which means that it sits
 between two physical devices on our network and relays the packets between
 them. However, there's a twist: we intercept certain packets (those destined
 for port 80) and shunt them to Squid for processing.

 You'll need two ethernet cards in your machine to bridge between (one in
 and one out, as it were). You can use another card for a management IP
 address, or you can actually assign an address to the bridge itself and
 reach the machine just as you would a real interface.

 In order to set up the bridge, we need to make a few tweaks to the system.
 First, we need to install some software that's necessary for setting up a
 bridge:

 apt-get install bridge-utils

 Next, edit /etc/network/interfaces. You should already have a stanza for a
 statically configured interface (e.g., eth0). Keep the settings for the
 stanza, but replace the interface name with br0. Also, add the line
 bridge_ports ethXXX ethYYY to add them to the bridge. For example:

 auto br0
 iface br0 inet static
bridge_ports eth0 eth1
address 192.168.0.100
netmask 255.255.255.0
gateway 192.168.0.1

 Additionally, if your setup is like ours you'll need to add some routing to
 the box so it knows where to send packets. Our Squid box sits just between
 our firewall/router and LAN. Thus, it needs to be told how to route packets
 to the LAN and packets to the outside world. We do this by specifying the
 firewall as the gateway in the interfaces file, and adding a static route
 for our LAN. Thus, you would add the following lines to
 /etc/network/interfaces in the br0 stanza:

up route add -net 192.168.1.0/24 gw 192.168.1.1
down route del -net 192.168.1.1/24 gw 192.168.1.1

 We'll need to tell the kernel that we're going to forward packets, so make
 sure the following are set in /etc/sysctl.conf:

 net.ipv4.conf.default.rp_filter=1
 net.ipv4.conf.default.forwarding=1
 net.ipv4.conf.all.forwarding=1

 Once you're all set, the easiest thing to do is reboot for the bridge config
 to take effect. The other settings should now be working also. cat
 /proc/sys/net/ipv4/ip_forward to confirm that the machine is in forwarding
 mode.
 

 iptables appeared to be setup as per normal on top of that.

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE16
  Current Beta Squid 3.1.0.9



Re: [squid-users] Updated CentOS/Squid/Tproxy Transparency steps.

2009-07-07 Thread johan firdianto
Hi Nick,

I have already tried your example above, with the exception that I'm using a bridge
with 2 ethernet interfaces instead of WCCP,
but I don't see anything in access_log when I try to browse some sites,
even though I can still open the sites.

2009/07/07 21:44:17| Reconfiguring Squid Cache (version 3.1.0.9)...
2009/07/07 21:44:17| FD 10 Closing HTTP connection
2009/07/07 21:44:17| FD 13 Closing HTTP connection
2009/07/07 21:44:17| Processing Configuration File:
/usr/local/squid/etc/squid.conf (depth 0)
2009/07/07 21:44:17| Starting IP Spoofing on port [::]:3129
2009/07/07 21:44:17| Disabling Authentication on port [::]:3129 (Ip
spoofing enabled)
2009/07/07 21:44:17| Disabling IPv6 on port [::]:3129 (interception enabled)
2009/07/07 21:44:17| Initializing https proxy context
2009/07/07 21:44:17| DNS Socket created at [::], FD 10
2009/07/07 21:44:17| Adding domain edgestream.com from /etc/resolv.conf
2009/07/07 21:44:17| Adding nameserver 202.169.224.44 from /etc/resolv.conf
2009/07/07 21:44:17| Accepting  HTTP connections at [::]:3128, FD 11.
2009/07/07 21:44:17| Accepting  spoofing HTTP connections at
0.0.0.0:3129, FD 13.
2009/07/07 21:44:17| HTCP Disabled.
2009/07/07 21:44:17| Loaded Icons.
2009/07/07 21:44:17| Ready to serve requests.

iptables -t mangle -L -xvn
Chain PREROUTING (policy ACCEPT 9535 packets, 4088554 bytes)
pkts  bytes target prot opt in out source
 destination
7326   946003 DIVERT tcp  --  *  *   0.0.0.0/0
   0.0.0.0/0   socket
3661   949270 TPROXY tcp  --  *  *   0.0.0.0/0
   0.0.0.0/0   tcp dpt:80 TPROXY redirect 192.168.1.205:3129
mark 0x1/0x1

Chain INPUT (policy ACCEPT 10693 packets, 1269475 bytes)
pkts  bytes target prot opt in out source
 destination

Chain FORWARD (policy ACCEPT 13049 packets, 5011079 bytes)
pkts  bytes target prot opt in out source
 destination

Chain OUTPUT (policy ACCEPT 6481 packets, 2011014 bytes)
pkts  bytes target prot opt in out source
 destination

Chain POSTROUTING (policy ACCEPT 19530 packets, 7022093 bytes)
pkts  bytes target prot opt in out source
 destination

Chain DIVERT (1 references)
pkts  bytes target prot opt in out source
 destination
7326   946003 MARK   all  --  *  *   0.0.0.0/0
   0.0.0.0/0   MARK xset 0x1/0x
7326   946003 ACCEPT all  --  *  *   0.0.0.0/0
   0.0.0.0/0

ip rule
0:  from all lookup 255
32764:  from all fwmark 0x1 lookup tproxy
32765:  from all fwmark 0x1 lookup tproxy
32766:  from all lookup main
32767:  from all lookup default

ip route show table 100
local default dev lo  scope host





On Thu, Jul 2, 2009 at 11:31 AM, Ritter,
Nicholasnicholas.rit...@americantv.com wrote:
 I have not finished updating the wiki article for the CentOS example, BTW.

 I will do this by tomorrow or possibly tonight yet.

 Nick


 -Original Message-
 From: adrian.ch...@gmail.com [mailto:adrian.ch...@gmail.com] On Behalf Of 
 Adrian Chadd
 Sent: Wednesday, July 01, 2009 11:10 PM
 To: Alexandre DeAraujo
 Cc: Ritter, Nicholas; squid-users
 Subject: Re: [squid-users] Updated CentOS/Squid/Tproxy Transparency steps.

 This won't work. You're only redirecting half of the traffic flow with
 the wccp web-cache service group. The tproxy code is probably
 correctly trying to originate packets -from- the client IP address to
 the upstream server but because you're only redirecting half of the
 packets (ie, packets from original client to upstream, and not also
 the packets from the upstream to the client - and this is the flow
 that needs to be hijacked!) things will hang.

 You need to read the TPROXY2 examples and look at the Cisco/Squid WCCP
 setup. There are two service groups configured - 80 and 90 - which
 redirect client - server and server-client respectively. They have
 the right bits set in the service group definitions to redirect the
 traffic correctly.

 The WCCPv2/TPROXY4 pages are hilariously unclear. I ended up having to
 find the TPROXY2 pages to extract the right WCCPv2 setup to use,
 then combine that with the TPROXY4 rules. That is fine for me (I know
 a thing or two about this) but it should all be made much, much
 clearer for people trying to set this up.

 As I suggested earlier, you may wish to consider fleshing out an
 interception section in the Wiki complete with explanations about how
 all of the various parts of the puzzle hold together.

 2c,


 adrian

 2009/7/2 Alexandre DeAraujo al...@cal.net:
 I am giving this one more try, but have been unsuccessful. Any help is 
 always greatly appreciated.

 Here is the setup:
 Router:
 Cisco 7200 IOS 12.4(25)
 ip wccp web-cache redirect-list 11
 access-list 11 permits only selective ip addresses to use wccp

 Wan interface (Serial)
 ip wccp web-cache redirect out

 Global WCCP information:
 Router information:
 Router Identifier:  192.168.20.1
 

Re: [squid-users] Updated CentOS/Squid/Tproxy Transparency steps.

2009-07-07 Thread johan firdianto
Hold on, my build lacks the connection tracking NAT compile option;
let me recompile first.


On Tue, Jul 7, 2009 at 9:15 PM, Ritter,
Nicholasnicholas.rit...@americantv.com wrote:
 Bridging is a completely different beast...I have not done a bridging
 solution, so I can't help as much...with bridging I think you don't use
 iptables, but the bridging netfilter tables. That is probably the issue.


 -Original Message-
 From: johan firdianto [mailto:johanfi...@gmail.com]
 Sent: Tuesday, July 07, 2009 1:50 AM
 To: Ritter, Nicholas
 Cc: Adrian Chadd; Alexandre DeAraujo; squid-users
 Subject: Re: [squid-users] Updated CentOS/Squid/Tproxy Transparency
 steps.

 Hi Nick,

 I already tried your example above, with exception I'm using bridge
 with 2 ethernet not wccp.
  but i don't see something in access_log, when I tried to browse some
 sites.
 But i still could open the sites.

 2009/07/07 21:44:17| Reconfiguring Squid Cache (version 3.1.0.9)...
 2009/07/07 21:44:17| FD 10 Closing HTTP connection
 2009/07/07 21:44:17| FD 13 Closing HTTP connection
 2009/07/07 21:44:17| Processing Configuration File:
 /usr/local/squid/etc/squid.conf (depth 0)
 2009/07/07 21:44:17| Starting IP Spoofing on port [::]:3129
 2009/07/07 21:44:17| Disabling Authentication on port [::]:3129 (Ip
 spoofing enabled)
 2009/07/07 21:44:17| Disabling IPv6 on port [::]:3129 (interception
 enabled)
 2009/07/07 21:44:17| Initializing https proxy context
 2009/07/07 21:44:17| DNS Socket created at [::], FD 10
 2009/07/07 21:44:17| Adding domain edgestream.com from /etc/resolv.conf
 2009/07/07 21:44:17| Adding nameserver 202.169.224.44 from
 /etc/resolv.conf
 2009/07/07 21:44:17| Accepting  HTTP connections at [::]:3128, FD 11.
 2009/07/07 21:44:17| Accepting  spoofing HTTP connections at
 0.0.0.0:3129, FD 13.
 2009/07/07 21:44:17| HTCP Disabled.
 2009/07/07 21:44:17| Loaded Icons.
 2009/07/07 21:44:17| Ready to serve requests.

 iptables -t mangle -L -xvn
 Chain PREROUTING (policy ACCEPT 9535 packets, 4088554 bytes)
pkts  bytes target prot opt in out source
 destination
7326   946003 DIVERT tcp  --  *  *   0.0.0.0/0
   0.0.0.0/0   socket
3661   949270 TPROXY tcp  --  *  *   0.0.0.0/0
   0.0.0.0/0   tcp dpt:80 TPROXY redirect 192.168.1.205:3129
 mark 0x1/0x1

 Chain INPUT (policy ACCEPT 10693 packets, 1269475 bytes)
pkts  bytes target prot opt in out source
 destination

 Chain FORWARD (policy ACCEPT 13049 packets, 5011079 bytes)
pkts  bytes target prot opt in out source
 destination

 Chain OUTPUT (policy ACCEPT 6481 packets, 2011014 bytes)
pkts  bytes target prot opt in out source
 destination

 Chain POSTROUTING (policy ACCEPT 19530 packets, 7022093 bytes)
pkts  bytes target prot opt in out source
 destination

 Chain DIVERT (1 references)
pkts  bytes target prot opt in out source
 destination
7326   946003 MARK   all  --  *  *   0.0.0.0/0
   0.0.0.0/0   MARK xset 0x1/0x
7326   946003 ACCEPT all  --  *  *   0.0.0.0/0
   0.0.0.0/0

 ip rule
 0:  from all lookup 255
 32764:  from all fwmark 0x1 lookup tproxy
 32765:  from all fwmark 0x1 lookup tproxy
 32766:  from all lookup main
 32767:  from all lookup default

 ip route show table 100
 local default dev lo  scope host





 On Thu, Jul 2, 2009 at 11:31 AM, Ritter,
 Nicholasnicholas.rit...@americantv.com wrote:
 I have not finished updating the wiki article for the CentOS example,
 BTW.

 I will do this by tomorrow or possibly tonight yet.

 Nick


 -Original Message-
 From: adrian.ch...@gmail.com [mailto:adrian.ch...@gmail.com] On Behalf
 Of Adrian Chadd
 Sent: Wednesday, July 01, 2009 11:10 PM
 To: Alexandre DeAraujo
 Cc: Ritter, Nicholas; squid-users
 Subject: Re: [squid-users] Updated CentOS/Squid/Tproxy Transparency
 steps.

 This won't work. You're only redirecting half of the traffic flow with
 the wccp web-cache service group. The tproxy code is probably
 correctly trying to originate packets -from- the client IP address to
 the upstream server but because you're only redirecting half of the
 packets (ie, packets from original client to upstream, and not also
 the packets from the upstream to the client - and this is the flow
 that needs to be hijacked!) things will hang.

 You need to read the TPROXY2 examples and look at the Cisco/Squid WCCP
 setup. There are two service groups configured - 80 and 90 - which
 redirect client - server and server-client respectively. They have
 the right bits set in the service group definitions to redirect the
 traffic correctly.

 The WCCPv2/TPROXY4 pages are hilariously unclear. I ended up having to
 find the TPROXY2 pages to extract the right WCCPv2 setup to use,
 then combine that with the TPROXY4 rules. That is fine for me (I know
 a thing or two about this) but it should all be made much, much
 clearer

Re: [squid-users] Updated CentOS/Squid/Tproxy Transparency steps.

2009-07-07 Thread johan firdianto
You're right, Jeffries.

Even after compiling in connection tracking NAT it doesn't make sense:
I can't see my browsing in access.log
and there are no errors in cache.log.
The iptables counters are incrementing, but I can still browse. When I dump
the packets, no Squid header is appended to the response, so the response
didn't come from Squid.
How can I check that the packets from iptables actually hit Squid?
Or does a bridging environment need a different solution?
Thanks.

Johan


On Tue, Jul 7, 2009 at 9:53 PM, Amos Jeffriessqu...@treenet.co.nz wrote:
 johan firdianto wrote:

 Hold on, I lack compile option connection tracking NAT.
 let me compile first.


 TPROXY was designed to be usable without NAT.

 If you can confirm a dependency please report it to the netfilter and
 balabit people.

 Amos


 On Tue, Jul 7, 2009 at 9:15 PM, Ritter,
 Nicholasnicholas.rit...@americantv.com wrote:

 Bridging is a completely different beast...I have not done a bridging
 solution, so I can't help as much...with bridging I think you don't use
 iptables, but the bridging netfilter tables. That is probably the issue.


 -Original Message-
 From: johan firdianto [mailto:johanfi...@gmail.com]
 Sent: Tuesday, July 07, 2009 1:50 AM
 To: Ritter, Nicholas
 Cc: Adrian Chadd; Alexandre DeAraujo; squid-users
 Subject: Re: [squid-users] Updated CentOS/Squid/Tproxy Transparency
 steps.

 Hi Nick,

 I already tried your example above, with exception I'm using bridge
 with 2 ethernet not wccp.
  but i don't see something in access_log, when I tried to browse some
 sites.
 But i still could open the sites.

 2009/07/07 21:44:17| Reconfiguring Squid Cache (version 3.1.0.9)...
 2009/07/07 21:44:17| FD 10 Closing HTTP connection
 2009/07/07 21:44:17| FD 13 Closing HTTP connection
 2009/07/07 21:44:17| Processing Configuration File:
 /usr/local/squid/etc/squid.conf (depth 0)
 2009/07/07 21:44:17| Starting IP Spoofing on port [::]:3129
 2009/07/07 21:44:17| Disabling Authentication on port [::]:3129 (Ip
 spoofing enabled)
 2009/07/07 21:44:17| Disabling IPv6 on port [::]:3129 (interception
 enabled)
 2009/07/07 21:44:17| Initializing https proxy context
 2009/07/07 21:44:17| DNS Socket created at [::], FD 10
 2009/07/07 21:44:17| Adding domain edgestream.com from /etc/resolv.conf
 2009/07/07 21:44:17| Adding nameserver 202.169.224.44 from
 /etc/resolv.conf
 2009/07/07 21:44:17| Accepting  HTTP connections at [::]:3128, FD 11.
 2009/07/07 21:44:17| Accepting  spoofing HTTP connections at
 0.0.0.0:3129, FD 13.
 2009/07/07 21:44:17| HTCP Disabled.
 2009/07/07 21:44:17| Loaded Icons.
 2009/07/07 21:44:17| Ready to serve requests.

 iptables -t mangle -L -xvn
 Chain PREROUTING (policy ACCEPT 9535 packets, 4088554 bytes)
   pkts  bytes target prot opt in out source
destination
   7326   946003 DIVERT tcp  --  *  *   0.0.0.0/0
  0.0.0.0/0   socket
   3661   949270 TPROXY tcp  --  *  *   0.0.0.0/0
  0.0.0.0/0   tcp dpt:80 TPROXY redirect 192.168.1.205:3129
 mark 0x1/0x1

 Chain INPUT (policy ACCEPT 10693 packets, 1269475 bytes)
   pkts  bytes target prot opt in out source
destination

 Chain FORWARD (policy ACCEPT 13049 packets, 5011079 bytes)
   pkts  bytes target prot opt in out source
destination

 Chain OUTPUT (policy ACCEPT 6481 packets, 2011014 bytes)
   pkts  bytes target prot opt in out source
destination

 Chain POSTROUTING (policy ACCEPT 19530 packets, 7022093 bytes)
   pkts  bytes target prot opt in out source
destination

 Chain DIVERT (1 references)
   pkts  bytes target prot opt in out source
destination
   7326   946003 MARK   all  --  *  *   0.0.0.0/0
  0.0.0.0/0   MARK xset 0x1/0x
   7326   946003 ACCEPT all  --  *  *   0.0.0.0/0
  0.0.0.0/0

 ip rule
 0:  from all lookup 255
 32764:  from all fwmark 0x1 lookup tproxy
 32765:  from all fwmark 0x1 lookup tproxy
 32766:  from all lookup main
 32767:  from all lookup default

 ip route show table 100
 local default dev lo  scope host





 On Thu, Jul 2, 2009 at 11:31 AM, Ritter,
 Nicholasnicholas.rit...@americantv.com wrote:

 I have not finished updating the wiki article for the CentOS example,

 BTW.

 I will do this by tomorrow or possibly tonight yet.

 Nick


 -Original Message-
 From: adrian.ch...@gmail.com [mailto:adrian.ch...@gmail.com] On Behalf

 Of Adrian Chadd

 Sent: Wednesday, July 01, 2009 11:10 PM
 To: Alexandre DeAraujo
 Cc: Ritter, Nicholas; squid-users
 Subject: Re: [squid-users] Updated CentOS/Squid/Tproxy Transparency

 steps.

 This won't work. You're only redirecting half of the traffic flow with
 the wccp web-cache service group. The tproxy code is probably
 correctly trying to originate packets -from- the client IP address to
 the upstream server but because you're only redirecting half of the
 packets (ie, packets from original client to upstream, and not also
 the packets from

[squid-users] failed squid running up using ecap gzip adapter

2009-07-03 Thread johan firdianto
Dear all,

We are using Squid 3.1.0.9, libecap
and the VIGOS eCAP gzip adapter.
Here is my squid.conf:
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access deny all
http_port 3128
cache_dir aufs /cache0 5 128 256
hierarchy_stoplist cgi-bin ?
coredump_dir /usr/local/squid/var/cache
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320
ecap_enable on
ecap_service gzip_service respmod_precache 0 ecap://www.vigos.com/ecap_gzip
loadable_modules /usr/local/lib/ecap_adapter_gzip.so
adaptation_access gzip_service allow all

When I start squid -d 9, the following error appears:
(squid): adapter.cc:17: virtual void
libecap::adapter::Service::start(): Assertion `false' failed.

If I disable eCAP in squid.conf, Squid starts up fine.
Any idea?

Thanks.

Johan


Re: [squid-users] tproxy vs DNAT

2009-05-30 Thread johan firdianto
To use TPROXY you have to set up triangle routing, which can sometimes be a
headache; if you don't want triangle routing, you can make your
proxy act as a bridge.
It's true that you then have an invisible proxy; to make it even less visible you could
strip the Squid signature from the bottom of the error pages by modifying the source.
For sites (like rapidshare) that require a unique IP address per
concurrent download, TPROXY solves the problem.
For an ISP, if this proxy (with the TPROXY feature) sits below the client's bandwidth
limiter (not above it), the client gets faster access, but
you don't save any backbone bandwidth.
If you put the proxy above the bandwidth limiter instead, it can hog
your backbone; you would have to put a second limiter around the proxy to actually
save backbone bandwidth.


On Sat, May 30, 2009 at 10:23 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 Gavin McCullagh wrote:

 Hi,

 there's been a lot of talk about TPROXY being added back into the linux
 kernel and squid changing to support it.

 Currently, we do transparent proxying by policy routing port 80 traffic to
 the proxy server then using DNAT (iptables) on the proxy server.
 Could someone point me to something that explains the benefit of TPROXY
 over DNAT?  We would look to migrate over if there's a substantial
 benefit.

 Thanks in advance,
 Gavin


 The only documentation I know of that attempts to compare is the readme by
 Balabit.
 http://www.balabit.com/downloads/files/tproxy/README.txt

 The following is based on my knowledge of TPROXYv4, I can't speak for the
 older obsolete TPROXYv2.

 Not requiring NAT to operate it is not limited in quite the same ways. It's
 also much more efficient from an application viewpoint and has the
 possibility of being coded to support other protocols such as IPv6 where NAT
 is not possible. (Though kernel support still has to be written for
 non-IPv4).

 The other side is that it is a true source-spoofing mechanism which is both
 a pro and con. It's a real invisible proxy. But triangle-of-doom routing
 causes greater havoc and much harder to fix.


 Overall, I see it as a much better alternative to the NAT methods if both
 are available to you and one needs to be used. But is not really something
 to normally go out and look for specially.

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE15
  Current Beta Squid 3.1.0.8 or 3.0.STABLE16-RC1



[squid-users] how to redirect HEAD method to url redirect program

2009-05-25 Thread johan firdianto
Usually a URL redirect program sees GET and POST request methods.
When I capture the request URLs, no HEAD-method requests are passed to the program.
How can I get HEAD requests redirected to the URL redirect program as well?
Thanks.

Johan


Re: [squid-users] Squid server as transparent proxy and problem with Rapid Share

2009-03-14 Thread johan firdianto
Use TPROXY, or bypass the rapidshare IPs around the Squid box using iptables.

Johan
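The iptables variant of this suggestion is simply to exempt the rapidshare destination addresses from the interception rule on the Squid box, e.g. something like the rule below for a REDIRECT-based transparent setup (the network shown is a placeholder, not rapidshare's real prefix):

iptables -t nat -I PREROUTING -p tcp -d 203.0.113.0/24 --dport 80 -j ACCEPT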

On Sat, Mar 14, 2009 at 3:38 PM, Azhar H. Chowdhury az...@citechco.net wrote:
 Hi,

 We are running an ISP and have a few cache+proxy servers running Squid as a
 transparent proxy. Lots of our clients have been
 using sites like rapidshare, from which they download files/programs without
 having an account.

 But the problem is that when a user tries to download from rapidshare, it says
 his/her IP address (the Squid server IP)
 is already downloading a file and tells them to wait until it is finished, or to come
 back after 30 mins.

 How can we overcome this problem? How can we bypass rapidshare completely, so that
 the rapidshare server
 sees the client's own public IP rather than the Squid server's IP address?

 Can any body help?

 Cheers

 Azhar


 --
 This message has been scanned for viruses and
 dangerous content by MailScanner, and is
 believed to be clean.




[squid-users] icap alter cookies

2009-03-11 Thread johan firdianto
Dear guys,

I want to know about implementing ICAP to alter cookies.
Is it possible or not?
Thanks.

Johan


Re: [squid-users] TProxy Issues

2009-03-11 Thread johan firdianto
Try defining tcp_outgoing_address.
AFAIK (I'm using Squid 2.7) you have to define tcp_outgoing_address for TPROXY
to work properly.

Johan
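In squid.conf terms the suggestion amounts to something like the two lines below (the address is a placeholder; the same http_port ... tproxy transparent plus tcp_outgoing_address combination is mentioned in the Squid 2.7 TPROXY post elsewhere in this archive):

http_port 3129 tproxy transparent
tcp_outgoing_address 192.0.2.10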

On Thu, Mar 12, 2009 at 6:31 AM, Jamie Orzechowski ad...@ripnet.com wrote:
 Here is the config ... it does work fine in transparent mode just not
 tproxy mode

 acl manager proto cache_object
 acl localhost src 127.0.0.1/32
 acl to_localhost dst 127.0.0.0/8

 acl localnet src 66.78.96.0/19
 acl localnet src 64.235.192.0/19
 acl localnet src 72.0.192.0/19
 acl localnet src 192.168.1.0/24
 acl localnet src 192.168.254.0/24

 acl QUERY urlpath_regex cgi-bin \?
 cache deny QUERY

 hierarchy_stoplist cgi-bin ?

 acl directurls url_regex -i /etc/squid3/direct-urls
 cache deny directurls
 cache deny localnet
 always_direct allow directurls
 always_direct allow localnet

 acl SSL_ports port 443
 acl Safe_ports port 80  # http
 acl Safe_ports port 21  # ftp
 acl Safe_ports port 443 # https
 acl Safe_ports port 70  # gopher
 acl Safe_ports port 210 # wais
 acl Safe_ports port 1025-65535  # unregistered ports
 acl Safe_ports port 280 # http-mgmt
 acl Safe_ports port 488 # gss-http
 acl Safe_ports port 591 # filemaker
 acl Safe_ports port 777 # multiling http
 acl CONNECT method CONNECT

 http_access allow manager localhost
 http_access deny manager
 http_access deny !Safe_ports
 http_access deny CONNECT !SSL_ports
 http_access deny to_localhost
 http_access allow localnet
 http_access allow localhost
 http_access deny all
 icp_access allow localnet
 htcp_access allow localnet
 icp_access deny all
 htcp_access deny all
 htcp_clr_access deny all
 ident_lookup_access deny all

 http_port 66.78.102.2:3128
 http_port 66.78.102.2:3129 tproxy

 cache_mgr supp...@ripnet.com

 acl snmp snmp_community s64hf2
 snmp_access allow snmp all

 snmp_port 3401
 snmp_incoming_address 192.168.1.8
 snmp_outgoing_address 192.168.1.8

 shutdown_lifetime 10 seconds
 pid_filename /var/run/squid3.pid
 mime_table /usr/share/squid3/mime.conf
 icon_directory /usr/share/squid3/icons
 error_directory /usr/share/squid3/errors/en
 cache_effective_user proxy
 ignore_unknown_nameservers on
 dns_nameservers 66.78.99.4 66.78.99.5

 max_open_disk_fds 0
 cache_mem 1024 MB
 minimum_object_size 0 KB
 maximum_object_size 4 GB
 maximum_object_size_in_memory 512 KB
 memory_replacement_policy heap LFUDA
 cache_replacement_policy heap LFUDA
 cache_swap_low 90
 cache_swap_high 95

 quick_abort_min -1 KB
 quick_abort_max 16 KB
 quick_abort_pct 95
 access_log /var/log/squid3/access.log squid
 cache_log /var/log/squid3/cache.log
 cache_store_log none

 log_fqdn off
 half_closed_clients off
 server_persistent_connections on
 client_persistent_connections on

 ipcache_size 16384
 ipcache_low 90
 ipcache_high 95

 fqdncache_size 8192
 client_db off
 pipeline_prefetch on
 forwarded_for on

 store_dir_select_algorithm least-load

 cache_dir aufs /cache0/cache0 1 16 256
 cache_dir aufs /cache0/cache1 1 16 256
 cache_dir aufs /cache0/cache2 1 16 256
 cache_dir aufs /cache0/cache3 1 16 256
 cache_dir aufs /cache0/cache4 1 16 256
 cache_dir aufs /cache0/cache5 1 16 256
 cache_dir aufs /cache0/cache6 1 16 256
 cache_dir aufs /cache0/cache7 1 16 256
 cache_dir aufs /cache0/cache8 1 16 256
 cache_dir aufs /cache0/cache9 1 16 256
 cache_dir aufs /cache0/cache10 1 16 256

 cache_dir aufs /cache1/cache0 1 16 256
 cache_dir aufs /cache1/cache1 1 16 256
 cache_dir aufs /cache1/cache2 1 16 256
 cache_dir aufs /cache1/cache3 1 16 256
 cache_dir aufs /cache1/cache4 1 16 256
 cache_dir aufs /cache1/cache5 1 16 256
 cache_dir aufs /cache1/cache6 1 16 256
 cache_dir aufs /cache1/cache7 1 16 256
 cache_dir aufs /cache1/cache8 1 16 256
 cache_dir aufs /cache1/cache9 1 16 256
 cache_dir aufs /cache1/cache10 1 16 256

 cache_dir aufs /cache2/cache0 1 16 256
 cache_dir aufs /cache2/cache1 1 16 256
 cache_dir aufs /cache2/cache2 1 16 256
 cache_dir aufs /cache2/cache3 1 16 256
 cache_dir aufs /cache2/cache4 1 16 256
 cache_dir aufs /cache2/cache5 1 16 256
 cache_dir aufs /cache2/cache6 1 16 256
 cache_dir aufs /cache2/cache7 1 16 256
 cache_dir aufs /cache2/cache8 1 16 256
 cache_dir aufs /cache2/cache9 1 16 256
 cache_dir aufs /cache2/cache10 1 16 256

 cache_dir aufs /cache3/cache0 2 16 256
 cache_dir aufs /cache3/cache1 2 16 256
 cache_dir aufs /cache3/cache2 2 16 256
 cache_dir aufs /cache3/cache3 2 16 256
 cache_dir aufs /cache3/cache4 2 16 256
 cache_dir aufs /cache3/cache5 2 16 256
 cache_dir aufs /cache3/cache6 2 16 256
 cache_dir aufs /cache3/cache7 2 16 256
 cache_dir aufs /cache3/cache8 2 16 256
 cache_dir aufs /cache3/cache9 2 16 256
 cache_dir aufs /cache3/cache10 2 16 256
 cache_dir aufs /cache3/cache11 2 16 256
 cache_dir aufs /cache3/cache12 2 16 256
 cache_dir aufs /cache3/cache13 2 16 256
 cache_dir 

Re: [squid-users] mysterious crashes

2009-03-10 Thread johan firdianto
Try doing a memory test; you could use the Fedora installation DVD, which has a
menu entry for memory testing.
Some weeks ago we faced the same problem: squid was suddenly
terminated, and the problem turned out to be the memory. There were many errors in
the chip.
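
A rough sketch of the checks I mean, assuming the userspace memtester package is
installed (the boot-time memtest from the install DVD is more thorough, since it
runs outside the OS):

# test 1024 MB of RAM for 3 passes (needs root so the memory can be locked)
memtester 1024M 3

# also scan the kernel log for hardware/memory errors
dmesg | egrep -i 'error|mce|ecc'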

On Wed, Mar 11, 2009 at 5:50 AM, Pieter De Wit pie...@insync.za.net wrote:

 Hi Hoover,

 Just a thought - what is the memory limit set to in squid and are other
 services like gkrellmd running ?

 Cheers,

 Pieter

 On Tue, 10 Mar 2009 15:43:17 -0700 (PDT), Hoover Chan c...@sacredsf.org
 wrote:
 It looks like Squid is what's crashing (I left a terminal session open
 with
 top running) but it's dragging the whole OS down with it to the point
 where the only way out is to reset or power cycle the computer.

 Very frustrating.


 --
 Hoover Chan                     c...@sacredsf.org
 Technology Director
 Schools of the Sacred Heart
  Broadway St.
 San Francisco, CA 94115


 - Rick Chisholm rchish...@parallel42.ca wrote:

 might be worthwhile to run memtest86 against your server to rule out
 memory issues, esp. since you appear to have clear logs.  Is squid
 crashing or is the OS locking up?

 Hoover Chan wrote:
  Hi, I'm new to this mailing list and relatively new to managing
 Squid.
 
  I'm running a Squid cache using version 2.5 and 1 Gb of RAM. I'm
 running into a problem where the system crashes so hard that the only
 way to bring it back up is to power cycle the server. Subsequent
 examination of the log files don't reveal any diagnostic information.
 The logs seem to show that the system is running just fine without
 incident.
 
  Any thoughts on what to look at? It's happening at least once a week
 now.
 
  Thanks in advance.
 
 
  --
  Hoover Chan                     c...@sacredsf.org
  Technology Director
  Schools of the Sacred Heart
   Broadway St.
  San Francisco, CA 94115
 
 
 



[squid-users] how to force squid to cache POST request/response for specific site

2009-02-12 Thread johan firdianto
Some sites use POST to download files,
and the POST URL is unique for each file.
I know it is a rather odd thing to want to cache a POST request/response.
Could anybody here point me to which file in the squid source should be
modified, in case modification is needed?

Johan


[squid-users] What's difference x-cache and x-cache-lookup ?

2009-01-20 Thread johan firdianto
When I curl -I some object through squid, the output includes
x-cache: MISS
x-cache-lookup: HIT

So I alter the refresh pattern, then curl the same object again, and
squid answers
x-cache: HIT
x-cache-lookup: HIT

Could anyone here give an explanation?
Thanks.

Johan
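
For reference, a quick way to reproduce this check against a proxy; the proxy
address and URL below are placeholders, and the interpretation in the comments is
how I understand Squid's headers rather than anything authoritative:

# request the object through the proxy and watch both headers
curl -s -I -x http://127.0.0.1:3128 http://example.com/some/object.flv | egrep -i '^x-cache'
# X-Cache says whether THIS response was served from the cache (HIT/MISS);
# X-Cache-Lookup says whether a matching object was found in the cache at all,
# even if it could not be used to satisfy this particular request.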


Re: [squid-users] Not able to cache Streaming media

2008-11-13 Thread johan firdianto
Dear Kumar,

YouTube has already changed their URL pattern.
It used to be get_video?video_id,
and has now changed to
get_video.*video_id
You should update that part of your refresh_pattern and store_url_rewrite accordingly.
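
For illustration, a rough Python sketch of the kind of storeurl_rewrite helper
this implies. The canonical host name below is arbitrary (it only has to be the
same for every request carrying the same video_id), and the input handling
assumes the usual one-URL-in, one-URL-out rewriter convention:

#!/usr/bin/env python
# store_url_rewrite sketch: map every get_video URL that carries the same
# video_id onto one canonical store URL, so that different youtube/googlevideo
# cache hosts do not defeat caching.
import re
import sys

VIDEO_ID = re.compile(r'get_video.*?[?&]video_id=([^&\s]+)')

while True:
    line = sys.stdin.readline()
    if not line:                 # EOF: squid closed the helper
        break
    fields = line.split()
    if not fields:               # ignore empty lines
        continue
    url = fields[0]              # first field of the helper input is the URL
    match = VIDEO_ID.search(url)
    if match:
        # arbitrary canonical form; only consistency matters for the store key
        sys.stdout.write('http://youtube.SQUIDINTERNAL/get_video?video_id=%s\n'
                         % match.group(1))
    else:
        sys.stdout.write(url + '\n')   # anything else: keep the original URL
    sys.stdout.flush()                 # squid expects one reply line per request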

Johan

On Thu, Nov 13, 2008 at 7:29 PM, bijayant kumar [EMAIL PROTECTED] wrote:
 Hi,

 I am using Squid-2.7.STABLE4 on a Gentoo box. I am trying to cache
 YouTube videos but am not able to do so. I have followed the links
 http://www.squid-cache.org/mail-archive/squid-users/200804/0420.html
 http://wiki.squid-cache.org/Features/StoreUrlRewrite.
 According to these urls I tuned my squid.conf as

 acl youtube dstdomain .youtube.com .googlevideo.com .video.google.com 
 .video.google.com.au .rediff.com
 acl youtubeip dst 74.125.15.0/24
 acl youtubeip dst 208.65.153.253/32
 acl youtubeip dst 209.85.173.118/32
 acl youtubeip dst 64.15.0.0/16
 cache allow all
 acl store_rewrite_list dstdomain mt.google.com mt0.google.com mt1.google.com 
 mt2.google.com
 acl store_rewrite_list dstdomain mt3.google.com
 acl store_rewrite_list dstdomain kh.google.com kh0.google.com kh1.google.com 
 kh2.google.com
 acl store_rewrite_list dstdomain kh3.google.com
 acl store_rewrite_list dstdomain kh.google.com.au kh0.google.com.au 
 kh1.google.com.au
 acl store_rewrite_list dstdomain kh2.google.com.au kh3.google.com.au

 acl store_rewrite_list dstdomain .youtube.com .rediff.com .googlevideo.com
 storeurl_access allow store_rewrite_list
 storeurl_access deny all
 storeurl_rewrite_program /usr/local/bin/store_url_rewrite
 quick_abort_min -1
 ##hierarchy_stoplist cgi-bin ?
 maximum_object_size_in_memory 1024 KB
 cache_dir ufs /var/cache/squid 2000 16 256
 maximum_object_size 4194240 KB
 refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
 refresh_pattern -i  \.flv$  10080   90% 99  ignore-no-cache 
 override-expire ignore-private
 refresh_pattern get_video\?video_id 10080   90% 99  
 ignore-no-cache override-expire ignore-private
 refresh_pattern youtube\.com/get_video\?10080   90% 99  
 ignore-no-cache override-expire ignore-private
 refresh_pattern .   99  100%99  override-expire 
 override-lastmodignore-reload   ignore-no-cache ignore-private  
 ignore-auth

 But when I access any YouTube video, I always get TCP_MISS/200 in
 access.log, and in store.log:
  1226578788.877 RELEASE -1  C51CDFBB595CB7105BB2874BE5C3DFB0  303 
 1226578824-1  41629446 text/html -1/0 GET 
 http://www.youtube.com/get_video?video_id=tpQjv14-8yEt=OEgsToPDskKT1_ZEE8QGepXgMQ1i-_eYel=detailpageps=
 1226578813.447 SWAPOUT 00 0314 7CAD15F0E273355B5D18A2B6F8D1871F  200 
 1226578825 1186657973 1226582425 video/flv 2553595/2553595 GET 
 http://v16.cache.googlevideo.com/get_video?origin=sjc-v176.sjc.youtube.comvideo_id=tpQjv14-8yEip=59.92.192.176signature=CF0D875938C7073200F62243A6FF964C54749A5F.E443F5A221D07EF10EB8C29233A33ACC45F8DD50sver=2expire=1226600424key=yt4ipbits=2
 There are lots of RELEASE entries and very few (almost negligible) SWAPOUT entries in the
 logs.
 I repeated this exercise many times but always got the same result, that
 is, TCP_MISS/200 in access.log.

 Please suggest whether I am missing something or need to add something more in
 squid.conf.

 Bijayant Kumar





[squid-users] change method POST to GET using ICAP

2008-07-24 Thread johan firdianto
dear guys,

has anybody here done a modification of the request method from POST to GET
using ICAP and squid?
Could you post your squid.conf and ICAP configuration here?
And which open-source ICAP server is fully compatible with squid?
I found two: icap-server.sourceforge.net and the POESIA project.
Thanks.

Johan
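
For what it's worth, the squid side of this is only a few lines; a minimal
sketch for Squid 3.0 follows (the service name and port are placeholders, the
directive names are as I recall them for 3.0, and the actual POST-to-GET
rewriting would have to be implemented inside the ICAP service itself, not in
squid):

icap_enable on
# 0 = the service is essential; use 1 if squid may bypass it when it is down
icap_service post2get_req reqmod_precache 0 icap://127.0.0.1:1344/request
icap_class post2get_class post2get_req
icap_access post2get_class allow all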


[squid-users] reqmod_precache and respmod_postcache is not yet implemented in SQUID3stable8 ?

2008-07-24 Thread johan firdianto
dear guys,

I was reading the release notes of squid-3.0stable8;
there's a note in the New Tags section, in the ICAP subsection:
Note: reqmod_precache and respmod_postcache is not yet implemented

But in the example ICAP configuration (in the Major new features section),
it looks like they are already implemented:

icap_enable on
icap_service service_req reqmod_precache 1 icap://127.0.0.1:1344/request
icap_service service_resp respmod_precache 0 icap://127.0.0.1:1344/response

Which is correct?
Thanks.

Johan


Re: [squid-users] Squid connections

2008-07-23 Thread johan firdianto
Try copy-pasting the output of squidclient mgr:info (a couple of quick checks are sketched below).
The possible causes are:
1. out of file descriptors
2. low average/median response times for cache misses and cache hits
3. your cache_dir is fully utilized, so squid does many RELEASE and
SWAPOUT operations on objects.
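
Roughly, those checks could look like this (host and port are placeholders for
wherever your cache manager interface is listening):

# file descriptor limits/usage and cache_dir (swap) utilization
squidclient -h 127.0.0.1 -p 3128 mgr:info | egrep -i 'file descr|storage swap'

# median service times for hits, misses and near-hits
squidclient -h 127.0.0.1 -p 3128 mgr:info | grep -i -A 8 'median service'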


On Wed, Jul 23, 2008 at 10:43 AM, Marcos Dutra [EMAIL PROTECTED] wrote:
 Hi people

 I have a server with Red Hat Enterprise 5, a 3.06 GHz Xeon with 4 processors, 8GB
 of RAM and a 160GB SAS disk. I ran the command netstat -an
 | grep 3128 | wc on linux, and when it reaches around 2500 connections, it gets very slow.
 How can I improve performance? I changed from diskd to aufs and
 did not get any improvement, and I also switched from the RPM binary to one compiled from
 source.
 Details: I would like to put about 3000 - 4000 users on this server.

 Thanks for advice
 Marcos



[squid-users] how to cache POST request response ?

2008-07-21 Thread johan firdianto
Does anyone here know how to cache the response of a POST request?
Some popular filesharing sites use the POST method to download files.
I checked in store.log, and there is no SWAPOUT for POST requests.
Here is my squid.conf:

acl rapid dstdomain .rapidshare.com
acl POST method POST
cache allow rapid POST
http_access allow POST
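
To check whether squid ever serves such a POST response from cache, a test along
these lines might help; the proxy address, URL and form data below are made up:

# send the same POST twice through the proxy and compare the X-Cache header
curl -s -D - -o /dev/null -x http://127.0.0.1:3128 \
  -d 'dl.start=Free' http://rs123.rapidshare.com/files/0000000/example.zip | grep -i x-cache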


Johan


[squid-users] could reverse proxy cache POST request

2008-07-21 Thread johan firdianto
hi all,

Can a reverse proxy cache a POST request/response?
Thanks.

Johan