Re: [squid-users] ROCK store and UFS (Squid 3.2.3)

2012-11-27 Thread Horacio H.
Hi,

Amos, thanks for your reply.  I'll test the patch and use
memory_cache_shared set to OFF.

Sorry, I was wrong. Objects bigger than maximum_object_size_in_memory
are not cached on disk. And although objects smaller than
maximum_object_size_in_memory but bigger than 32KB were written to
disk, I guess the HITs they got came from memory, because Squid keeps
a copy of hot and in-transit objects there. That would explain why the
UFS store was ignored when Squid was restarted.
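
For reference, here is a minimal sketch of the directives involved
(the values are placeholders, not my real config):

  maximum_object_size_in_memory 512 KB
  memory_cache_shared off
  cache_dir rock /var/cache1 1000 max-size=16384
  cache_dir  ufs /var/cache2 2000 16 256 min-size=16384

A memory hit is logged as TCP_MEM_HIT in access.log while a disk hit
is a plain TCP_HIT, so the log codes can confirm where a copy came
from.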

Thanks.


[squid-users] ROCK store and UFS (Squid 3.2.3)

2012-11-26 Thread Horacio H.
Hi,

I'm testing Squid 3.2.3 and wanted to use a ROCK store combined with
UFS or AUFS. Yes, I know it's not currently supported
(http://wiki.squid-cache.org/Features/RockStore#limitations), but I
did some tests anyway (yes, I had forgotten about that limitation).

Though doomed to failure, I added these two lines to Squid's default configuration:

  cache_dir rock /var/cache1 1000 max-size=16384
  cache_dir  ufs /var/cache2 2000 16 256 min-size=16384

When both lines were present, objects bigger than 32KB were not cached
at all (neither in memory nor on disk). Of course, when the ROCK
cache_dir was not present, objects bigger than 16KB were cached on
disk in the UFS store as expected.

After a few tries, I inverted the order of the lines and increased the
max-size of the rock store, like this:

  cache_dir  ufs /var/cache2 2000 16 256 min-size=16384
  cache_dir rock /var/cache1 1000 max-size=1048576

Surprisingly, objects bigger than 32KB (up to max-size) were stored on
disk and got a TCP_HIT when retrieved. Unfortunately, if the Squid
process was stopped and restarted, those objects were retrieved from
the source again (i.e. the UFS storage was ignored or corrupted).
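
A rough way to check where a hit comes from, assuming Squid listens on
127.0.0.1:3128 and the URL is just an example:

   # fetch through the proxy and watch the X-Cache header (HIT/MISS)
   curl -s -o /dev/null -D - -x 127.0.0.1:3128 http://example.com/file.bin | grep X-Cache
   # restart Squid, then repeat; a MISS now means the disk copy was not used
   squid -k shutdown
   squid
   curl -s -o /dev/null -D - -x 127.0.0.1:3128 http://example.com/file.bin | grep X-Cache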

That's when I guessed the rock store was not supposed to work that
way, so I hope this little bit of information helps advance the
integration with UFS...

Thanks.


Re: [squid-users] WCCP transparent proxy

2011-10-05 Thread Horacio H.
Hi,

You're missing a few things. Please review the FAQ again; here are some hints:

1) Make sure there are no firewalls between your Squid and router (WCCP).

2) Make sure the GRE module is loaded:

   modprobe ip_gre
   echo ip_gre >> /etc/modules

3) Create a GRE interface:

   ip tunnel add gre1 mode gre local squid-ip-address
   ip addr add squid-ip-address/32 dev gre1
   ip link set gre1 up

4) Add a redirect rule in iptables:

   iptables -t nat -A PREROUTING -i gre1 -p tcp --dport 80 -j REDIRECT --to-ports squid-listening-port

5) Make sure Squid was compiled with WCCP-v2 support.

6) WCCP-v2 Squid configuration (a fuller sketch follows this list):

   wccp2_router router-ip-address

7) WCCP-v2 router's configuration:

   access-list 160 deny   ip  host squid-ip-address any
   access-list 160 permit tcp client-network wildcard-mask any eq 80

   ip wccp version 2
   ip wccp web-cache redirect-list 160

   interface FastEthernet0/0
   ip wccp web-cache redirect in
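
For completeness, a fuller Squid-side sketch might look like this (GRE
forwarding assumed; "standard 0" is the web-cache service group; older
Squid versions spell the methods as numbers, 1 meaning GRE):

   wccp2_router router-ip-address
   wccp2_forwarding_method gre
   wccp2_return_method gre
   wccp2_service standard 0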

Regards,
Horacio.


Re: [squid-users] url-rewrite PHP script issue under Ubuntu 10.04

2010-05-31 Thread Horacio H.
Hi!

Thanks Alexandre and Amos for your replies; together they pointed me
in the right direction!

Based on the URLs sent by Alexandre, I edited the
/etc/php5/cli/php.ini file and tested different values for
max_execution_time and max_input_time, but neither changed the PHP
script's behavior. Then I remembered Amos mentioned a 60-second
timeout. I looked at my cache.log and, yes, there was an exactly
60-second delay between starting Squid and the first warning. So I
searched php.ini for a similar value and found this directive:
default_socket_timeout. I changed it to 300 seconds and the warnings
started to show up accordingly. Then I changed its value to -1 and
the warnings haven't shown up again!

Squid doesn't complain about my PHP scripts anymore, but I don't know
if this change has side effects or other consequences. I'll keep
monitoring it, but in any case I have the backup Perl scripts.
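
In case someone else hits this, the line I ended up with in
/etc/php5/cli/php.ini was simply:

   ; -1 seems to disable the timeout on stream reads
   ; (the helper blocks on fgets(STDIN) between requests)
   default_socket_timeout = -1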

Thanks again!


[squid-users] url-rewrite PHP script issue under Ubuntu 10.04

2010-05-25 Thread Horacio H.
Hi !

I was wondering if someone else has noticed similar behavior:

I wrote a URL-rewrite script in PHP as explained at
http://wiki.squid-cache.org/ConfigExamples/PhpRedirectors. The script
was running without complaints under Squid 2.7.STABLE9 and Ubuntu
9.04; then I upgraded Ubuntu to 10.04 and warning messages started to
show up:

2010/05/15 16:48:28| WARNING: url_rewriter #XX (FD XX) exited  
(repeat n-times)
2010/05/15 16:48:28| Too few url_rewriter processes are running
2010/05/15 16:48:28| Starting new helpers

Things I've tried to solve the issue without success:

- Simplified the PHP script to the minimum (finally just using the
wiki's example).
- A clean installation of Ubuntu 10.04.
- Downgraded PHP package from 5.3 to 5.2.
- Recompiled Squid (just in case).

Perl scripts are not affected, so I rewrote/translated the script. The
service is up again, but a big question mark was left hanging over my
head.

I know it's not a Squid issue per se, but at least the wiki may need
to be updated before other people get stuck at this point...

Thanks for reading.

---
squid.conf:
---
url_rewrite_program  /etc/squid/phpredir
url_rewrite_children 32

-
phpredir:
-
#!/usr/bin/php
<?php
# Read one request line at a time from Squid and echo back the first
# field (the URL) unchanged.
$temp = array();
while ( $input = fgets(STDIN) ) {
 $temp = split(' ', $input);
 $output = $temp[0] . "\n";
 echo $output;
}
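
A quick way to test the helper outside Squid is to feed it a line by
hand (the fields after the URL mimic what Squid sends to rewriters;
the values here are made up):

   echo "http://example.com/ 192.168.0.1/- - GET -" | /etc/squid/phpredir

It should print the URL back unchanged.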


Re: [squid-users] [Urgent] Please help : NAT + squid2.7 on ubuntu server 9.10 + cisco firewall (ASA5510)

2010-04-12 Thread Horacio H.
2010/4/8 Vichao Saenghiranwathana vich...@gmail.com:

 I still stunned. Can you explain more in deeper detail so I can
 understand what the problem is.


Hi Vichao,

If you already have a static NAT translation on the ASA between these
two addresses (192.168.9.251 and 203.130.133.9), it doesn't make sense
to me that you also configured the same public IP address on the
second subinterface. Unless you need it for an unrelated setup, you
may want to remove the second subinterface, because (if you also
configured a default gateway there) packets from outside destined to
203.130.133.9 might cause the ASA to NAT packets that shouldn't be
NATed, or vice versa.

Aside from that, if the issue persists, your next clue lies in
collecting all the info your ASA shows about the WCCP
association/registration, and in monitoring the counters of the GRE
tunnel and of the active iptables rules and default policies.
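
Concretely, these are the kinds of commands I would start with (gre1
being the tunnel interface from the usual Linux-side setup):

On the ASA, something like (IOS routers use "show ip wccp" instead):

   show wccp
   show wccp web-cache

On the Squid box:

   ip -s link show gre1        # GRE tunnel packet counters
   iptables -t nat -L -v -n    # rule hit counters and default policies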

I hope this comment was helpful. I have a similar setup and it works fine.

Regards,
Horacio.


Re: [squid-users] YouTube and other streaming media (caching)

2008-11-03 Thread Horacio H.
Hi everybody,

regarding this issue:

http://wiki.squid-cache.org/WikiSandBox/Discussion/YoutubeCaching

I came up with a workaround: a rewriter script in PHP (sorry, I'm not
good at Perl, but maybe someone will be kind enough to share a
translated version later... jeje).

NOTE 1: Use this script for testing purposes only; it may not work as
expected... I've tested it with only a very few URLs... If you can
improve it, please share.

NOTE 2: To use this script you need the PHP command-line interface. On
Ubuntu you can install it with this command:

sudo apt-get install php5-cli

NOTE 3: Make sure the log file is writable by the script.

And now the script:

#!/usr/bin/php -q
<?php
#
# 2008-11-03 : v1.3 : Horacio H.
#

 ## Open log file ##

 $log = fopen('/var/squid/logs/rewriter.log','a+');

 ## Main loop ##

 while ( $X = fgets(STDIN) ) {

   $X = trim($X);

   $lin = split(' ', $X);

   $url = $lin[0];

   ## Keep $rep defined for non-video URLs (used in the log line below) ##

   $rep = array('');

   ## This section is for rewriting the store-URL of YT & GG videos ##

   if ( preg_match('@^http://[^/]+/(get_video|videodownload|videoplayback)\?@',$url) ) {

     ## Get reply headers ##

     $rep = get_headers($url);

     ## If the reply is a redirect, make its store-URL unique to avoid
     ## matching the store-URL of a video ##

     $rnd = "";

     if ( preg_match('/ 30[123] /',$rep[0]) ) {

       $rnd = "REDIR=" . rand(1,9);

     }

     $url = preg_replace('@.*id=([^&]*).*$@',"http://videos.SQUIDINTERNAL/ID=$1$rnd",$url);

   }

   ## Return the rewritten URL ##

   print $url . "\n";

   ## Record what we did in the log ##

   fwrite($log,"$url $rep[0]\n");

   ## May do some good, but I'm not sure ##

   flush();

 }

 fclose($log);

?>
## END OF SCRIPT ##
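
For anyone wanting to try it: the script targets Squid 2.7's store-URL
rewriting feature, so the wiring in squid.conf would be roughly like
this (the path and children count are placeholders):

   storeurl_rewrite_program /etc/squid/rewriter.php
   storeurl_rewrite_children 10
   acl store_rewrite_list urlpath_regex \/(get_video|videodownload|videoplayback)\?
   storeurl_access allow store_rewrite_list
   storeurl_access deny all

plus a refresh_pattern for videos.SQUIDINTERNAL so the rewritten
objects actually stay in the cache.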

The trick here is detecting whether the reply is a redirect (301, 302
or 303) with the get_headers function. It would be nice if the Squid
process passed the HTTP status to the script, maybe as a key=value
pair, but I'm not even a programmer, so that is way beyond my
knowledge...

Regards,

Horacio H.