Re: [squid-users] squid speedup to client using TCP fast start?

2010-11-26 Thread Amos Jeffries

On 27/11/10 19:26, Linda Walsh wrote:



There's an article pointed to by slashdot @
http://blog.benstrong.com/2010/11/google-and-microsoft-cheat-on-slow.html
where the author found that instead of a slow start of sending a packet
or two and waiting for an ACK, some sites like Google and Microsoft
optimize for their initial web page display by NOT "slow starting" and
sending 4 or more packets w/o waiting for the initial ACK. This gives
them a LARGE boost for that initial page -- and would for any initial
start of a TCP connection.

Since many connections to squid are small TCP sessions, it seems


This statement is no longer certain. HTTP/1.1 defaults to longer-lived 
connections than HTTP/1.0, precisely to avoid these TCP delays.



eliminating the 'slow start' might provide a significant boost when
loading pages with browsers that use many small TCP connections.


Browsers open a maximum of about 10 connections. With the older HTTP/1.0 
Squid this could result in lag as these few connections were opened and 
closed on the same client-proxy link (avoided by turning 
client_persistent_connections ON).
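
For reference, the relevant squid.conf directives (a sketch; both exist in 
2.x and 3.x, defaults vary by version):

  client_persistent_connections on
  server_persistent_connections on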




Is this something the squid designers have given any consideration to
for inclusion as an option -- is it something that could be done when
setting up a connection to a remote server? I.e. when 'fetching', is it
possible to set the initial window 'larger' -- since most of the benefit
comes from using a larger window where the RTT is 'large' (~>30ms).


You had best ask those designers... over on the squid-dev mailing list 
where they hang out. I'm the only dev who reads this list regularly, and 
any such decisions were made well before my time in the project.




If the RTT times between the squid cache and the client are very low
already, then the benefit wouldn't be as noticeable.

Some research papers from presentations on the benefit of increasing the
initial startup window:
Abstract:
http://www.google.com/research/pubs/pub36640.html
Full Paper:
http://www.isi.edu/lsam/publications/phttp_tcp_interactions/node2.html

Some simulations from 1998 on the value of increasing the initial TCP
window size:
http://www.rfc-archive.org/getrfc.php?rfc=2414

Apparently, a patch may be necessary to give applications control over
this; such a patch was shown here:
http://www.amailbox.org/mailarchive/linux-netdev/2010/5/26/6278007



ICMP, MTU discovery, ECN and window scaling have only partial support 
across the IPv4 Internet. When they work, things go great. Squid leaves 
most of this to the underlying system settings for tuning. Several things, 
such as buffers, are handled dynamically from OS-provided information at 
runtime.
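
On Linux, for example, the initial congestion window can be raised 
per-route with iproute2, entirely outside Squid. A sketch, assuming a 
kernel that honors the initcwnd route attribute; the gateway address and 
device here are placeholders:

  # raise the initial congestion window to 10 segments on the default route
  ip route change default via 192.0.2.1 dev eth0 initcwnd 10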



-


Another method of speeding up web page delivery has to do with HTTP
pipelining -- however, some paper (forget the link; had too many open and
then the window crashed... ARG) said this wasn't widely used due to
problems with proxy support.

Doesn't (or does?) squid support this?


Squid has supported pipelining since 2.5 or earlier.

The design flaw with pipelines is that if the connection closes for any 
reason the entire set of requests is lost and has to be re-sent by the 
browser. There are a great many reasons for closing a TCP link in 
HTTP/1.0, dynamic content of unknown length being the overwhelming one. So 
HTTP/1.1 support is essentially a requirement for a reliable pipeline.
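
For reference, the behaviour is toggled in squid.conf; in the 2.x/3.1 era 
this was a simple boolean (a sketch, check your version's documentation):

  pipeline_prefetch on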


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


[squid-users] squid speedup to client using TCP fast start?

2010-11-26 Thread Linda Walsh



There's an article pointed to by slashdot @ 
http://blog.benstrong.com/2010/11/google-and-microsoft-cheat-on-slow.html
where the author found that instead of a slow start of sending a packet 
or two and waiting for an ACK, some sites like Google and Microsoft
optimize for their initial web page display by NOT "slow starting" and 
sending 4 or more packets w/o waiting for the initial ACK.  This gives 
them a LARGE boost for that initial page -- and would for any initial 
start of a TCP connection.


Since many connections to squid are small TCP sessions, it seems 
eliminating the 'slow start' might provide a significant boost when 
loading pages with browsers that use many small TCP connections.


Is this something the squid designers have given any consideration to 
for inclusion as an option -- is it something that could be done when 
setting up a connection to a remote server? I.e. when 'fetching', is it 
possible to set the initial window 'larger' -- since most of the benefit 
comes from using a larger window where the RTT is 'large' (~>30ms).


If the RTT times between the squid cache and the client are very low 
already, then the benefit wouldn't be as noticeable.


Some research papers from presentations on the benefit of increasing the 
initial startup window:

Abstract:
  http://www.google.com/research/pubs/pub36640.html
Full Paper:
  http://www.isi.edu/lsam/publications/phttp_tcp_interactions/node2.html

Some simulations from 1998 on the value of increasing the initial TCP 
window size:

  http://www.rfc-archive.org/getrfc.php?rfc=2414

Apparently, a patch may be necessary to give applications control over 
this; such a patch was shown here:

   http://www.amailbox.org/mailarchive/linux-netdev/2010/5/26/6278007

-


Another method of speeding up web page delivery has to do with HTTP
pipelining -- however, some paper (forget the link; had too many open and 
then the window crashed... ARG) said this wasn't widely used due to 
problems with proxy support.


Doesn't (or does?) squid support this?

Just some general questions...

Thanks!
Linda




Re: [squid-users] squid cache not updating?

2010-11-26 Thread Amos Jeffries

On 25/11/10 22:14, J Webster wrote:

I have my cache mounted on a drive at /var/spool/squid.
The other day I tried to mount a new folder on the same drive, which
is apparently not the best thing to do.
Since then, I am not sure if my squid cache is updating or not. It seems
to be stuck at 35Gb use and 16% capacity.
Is there any way to check if the cache is updating?


* the cachemgr interface provides some overviews of the storage content.
  squidclient mgr:store_io
  squidclient mgr:utilization (the syscall.disk.* and swap.* entries)

* Or enable the cache_store_log for a while and check the SWAPOUT / 
RELEASE. If all requests are doing RELEASE immediately then no new 
objects are accumulating in the cache.
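
A minimal sketch of that check, assuming logs live under /var/log/squid 
(paths vary by build):

  # squid.conf: enable the store log temporarily
  cache_store_log /var/log/squid/store.log

  # after some traffic, compare the counts; lots of RELEASE and no SWAPOUT
  # means no new objects are being cached
  grep -c SWAPOUT /var/log/squid/store.log
  grep -c RELEASE /var/log/squid/store.log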


The worst-case scenario is the easiest to fix. Erase the content of the 
folder and run squid -z again to rebuild the cache structure empty. 
Bandwidth will be impacted for a while afterwards, returning to normal as 
the cache re-fills.
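
A sketch of that rebuild, assuming the cache_dir is /var/spool/squid and 
squid is on the PATH:

  squid -k shutdown
  rm -rf /var/spool/squid/*
  squid -z     # recreate the empty cache structure
  squid        # start again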


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


Re: [squid-users] tproxy

2010-11-26 Thread Amos Jeffries

On 27/11/10 00:53, jiluspo wrote:

Would it be possible to run tproxy on a single ethernet, with the
gateway, squid box, and clients on the same subnet (squid box as gateway)?


It could be difficult at best. You cannot rely on any IP-level 
networking mechanisms to get the packet handling right.


The ideal TPROXY setup works with two interfaces, using TCP socket 
numbers and interface MAC addresses to pass packets around instead of IP 
address and port.



I'm trying to run tproxy in the lab on ubuntu 10.04; I don't know what else is


I've had mixed reports for Ubuntu TPROXY support. The cause of the 
failure reports has not been clear.



missing/wrong. squidbox as gateway works fine without tproxy.
These private IPs would be replaced with public IPs in production.

squid box runs as gateway single ethernet.
squidbox:
gateway 192.168.0.254
ip 192.168.0.123

client:
gateway 192.168.0.123
ip 192.168.0.197

r...@ubuntu:~# uname -r
2.6.32-25-generic-pae

cat /boot/config-`uname -r` | grep -E
'(NF_CONNTRACK=|TPROXY|XT_MATCH_SOCKET|XT_TARGET_TPROXY)'
CONFIG_NF_CONNTRACK=m
CONFIG_NETFILTER_TPROXY=m
CONFIG_NETFILTER_XT_TARGET_TPROXY=m
CONFIG_NETFILTER_XT_MATCH_SOCKET=m

iptables v1.4.4

libcap-dev 1:2.17-2ubuntu1
libcap2 1:2.17-2ubuntu1

sysctl.conf
net.ipv4.ip_forward=1
net.ipv4.conf.lo.rp_filter=0


Some OSes have to have these set for the "all" pseudo-interface as well as 
each individual ethN. I'm still trying to figure out the logic behind that.
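
In sysctl.conf terms that means something like the following, with eth0 
standing in for each real interface (an illustration, not a tested recipe):

  net.ipv4.conf.all.rp_filter=0
  net.ipv4.conf.eth0.rp_filter=0
  net.ipv4.conf.lo.rp_filter=0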


In those cases there also needs to be a table 100 created for the public 
interfaces.




/tproxy script:
{{{
#!/bin/sh
ip rule del fwmark 1 lookup 100
ip route del local 0.0.0.0/0 dev lo table 100


If the above lines are doing anything, the script is breaking something.
A very important MUST when setting TPROXY up is that the table number 
does not clash with, or get shared by, any other feature on the system.

The "100" here is an arbitrary number you can change as needed.


iptables -F
iptables -F -t mangle
iptables -F -t nat

iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT

iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY
--tproxy-mark 0x1/0x1 --on-port 3129

ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
}}}

sysctl.conf:
net.ipv4.ip_forward=1
net.ipv4.conf.lo.rp_filter=0

r...@ubuntu:~# squid -v
Squid Cache: Version 3.1.9
configure options: '--prefix=/usr' '--localstatedir=/var'
'--libexecdir=${prefix}/lib/squid' '--srcdir=.'
'--datadir=${prefix}/share/squid' '--sysconfdir=/etc/squid'
'--enable-async-io' '--with-pthreads' '--enable-storeio=aufs'
'--enable-epoll' '--enable-removal-policies=lru,heap' '--enable-snmp'
'--enable-linux-netfilter' '--with-large-files'
--with-squid=/root/squid-3.1.9

squid.conf has
http_port 3129 tproxy






--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


Re: [squid-users] squid-3.1 client POST buffering

2010-11-26 Thread Amos Jeffries

On 27/11/10 05:10, Graham Keeling wrote:

On Thu, Nov 25, 2010 at 04:36:49PM +, Graham Keeling wrote:

Hello,

I have upgraded to squid-3.1 recently, and found a change of behaviour.
I have been using dansguardian in front of squid.

It appears to be because squid now buffers uploaded POST data slightly
differently.
In versions < 3.1, it would take some data, send it through to the website,
and then ask for some more.
In the 3.1 version, it appears to take as much from the client as it can without
waiting for what it has already got to be uploaded to the website.

This means that dansguardian quickly uploads all the data into squid, and
then waits for a reply, which is a long time in coming because squid still
has to upload everything to the website.
And then dansguardian times out on squid after two minutes.


I noticed the following squid configuration option. Perhaps what I need is
a similar thing for buffering data sent from the client.

#  TAG: read_ahead_gap  buffer-size
#   The amount of data the cache will buffer ahead of what has been
#   sent to the client when retrieving an object from another server.
#Default:
# read_ahead_gap 16 KB

Comments welcome!

Graham.



Upon further experimentation, I have found that squid-3.1.x (specifically,
I have tried squid-3.1.8 and squid-3.1.9) behaves very badly with POST uploads.

It just increases the input buffer forever, until the upload is finished, or
the machine runs out of memory.

This problem exists when connecting directly to squid without dansguardian
in the way.

This problem doesn't exist on my old squid-2.5 installation.



The buffer max is 64KB. I'm thinking this is related to 
http://bugs.squid-cache.org/show_bug.cgi?id=2910


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


Re: [squid-users] SAMBAPREFIX

2010-11-26 Thread Amos Jeffries

On 27/11/10 00:20, Helmut Hullen wrote:

Hallo, Amos,

Du meintest am 27.11.10:


it would be nice if "SAMBAPREFIX" is not hard coded in "helpers/
basic_auth/SMB/Makefile.in" but can be defined as a "configure"
option.




You will probably be wanting to tell this to the developers
(squid-dev mailing list) rather than your fellow admins.
http://bugs.squid-cache.org/show_bug.cgi?id=2959


Reported:   2010-06-18 04:35 MDT by Helmut Hullen

Viele Gruesse!
Helmut


My point being that the devs do not read this mailing list. If you want 
things changed in the code, this is one of the worst places to mention it.
Bugzilla was a good start; a reminder to squid-dev may make one of the 
others try a fix.
 Or you could try your hand at making a patch for configure.in and the 
relevant Makefile.am? A rough sketch follows.
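
An untested sketch of the configure.in side (the option name and default 
path here are invented for illustration):

  AC_ARG_WITH([samba-prefix],
    [AS_HELP_STRING([--with-samba-prefix=PATH], [prefix of the Samba install])],
    [SAMBAPREFIX=$withval],
    [SAMBAPREFIX=/usr/local/samba])
  AC_SUBST([SAMBAPREFIX])

helpers/basic_auth/SMB/Makefile.am would then reference @SAMBAPREFIX@ 
instead of the hard coded value.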


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


Re: [squid-users] Problems with hotmail and facebook - rev

2010-11-26 Thread Amos Jeffries

On 27/11/10 02:35, Landy Landy wrote:

Sorry if you receive this message twice, but my yahoo is acting up again.

After a while looking for solutions to this problem I still haven't 
resolved it. I added an extra dsl line to our network and things are going 
the same way. I also tried another mailing list, posted on WISPA, and got 
this response:

"Could be your squid cache."

Someone replied to that with:

"Agreed, everyone gets different photo and messages depending who their
associated to. it would probably drive the squid nuts, especially when FB is
busy and slow and squid is trying to compare.  "

I don't know if that is true, but I would like to confirm with this list 
before acknowledging it.


That second reply appears to contain no information relevant to the 
problem. Content requests are only related after a successful login.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


RE: [squid-users] Beta testers wanted for 3.2.0.1 - Changing 'workers' (from 1 to 2) is not supported and ignored

2010-11-26 Thread Ming Fu
Ktrace showed that the bind failed because it tried to open a unix socket 
in /usr/local/squid/var/run and did not have permission. So it was easy to 
fix.
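
For the record, the fix was along these lines (assuming the effective user 
from cache_effective_user is named squid):

  chown squid /usr/local/squid/var/run
  chmod 755 /usr/local/squid/var/run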

After the permission was corrected, I ran into another problem; here is 
the log snippet:

2010/11/26 20:55:35 kid2| Starting Squid Cache version 3.2.0.3 for 
amd64-unknown-freebsd8.1...
2010/11/26 20:55:35 kid3| Starting Squid Cache version 3.2.0.3 for 
amd64-unknown-freebsd8.1...
2010/11/26 20:55:35 kid1| Starting Squid Cache version 3.2.0.3 for 
amd64-unknown-freebsd8.1...
2010/11/26 20:55:35 kid3| Set Current Directory to /usr/local/squid/var/cache
2010/11/26 20:55:35 kid2| Set Current Directory to /usr/local/squid/var/cache
2010/11/26 20:55:35 kid1| Set Current Directory to /usr/local/squid/var/cache
FATAL: commonUfsDirCloseTmpSwapLog: rename failed
Squid Cache (Version 3.2.0.3): Terminated abnormally.
CPU Usage: 0.043 seconds = 0.000 user + 0.043 sys
Maximum Resident Size: 10416 KB
Page faults with physical i/o: 0
2010/11/26 20:55:38 kid1| Starting Squid Cache version 3.2.0.3 for 
amd64-unknown-freebsd8.1...
2010/11/26 20:55:38 kid1| Set Current Directory to /usr/local/squid/var/cache
FATAL: kid2 registration timed out
Squid Cache (Version 3.2.0.3): Terminated abnormally.
CPU Usage: 0.041 seconds = 0.010 user + 0.031 sys
Maximum Resident Size: 10324 KB
Page faults with physical i/o: 0
2010/11/26 20:55:46 kid2| Starting Squid Cache version 3.2.0.3 for 
amd64-unknown-freebsd8.1...
2010/11/26 20:55:47 kid2| Set Current Directory to /usr/local/squid/var/cache
FATAL: kid1 registration timed out
Squid Cache (Version 3.2.0.3): Terminated abnormally.
===

Here is the trace log for the error 
==
35092 initial thread CALL  rename(0x80283f460,0x80283f430)
 35092 initial thread NAMI  "/usr/local/squid/var/cache/swap.state.new"
 35092 initial thread RET   rename -1 errno 2 No such file or directory
 35092 initial thread CALL  setgroups(0x1,0x89ccac)
 35092 initial thread RET   setgroups -1 errno 1 Operation not permitted
 35092 initial thread CALL  setgid(0)
 35092 initial thread RET   setgid 0
 35092 initial thread CALL  geteuid
 35092 initial thread RET   geteuid 65534/0xfffe
 35092 initial thread CALL  clock_gettime(0xd,0x7fffd980)
 35092 initial thread RET   clock_gettime 0
 35092 initial thread CALL  socket(PF_LOCAL,SOCK_DGRAM,0)
 35092 initial thread RET   socket 12/0xc
 35092 initial thread CALL  fcntl(0xc,F_SETFD,FD_CLOEXEC)
 35092 initial thread RET   fcntl 0
 35092 initial thread CALL  connect(0xc,0x7fffd8f0,0x6a)
 35092 initial thread STRU  struct sockaddr { AF_LOCAL, /var/run/logpriv }
 35092 initial thread NAMI  "/var/run/logpriv"
 35092 initial thread RET   connect -1 errno 13 Permission denied
 35092 initial thread CALL  connect(0xc,0x7fffd8f0,0x6a)
 35092 initial thread STRU  struct sockaddr { AF_LOCAL, /var/run/log }
 35092 initial thread NAMI  "/var/run/log"
 35092 initial thread RET   connect 0
 35092 initial thread CALL  sendto(0xc,0x7fffda10,0x48,0,0,0)
 35092 initial thread GIO   fd 12 wrote 72 bytes
   "<9>Nov 26 20:55:35 (squid-1): commonUfsDirCloseTmpSwapLog: rename 
failed"
=

What is squid trying to do here?

Also I was wondering: if I run 2 workers, should I see two cache 
directories, one for each worker?
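
I wondered if something like the 3.2 ${process_number} macro is the 
intended way to give each worker its own directory, e.g. (untested, paths 
assumed):

  cache_dir ufs /usr/local/squid/var/cache/${process_number} 10000 16 256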

Ming

-Original Message-
From: Ming Fu [mailto:ming...@watchguard.com] 
Sent: November-22-10 2:55 PM
To: squid-users@squid-cache.org; Squid Developers
Subject: RE: [squid-users] Beta testers wanted for 3.2.0.1 - Changing 'workers' 
(from 1 to 2) is not supported and ignored

Hi Amos,

Is there any news for this problem. I tested squid 3.2.0.3. The problem is 
still there. I am using FreeBSD 8.1.

Regards,
Ming

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: August-04-10 9:56 AM
To: squid-users@squid-cache.org; Squid Developers
Subject: Re: [squid-users] Beta testers wanted for 3.2.0.1 - Changing 'workers' 
(from 1 to 2) is not supported and ignored

Zeller, Jan (ID) wrote:
>> It looks like that message only occurs on a reconfigure. Does "-k
>> restart"
>> after the config change work?
>>
>> Amos
> 
> hmm the change applies once squid is restarted but now I am getting :
> 
> 010/08/04 08:21:20 kid3| commBind: Cannot bind socket FD 12 to [::]: (13) 
> Permission denied
> .
> .
> 
> squid is running as 
> 
> cache_effective_user  proxy
> cache_effective_group proxy
> 
> squid processes are running but no listening port. Any clue why this happens 
> ? 

Nothing I know about should lead to a kidN using bind on [::] or 0.0.0.0.

Maybe Alex has a clue.

cc'ing to squid-dev where beta release problems really need to be 
discussed. Please followup there.

Amos
-- 
Please be using
   Current Stable Squid 2.7.STABLE9 or 3.1.9
   Beta testers wanted for 3.2.0.3

Re: [squid-users] squid-3.1 client POST buffering

2010-11-26 Thread Graham Keeling
On Thu, Nov 25, 2010 at 04:36:49PM +, Graham Keeling wrote:
> Hello,
> 
> I have upgraded to squid-3.1 recently, and found a change of behaviour.
> I have been using dansguardian in front of squid.
> 
> It appears to be because squid now buffers uploaded POST data slightly
> differently.
> In versions < 3.1, it would take some data, send it through to the website,
> and then ask for some more.
> In the 3.1 version, it appears to take as much from the client as it can without
> waiting for what it has already got to be uploaded to the website.
> 
> This means that dansguardian quickly uploads all the data into squid, and
> then waits for a reply, which is a long time in coming because squid still
> has to upload everything to the website.
> And then dansguardian times out on squid after two minutes.
> 
> 
> I noticed the following squid configuration option. Perhaps what I need is
> a similar thing for buffering data sent from the client.
> 
> #  TAG: read_ahead_gap  buffer-size
> #   The amount of data the cache will buffer ahead of what has been
> #   sent to the client when retrieving an object from another server.
> #Default:
> # read_ahead_gap 16 KB
> 
> Comments welcome!
> 
> Graham.


Upon further experimentation, I have found that squid-3.1.x (specifically,
I have tried squid-3.1.8 and squid-3.1.9) behaves very badly with POST uploads.

It just increases the input buffer forever, until the upload is finished, or
the machine runs out of memory.

This problem exists when connecting directly to squid without dansguardian
in the way.

This problem doesn't exist on my old squid-2.5 installation.



Re: [squid-users] Problems with hotmail and facebook - rev

2010-11-26 Thread Landy Landy
Sorry if you receive this message twice, but my yahoo is acting up again.

After a while looking for solutions to this problem I still haven't 
resolved it. I added an extra dsl line to our network and things are going 
the same way. I also tried another mailing list, posted on WISPA, and got 
this response:

"Could be your squid cache. "

Someone replied to that with:

"Agreed, everyone gets different photo and messages depending who their
associated to. it would probably drive the squid nuts, especially when FB is
busy and slow and squid is trying to compare.  "

I don't know if that is true, but I would like to confirm with this list 
before acknowledging it.

--- On Mon, 11/15/10, Amos Jeffries  wrote:

> From: Amos Jeffries 
> Subject: Re: [squid-users] Problems with hotmail and facebook - rev
> To: "Landy Landy" 
> Cc: squid-users@squid-cache.org
> Date: Monday, November 15, 2010, 5:00 PM
> On Mon, 15 Nov 2010 06:25:10 -0800
> (PST), Landy Landy
> 
> wrote:
> > --- On Mon, 11/15/10, Landy Landy 
> wrote:
> 
> > 
> > Just discovered another site I can't log on to. Is my
> bank's website.
> > Looks like theres a problem with https and squid I
> can't discover.
> > 
> > Sorry to insist on this issue but, please understand
> my frustration.
> > 
> > Thanks.
> 
> I understand. It is one of the built-in problems with NAT
> interception.
> The IPs change. Websites that depend on IP will break.
> 
> I think you need to give TPROXY a try. It does everything
> that NAT does
> without this IP change.
> 
> Amos
> 
> 


  


Re: [squid-users] STDERR is closed? So no std::cerr?

2010-11-26 Thread declanw
"dying from an unhandled exception: !theConsumer"

Hurrah! Caught the STDERR message via non-daemonised mode!
Now I just have to find out what that means :)

On Thu, Nov 25, 2010 at 08:39:05AM +, decl...@is.bbc.co.uk wrote:
> On Thu, Nov 25, 2010 at 12:27:50AM +, Amos Jeffries wrote:
> > On Wed, 24 Nov 2010 13:26:03 +, Declan White 
> > wrote:
> > > I've got some 'uncaught exception' coredumping squids which are leaving no
> > > clues about their deaths.
> > > They are *meant* to be sending an SOS via:
> > > 
> > > main.cc:1162:std::cerr << "dying from an unhandled exception: " <<
> > > e.what() << std::endl;
> > > 
> > > but std::cerr isn't the cache_log is it. It's STDERR, aka FD 2.
[...]
> > hmm, how many and what particular processes are running? which particular
> > sub-process(es) is this happening to? how are you starting squid? etc. etc.
> > 
> > For background, by default only the master process uses stderr as itself.
> > All sub-processes have their stderr redirected to cache.log.
> 
> It looks like it's decided by whether or not you use the -N non-daemonise
> startup flag. The auth sub processes always have STDERR correctly redirected
> to cache_log, but without -N, the worker squid in the squid/root-squid pair
> leaves no STDERR open for itself.
> 
> I'll get my farm using 'squid -N &' when they next hit a quiet period (and
> I'm awake). This will also fix my HUP problem, the non-worker root-squid
> does indeed drop dead on HUP.
> 
> squid 3.1.9 on Solaris 9 64bit btw.
> 
> DW
> 
> > Amos

DW


[squid-users] tproxy

2010-11-26 Thread jiluspo
Would it be possible to run tproxy on a single ethernet, with the 
gateway, squid box, and clients on the same subnet (squid box as gateway)?
I'm trying to run tproxy in the lab on ubuntu 10.04; I don't know what 
else is missing/wrong. squidbox as gateway works fine without tproxy.

These private IPs would be replaced with public IPs in production.

squid box runs as gateway single ethernet.
squidbox:
gateway 192.168.0.254
ip 192.168.0.123

client:
gateway 192.168.0.123
ip 192.168.0.197

r...@ubuntu:~# uname -r
2.6.32-25-generic-pae

cat /boot/config-`uname -r` | grep -E 
'(NF_CONNTRACK=|TPROXY|XT_MATCH_SOCKET|XT_TARGET_TPROXY)'

CONFIG_NF_CONNTRACK=m
CONFIG_NETFILTER_TPROXY=m
CONFIG_NETFILTER_XT_TARGET_TPROXY=m
CONFIG_NETFILTER_XT_MATCH_SOCKET=m

iptables v1.4.4

libcap-dev 1:2.17-2ubuntu1
libcap2 1:2.17-2ubuntu1

sysctl.conf
net.ipv4.ip_forward=1
net.ipv4.conf.lo.rp_filter=0

/tproxy script:
{{{
#!/bin/sh
ip rule del fwmark 1 lookup 100
ip route del local 0.0.0.0/0 dev lo table 100
iptables -F
iptables -F -t mangle
iptables -F -t nat

iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT

iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --tproxy-mark 
0x1/0x1 --on-port 3129


ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
}}}

sysctl.conf:
net.ipv4.ip_forward=1
net.ipv4.conf.lo.rp_filter=0

r...@ubuntu:~# squid -v
Squid Cache: Version 3.1.9
configure options:  '--prefix=/usr' '--localstatedir=/var' 
'--libexecdir=${prefix}/lib/squid' '--srcdir=.' 
'--datadir=${prefix}/share/squid' '--sysconfdir=/etc/squid' 
'--enable-async-io' '--with-pthreads' '--enable-storeio=aufs' 
'--enable-epoll' '--enable-removal-policies=lru,heap' '--enable-snmp' 
'--enable-linux-netfilter' 
'--with-large-files' --with-squid=/root/squid-3.1.9


squid.conf has
http_port 3129 tproxy
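
For reference, these standard commands show whether each piece of the 
setup is active (an illustrative checklist):

  iptables -t mangle -L -nv   # DIVERT/TPROXY packet counters should grow
  ip rule show                # expect: fwmark 0x1 lookup 100
  ip route show table 100     # expect the local 0.0.0.0/0 route on lo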






[squid-users] Re: squid cache not updating?

2010-11-26 Thread J Webster

I have my cache mounted on a drive at /var/spool/squid.
The other day I tried to mount a new folder on the same drive, which 
is apparently not the best thing to do.
Since then, I am not sure if my squid cache is updating or not. It seems 
to be stuck at 35Gb use and 16% capacity.
Is there any way to check if the cache is updating? 




[squid-users] POP3 authentication

2010-11-26 Thread Helmut Hullen
Hallo, squid-users,

I've tried POP3 authentication with the script "pop3.pl" in 
"helpers/basic_auth/POP3" (written by Henrik Nordstrom) - it didn't work.

Many error messages in /var/log/warn, saying:

Nov 26 09:14:58 Arktur squid[2157]: Starting Squid Cache version 3.1.8  
for i486-slackware-linux-gnu...
Nov 26 09:15:06 Arktur squid[2157]: WARNING: basicauthenticator #1 (FD 9) exited
Nov 26 09:15:06 Arktur squid[2157]: WARNING: basicauthenticator #2 (FD 11) 
exited

[...]

Nov 26 09:15:07 Arktur squid[2157]: WARNING: basicauthenticator #20 (FD 47) 
exited
Nov 26 09:15:07 Arktur squid[2157]: WARNING: basicauthenticator #21 (FD 49) 
exited
Nov 26 09:15:07 Arktur squid[2157]: Too few basicauthenticator processes are 
running
Nov 26 09:15:07 Arktur squid[2157]: The basicauthenticator helpers are crashing 
too rapidly, need help!
Nov 26 09:15:07 Arktur squid[1928]: Exiting due to repeated, frequent failures


Changing to

squidauth.py (the POP3 variant)
or
squidauth.py (the IMAP variant)

(see "http://lateral.netmanagers.com.ar/stories/6.html")

solved the problem.
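
For anyone trying the same swap, the helper is wired in through squid.conf 
roughly like this (a sketch; the install path is whatever you chose):

  auth_param basic program /usr/local/bin/squidauth.py
  auth_param basic children 5
  auth_param basic realm proxy
  acl authenticated proxy_auth REQUIRED
  http_access allow authenticated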

Maybe you should add the Python scripts to "helpers/basic_auth/POP3".

Viele Gruesse!
Helmut


Re: [squid-users] SAMBAPREFIX

2010-11-26 Thread Helmut Hullen
Hallo, Amos,

Du meintest am 27.11.10:

>> it would be nice if "SAMBAPREFIX" is not hard coded in "helpers/
>> basic_auth/SMB/Makefile.in" but can be defined as a "configure"
>> option.
>>

> You will probably be wanting to tell this to the developers
> (squid-dev mailing list) rather than your fellow admins.
> http://bugs.squid-cache.org/show_bug.cgi?id=2959

Reported:   2010-06-18 04:35 MDT by Helmut Hullen

Viele Gruesse!
Helmut


Re: [squid-users] SAMBAPREFIX

2010-11-26 Thread Amos Jeffries

On 26/11/10 20:42, Helmut Hullen wrote:

Hallo, squid-users,

it would be nice if "SAMBAPREFIX" is not hard coded in "helpers/
basic_auth/SMB/Makefile.in" but can be defined as a "configure" option.



You will probably be wanting to tell this to the developers (squid-dev 
mailing list) rather than your fellow admins.

http://bugs.squid-cache.org/show_bug.cgi?id=2959

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.9
  Beta testers wanted for 3.2.0.3


RE: [squid-users] Caching youtube videos problem/ always getting TCP_MISS

2010-11-26 Thread Saurabh Agarwal
Hi All

This is what I had to do to successfully cache youtube videos. I tested 
with this URL: http://www.youtube.com/watch?v=7M-jsjLB20Y

Please follow the instructions given on 
http://wiki.squid-cache.org/ConfigExamples/DynamicContent/YouTube . Then I 
had the following lines in squid.conf. My storeurl.pl is also pasted below.

###Following refresh pattern and store url rewrite config was used

acl store_rewrite_list1 dstdomain .youtube.com .video.google.com
acl store_rewrite_list urlpath_regex \/(get_video\?|videodownload\?|videoplayback\?|watch\?)
#acl store_rewrite_list urlpath_regex \/(get_video\?|videodownload\?|videoplayback\?|watch\?|generate_204\?|docid=)
storeurl_access allow store_rewrite_list store_rewrite_list1
storeurl_access allow all
cache allow all
storeurl_rewrite_program /tmp/squid/storeurl.pl
storeurl_rewrite_children 1
storeurl_rewrite_concurrency 10

#youtube's videos
#refresh_pattern -i (get_video\?|videoplayback\?|videodownload\?|watch\?|generate_204\?|docid=)$ 5259487 % 5259487 ignore-no-cache override-lastmod override-expire ignore-reload negative-ttl=0
refresh_pattern -i (get_video\?|videoplayback\?|videodownload\?|watch\?) 5259487 % 5259487 ignore-private ignore-no-cache override-expire ignore-reload negative-ttl=0

##squid.conf portion ends

##contents of storeurl.pl
#!/usr/bin/perl
# Store-URL rewriter: maps the many per-server YouTube URLs onto one
# canonical URL so repeated fetches of the same video become cache hits.
# Note: with storeurl_rewrite_concurrency set, the channel ID must be
# echoed back followed by a space before the result.
$|=1;
while (<>) {
    chomp;
    @X = split;
    $x = $X[0];    # concurrency channel ID
    $_ = $X[1];    # the requested URL
    if (m/^http:(.*)\.youtube\.com\/videoplayback\?(.*)id=(.*?)&/) {
        print $x . " http://video-srv.youtube.com.SQUIDINTERNAL/get_video?video_id=" . $3 . "\n";
    } elsif (m/^http:(.*)\.youtube\.com\/generate_204\?(.*)id=(.*)/) {
        print $x . " http://video-srv.youtube.com.SQUIDINTERNAL/get_video?video_id=" . $3 . "\n";
    } elsif (m/^http:(.*)\.video\.google\.com\/(.*)docid=(.*?)&/) {
        print $x . " http://video-srv.youtube.com.SQUIDINTERNAL/get_video?video_id=" . $3 . "\n";
    } elsif (m/^http:\/\/(.*)\.youtube\.com\/get_video\?video_id=(.*)/) {
        print $x . " http://video-srv.youtube.com.SQUIDINTERNAL/get_video?video_id=" . $2 . "\n";
    } elsif (m/^http:(.*)\.youtube\.com\/watch.*v=(.*)&/) {
        print $x . " http://video-srv.youtube.com.SQUIDINTERNAL/get_video?video_id=" . $2 . "\n";
    } elsif (m/^http:(.*)\.youtube\.com\/watch.*v=(.*)$/) {
        print $x . " http://video-srv.youtube.com.SQUIDINTERNAL/get_video?video_id=" . $2 . "\n";
    } elsif (m/^http:(.*)\.youtube\.com\/get_video\?(.*)video_id=(.*?)&/) {
        print $x . " http://video-srv.youtube.com.SQUIDINTERNAL/get_video?video_id=" . $3 . "\n";
    } elsif (m/^http:\/\/74\.125(.*?)\/get_video\?video_id=(.*?)&origin=(.*?)\.youtube\.com/) {
        print $x . " http://video-srv.youtube.com.SQUIDINTERNAL/get_video?video_id=" . $2 . "\n";
    } else {
        # pass everything else through unchanged
        print $x . " " . $_ . "\n";
    }
}
##storeurl.pl ends
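
A quick sanity check of the rewriter from the command line. The video ID 
is the one from the test URL above; v1.youtube.com is a made-up server 
hostname; with concurrency enabled Squid prepends a channel number (here 
0), which the helper must echo back:

  $ echo '0 http://v1.youtube.com/get_video?video_id=7M-jsjLB20Y' | perl storeurl.pl
  0 http://video-srv.youtube.com.SQUIDINTERNAL/get_video?video_id=7M-jsjLB20Y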

Thank You.

Regards,
Saurabh

-Original Message-
From: Saurabh Agarwal [mailto:saurabh.agar...@citrix.com] 
Sent: Friday, November 26, 2010 3:10 PM
To: Amos Jeffries
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] Caching youtube videos problem/ always getting 
TCP_MISS

Hi Amos 

It works fine now. Youtube videos are being cached. There was a mistake in 
refresh_pattern.

Regards,
Saurabh

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Tuesday, November 23, 2010 4:47 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Caching youtube videos problem/ always getting 
TCP_MISS

On 23/11/10 23:50, Saurabh Agarwal wrote:
> Thanks Amos.
>
> I fixed the channel id problem by fixing storeurl rewriter perl script but 
> still the video/x-flv response is getting RELEASED in store.log instead of 
> SWAPOUT. Can you please read below cache.log and suggest what is still going 
> wrong? Now "Rewrote to" message prints the right transformed URL. After this 
> store tries to look up for the "6B2E83D66FC215C27ECFBA432AB7B5F6" key which 
> returns a TCP MISS for the same key lookup for 2nd and third tries as well. 
> Then one hash entry gets inserted with another key for the same big URL with 
> different hash key for 2nd and 3rd time as well. This new key is 
> "04BE27CFF614A3315F5CEB008464C453". I have also pasted HTTP response header 
> for video/x-flv content from cache.log. After this I see "Deferring starting 
> swapping out" message in cache.log. Can you please suggest why it is not 
> swapping out?
>
> I am pasting below a section of squid.conf file as well. After this is the 
> cache.log output.
>

I'm thinking you don't want to ignore the last-modified header. It seems 
to be usefully far in the past to cause caching for some period. 
Ignoring it may cause the refresh_pattern to fail in extending the 
storage time (% of last-modified age).

Other than that I don't really know. I have not spent time working out 
how to port the storeurl* feature yet, so I don't know its internals well.

RE: [squid-users] Caching youtube videos problem/ always getting TCP_MISS

2010-11-26 Thread Saurabh Agarwal
Hi Amos 

It works fine now. Youtube videos are being cached. There was a mistake in 
refresh_pattern.

Regards,
Saurabh

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Tuesday, November 23, 2010 4:47 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Caching youtube videos problem/ always getting 
TCP_MISS

On 23/11/10 23:50, Saurabh Agarwal wrote:
> Thanks Amos.
>
> I fixed the channel id problem by fixing storeurl rewriter perl script but 
> still the video/x-flv response is getting RELEASED in store.log instead of 
> SWAPOUT. Can you please read below cache.log and suggest what is still going 
> wrong? Now "Rewrote to" message prints the right transformed URL. After this 
> store tries to look up for the "6B2E83D66FC215C27ECFBA432AB7B5F6" key which 
> returns a TCP MISS for the same key lookup for 2nd and third tries as well. 
> Then one hash entry gets inserted with another key for the same big URL with 
> different hash key for 2nd and 3rd time as well. This new key is 
> "04BE27CFF614A3315F5CEB008464C453". I have also pasted HTTP response header 
> for video/x-flv content from cache.log. After this I see "Deferring starting 
> swapping out" message in cache.log. Can you please suggest why it is not 
> swapping out?
>
> I am pasting below a section of squid.conf file as well. After this is the 
> cache.log output.
>

I'm thinking you don't want to ignore the last-modified header. It seems 
to be usefully far in the past to cause caching for some period. 
Ignoring it may cause the refresh_pattern to fail in extending the 
storage time (% of last-modified age).

Other than that I don't really know. I have not spent time working out 
how to port the storeurl* feature yet, so I don't know its internals well.

Amos
-- 
Please be using
   Current Stable Squid 2.7.STABLE9 or 3.1.9
   Beta testers wanted for 3.2.0.3


[squid-users] SAMBAPREFIX

2010-11-26 Thread Helmut Hullen
Hallo, squid-users,

it would be nice if "SAMBAPREFIX" is not hard coded in "helpers/ 
basic_auth/SMB/Makefile.in" but can be defined as a "configure" option.


Viele Gruesse!
Helmut