Re: [squid-users] injecting small piece of html into pages retrieved

2008-03-22 Thread Andreas Pettersson
On fre, 2008-03-21 at 19:45 -0700, Edward Rosinzonsky wrote:
 Hi,
 
 I'd like to inject a small piece of HTML into any page retrieved
 through the proxy, e.g. change </head> to <script ...></script></head>.
 The HTML is static; that is, the same piece of HTML is to be injected
 into every page.  Is this possible?  If so, how?  Is there perhaps a
 plugin that could make this possible?

1. Create a URL rewriter that rewrites the page in question to
http://localhost/specialscript.pl

2. Have the script fetch the original page

3. Insert desired text using regexp and send to browser
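A minimal sketch of steps 2 and 3, using curl and sed. The snippet, the helper function name and the demo page are illustrative assumptions only, not anything squid or the rewriter interface provides:

```shell
#!/bin/sh
# Sketch of steps 2-3: fetch the original page, then insert a static
# snippet just before the closing </head> tag.
# SNIPPET and the canned demo page are assumptions for illustration.
SNIPPET='<script src="http://localhost/inject.js"></script>'

inject() {
    # insert $SNIPPET before </head>; '#' used as the sed delimiter
    # since the snippet contains '/'
    sed "s#</head>#${SNIPPET}</head>#"
}

# In the real script the input would come from the origin server, e.g.:
#   curl -s "$ORIGINAL_URL" | inject
# Demonstration on a canned page:
printf '<html><head><title>t</title></head><body>hi</body></html>\n' | inject
```

Note that sed only covers the simple static case; pages that spell </head> differently, or compressed responses, would need an HTML-aware filter.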

-- 
Andreas




[squid-users] BYPASS UPON FAILURE

2008-03-22 Thread Sadiq Walji
Hello,
I am new to Squid. We have Squid caching running on a server for our users,
and I have a query as follows:

When Squid fails, none of the users can browse, and we have to manually stop
Squid to bypass it. Is there any way/feature that enables Squid to be bypassed
automatically if and when it fails or has some problems?

Kindly assist
Thanks,
Sadiq Walji






Re: [squid-users] Squid Future (was Re: [squid-users] Squid-2, Squid-3, roadmap)

2008-03-22 Thread Amos Jeffries

Chris Woodfield wrote:
For our purposes (reverse proxy usage) we don't see any missing features 
from squid 3 that we would need - however, we'd like to see the code 
base mature some more before we trust it in production. Same reason that 
smart folks don't deploy a new Cisco IOS train until it hits the 3rd or 
4th rebuild.


However, I will use this opportunity to put forth my own wishlist of 
missing features (that AFAIK aren't even present in squid 3.0 at this 
time - if they are, please let me know!):


- Multithreading, multithreading, multithreading!


Pick a large number... ;-)



- Better handling of cache_dir failures:
What I mean here is, if you have multiple cache_dirs configured 
(presumably on separate disks) squid should not refuse to start if one 
is unavailable. It should scream loudly, yes, but should be able to 
carry on with the ones it can use. For bonus points, make squid capable 
of dropping a cache_dir that becomes unavailable during runtime.




Thanks for the reminder. I had some wishlist myself here. I've added 
this and mine to the official wishlist.


- Fix mem_cache bottleneck that effectively prohibits large files from 
being stored in squid's memory.


http://www.mail-archive.com/squid-users@squid-cache.org/msg52509.html


Hmm, not sure exactly what Adrian has planned there, beyond changing the 
underlying malloc/calloc system of squid to something else.

Added it to the 'undocumented features wishlist' anyway.

- Allow helper children (url_rewriters, etc) to send some sort of 
'pause' message back to squid to signal that that child is temporarily 
unavailable for new queries, and then a 'ready' message when it's 
available again.
(yes, this is kinda obscure - the issue here is a single-threaded 
rewriter helper app that occasionally has to re-read its rules database, 
and can't answer queries while it's doing so)


Um, sounds interesting. I've added it to the wishlist. Let's see if 
anyone picks it up.




- A better mechanism (B-tree maybe?) for storing cache contents such 
that cached object URIs can be quickly searched via path or regex for 
reporting/purging purposes.


We'd have to find a tree that is faster than, or as fast as, a hash over a 
very large dataset. Otherwise it's not worth justifying an overall 
performance degradation for one or two relatively minor occurrences.




- Oh, and I want a pony.


Well... if you are willing to sponsor the funding of it, that can be 
arranged more easily than most of the other requests.



Amos



Thanks,

-C

On Mar 16, 2008, at 9:18 PM, Adrian Chadd wrote:

Just to summarise the discussion, both public and private.

* Squid-3 is receiving the bulk of the active core Squid developers' focus;

* Squid-2 won't be actively developed at the moment by anyone outside
  of paid commercial work;

* I've been asked (and agreed, at the moment) to not push any big
  changes to Squid-2.

If your organisation relies on Squid-2 and you haven't any plans to
migrate to Squid-3, then there are a few options.

* Discuss migrating to Squid-3 with the Squid-3 developers, see what
  can be done.

* Discuss commercial Squid-2 support/development with someone (eg
  Xenion/me).

* Migrate away from Squid to something else.

Obviously all of us would prefer that users wouldn't migrate away from
Squid in general, so if the migration to Squid-3 isn't on your TODO list
for whatever reason then it's in your best interests -right now- to
discuss this out in the open.

If you don't think that Squid as a project is heading in a direction
that is useful for you, then it's in your best interests -right now- to
discuss this with the Squid development team rather than ignoring the
issue or discussing it privately. I'd prefer open discussions which
everyone can view and contribute towards.


If there's enough interest in continuing the development of Squid-2
along my Roadmap or any other plan then I'm interested in discussing
this with you. If the interest is enough to warrant beginning larger
changes to Squid-2 to support features such as IPv6, threading and
improved general performance then I may reconsider my agreement with the
Squid-3 developers (and deal with whatever pain that entails.)

At the end of the day, I'd rather see something that an increasing
number of people on the Internet will use and - I won't lie here -
whatever creates a self-sustaining project, both from community and
financial perspectives.





Adrian






--
Please use Squid 2.6STABLE17+ or 3.0STABLE1+
There are serious security advisories out on all earlier releases.


Re: [squid-users] BYPASS UPON FAILURE

2008-03-22 Thread Amos Jeffries

Sadiq Walji wrote:

Hello,
I am new to Squid. We have Squid caching running on a server for our users,
and I have a query as follows:

When Squid fails, none of the users can browse, and we have to manually stop
Squid to bypass it. Is there any way/feature that enables Squid to be bypassed
automatically if and when it fails or has some problems?



Which squid version are you running?

Squid 2.6+ restarts itself as best it can after fatal but temporary errors.

If you are having a problem so fatal that squid dies long-term, that 
problem needs to be found and fixed.



Amos
--
Please use Squid 2.6STABLE17+ or 3.0STABLE1+
There are serious security advisories out on all earlier releases.


Re: [squid-users] you tube +delay pool

2008-03-22 Thread s f
hi,

here are the things you mentioned:

acl our_networks src x.x.x.x/x
delay_pools 1
delay_class 1 2
delay_parameters 1 -1/-1 2048/8000
#delay_parameters 1 4096/8000
acl dp url_regex -i \.mp3$ \.wmv$ \.avi$ \.wma$ \.mpe?g$
acl dp1 rep_mime_type video/flv
#acl youtube url_regex -i youtube
acl youtube dstdomain .youtube.com  # rep_mime_type didn't work, so
# currently I am using this, but since youtube has
delay_access 1 allow dp our_networks
delay_access 1 allow dp1 our_networks
delay_access 1 allow youtube our_networks
delay_access 1 deny all

The delay pool is working for the dp and youtube acls, but it has no
effect on the youtube videos themselves.


On Fri, Mar 21, 2008 at 7:19 AM, Chris Robertson [EMAIL PROTECTED] wrote:

 s f wrote:
   Hello,
  
   I am trying to put you tube and other flv videos in delay pool
  
   acl flvvideo rep_mime_type video/flv
   delay_access 1 allow flvvideo our_networks
  
   But its not working.
  
   How can I do that?
  

  That looks like a good start, but it lacks context.  How is the delay
  pool set up?  Do you only want to delay Flash video to certain clients?
  If not, why are you specifying our_networks on the delay_access line
  (assuming it is appropriately labeled)?

  Giving a more thorough accounting of your intentions, and providing your
  full conf file (stripped of comments), will allow us to help you better.

  Chris



Re: [squid-users] OT: Removing unused lines

2008-03-22 Thread paul cooper
grep ^[A-Za-z] /etc/squid/squid.conf

to include lines that start with spaces

grep ^[A-Za-z\ ] /etc/squid/squid.conf
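For what it's worth, a variant of the above that also drops blank lines and indented comments (GNU grep assumed for -E; the sample input below is just for illustration):

```shell
# Keep only the effective configuration: drop comment lines (optionally
# indented) and blank lines.  Against the real config this would be:
#   grep -vE '^[[:space:]]*(#|$)' /etc/squid/squid.conf
# Demonstration on a small sample:
printf '# comment\nhttp_port 3128\n\n  # indented comment\nacl all src 0.0.0.0/0\n' \
  | grep -vE '^[[:space:]]*(#|$)'
```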





Re: [squid-users] A bug? (was cache deny and the 'public' token)

2008-03-22 Thread Ric


On Mar 21, 2008, at 3:06 PM, Henrik Nordstrom wrote:



On Thu, 2008-03-20 at 20:11 -0700, Ric wrote:
In a reverse-proxy setup, I would like to prevent requests authenticated
via cookies from being cached, while retaining the ability of the
Cache-Control: public token to override this behavior as if it were a
regular authenticated request.

Will the following work?

  acl public rep_header Cache-Control public
  cache allow public

  acl auth_cookie req_header Cookie -i  auth=
  cache deny auth_cookie


Looks reasonable to me.

Try it and see how it fares.

Regards
Henrik



Okay... after much trial and error, I think this is a bug.

I stripped my config down to the bare essentials and these two lines  
consistently break caching.


acl public rep_header Cache-Control public
cache allow public

In fact, if I try to use any rep_header acl in the cache directive,  
the object is no longer cached.  Other acls seem to work (although I  
haven't tested all of them) but not rep_header.


Can anyone confirm this bug?

Ric





[squid-users] transparent proxy bypass https traffic

2008-03-22 Thread Razvan Grigore
Hello,

I'm using squid 2.6.STABLE6 on CentOS. I successfully configured squid
both as a transparent proxy and a normal proxy; it's working fine for
http and https in normal mode, but in transparent mode https is a
challenge.

http_port 3128
http_port 3129 transparent

i'm redirecting with iptables like this:

iptables -t nat -A PREROUTING -i eth0 -p tcp -d ! 10.0.0.0/8 --dport
80 -m mark --mark 0x0 -j REDIRECT --to-port 3129

I have 2 types of clients, that are accessing internet through squid
or directly.

How can i bypass squid for https traffic ONLY for squid users?

I tried like this:

iptables -t nat -A PREROUTING -i eth0 -p tcp -d ! 10.0.0.0/8 --dport
443 -m mark --mark 0x0 -j REDIRECT --to-port 3129

but it gives:

2008/03/22 16:54:41| parseHttpRequest: Requestheader contains NULL characters
2008/03/22 16:54:41| parseHttpRequest: Unsupported method ''
2008/03/22 16:54:41| clientReadRequest: FD 19 (10.x.x.3:1104) Invalid Request

I think that I can make iptables rules for every IP in squid to allow
direct https, but I want to avoid this.

Is squid 3 capable, through ssl bump, of allowing https traffic without
breaking the certificate? Or at least without notifying the user?

Thank you!


Re: [squid-users] transparent proxy bypass https traffic

2008-03-22 Thread Amos Jeffries

Razvan Grigore wrote:

Hello,

I'm using squid 2.6.STABLE6 on CentOS. I successfully configured squid
both as a transparent proxy and a normal proxy; it's working fine for
http and https in normal mode, but in transparent mode https is a
challenge.

http_port 3128
http_port 3129 transparent

i'm redirecting with iptables like this:

iptables -t nat -A PREROUTING -i eth0 -p tcp -d ! 10.0.0.0/8 --dport
80 -m mark --mark 0x0 -j REDIRECT --to-port 3129

I have 2 types of clients, that are accessing internet through squid
or directly.

How can i bypass squid for https traffic ONLY for squid users?


What do you mean by this?
1) Explicitly configured proxy clients should have no problems with HTTPS.
2) Transparently redirecting encrypted traffic to squid 2.6 will fail 
since squid is expecting HTTP, not encrypted binary data.




I tried like this:

iptables -t nat -A PREROUTING -i eth0 -p tcp -d ! 10.0.0.0/8 --dport
443 -m mark --mark 0x0 -j REDIRECT --to-port 3129

but it gives:

2008/03/22 16:54:41| parseHttpRequest: Requestheader contains NULL characters
2008/03/22 16:54:41| parseHttpRequest: Unsupported method ''
2008/03/22 16:54:41| clientReadRequest: FD 19 (10.x.x.3:1104) Invalid Request

I think that I can make iptables rules for every IP in squid to allow
direct https, but I want to avoid this.


2.6 has no capability for transparent HTTPS. If you continue with that 
version of squid you will have to leave the outbound HTTPS traffic 
un-redirected. Configured clients will use the proxy even if the port is 
open; others will get working direct HTTPS traffic.
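One way to implement that bypass is to exempt port 443 from interception entirely, so only port 80 ever reaches squid. A sketch reusing the interface, networks and ports from the rules earlier in this thread (assumptions; adjust to taste):

```shell
# Intercept only HTTP; an early ACCEPT in the nat table stops port 443
# from ever matching a REDIRECT rule, so HTTPS goes out direct.
iptables -t nat -I PREROUTING -i eth0 -p tcp --dport 443 -j ACCEPT
iptables -t nat -A PREROUTING -i eth0 -p tcp -d ! 10.0.0.0/8 --dport 80 \
  -m mark --mark 0x0 -j REDIRECT --to-port 3129
```

This keeps explicitly configured clients on the proxy for both protocols, while transparent clients get direct HTTPS.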




Is squid 3 capable, through ssl bump, of allowing https traffic without
breaking the certificate? Or at least without notifying the user?


Yes, Squid 3-HEAD (3.1 alpha) can cope with this. You will need to build 
it yourself from sources, but give it a try.


http://www.squid-cache.org/Versions/v3/HEAD/


Thank you!


Thank you.

Amos
--
Please use Squid 2.6STABLE17+ or 3.0STABLE1+
There are serious security advisories out on all earlier releases.


Re: [squid-users] Squid Future (was Re: [squid-users] Squid-2, Squid-3, roadmap)

2008-03-22 Thread Henrik Nordstrom
On Sat, 2008-03-22 at 20:14 +1300, Amos Jeffries wrote:
  http://www.mail-archive.com/squid-users@squid-cache.org/msg52509.html
 
 Hmm, not sure exactly what Adrian has planned there, beyond changing the 
 underlying malloc/calloc system of squid to something else.
 Added it to the 'undocumented features wishlist' anyway.

In Squid-2 the function copying data from cache_mem does a linear search
on each copy, starting from the first mem_node of the object, causing
CPU usage for TCP_MEM_HIT processing to grow quadratically with the size
of the cached object.

Squid-3 is different and uses a splay tree for the memory nodes of the
object, and should behave a lot better in this regard.


  - A better mechanism (B-tree maybe?) for storing cache contents such 
  that cached object URIs can be quickly searched via path or regex for 
  reporting/purging purposes.
 
 We'd have to find a tree that is faster than, or as fast as, a hash over a 
 very large dataset. Otherwise it's not worth justifying an overall 
 performance degradation for one or two relatively minor occurrences.

How much this matters varies a lot depending on the installation. For
reverse proxy installations, selective purging of the cache is very
important.

However, there are other approaches which give the same end result
independently of the store, for example the purge-list generation
counter used by Varnish.

Regards
Henrik



Re: [squid-users] A bug? (was cache deny and the 'public' token)

2008-03-22 Thread Henrik Nordstrom
On Sat, 2008-03-22 at 03:29 -0700, Ric wrote:

 Okay... after much trial and error, I think this is a bug.
 
 I stripped my config down to the bare essentials and these two lines  
 consistently break caching.
 
 acl public rep_header Cache-Control public
 cache allow public
 
 In fact, if I try to use any rep_header acl in the cache directive,  
 the object is no longer cached.  Other acls seem to work (although I  
 haven't tested all of them) but not rep_header.

Probably only request data is available in the cache directive, not the
reply data..

Checking.. yes, the cache directive is evaluated before the request is
forwarded, which means that any acl that depends on the response will
always be false there.

Regards
Henrik



Re: [squid-users] A bug? (was cache deny and the 'public' token)

2008-03-22 Thread Ric


On Mar 22, 2008, at 2:46 PM, Henrik Nordstrom wrote:


On Sat, 2008-03-22 at 03:29 -0700, Ric wrote:


Okay... after much trial and error, I think this is a bug.

I stripped my config down to the bare essentials and these two lines
consistently break caching.

acl public rep_header Cache-Control public
cache allow public

In fact, if I try to use any rep_header acl in the cache directive,
the object is no longer cached.  Other acls seem to work (although I
haven't tested all of them) but not rep_header.


Probably only request data is available in the cache directive, not the
reply data..

Checking.. yes, the cache directive is evaluated before the request is
forwarded, which means that any acl that depends on the response will
always be false there.

Regards
Henrik




Well... that's unfortunate.

Any way to fix that?  Or do you have another suggestion?

To recap... In a reverse-proxy setup, I would like to prevent requests
authenticated via cookies from being cached, while retaining the ability
of the Cache-Control: public token to override this behavior as if it
were a regular authenticated request.


Ric




Re: [squid-users] A bug? (was cache deny and the 'public' token)

2008-03-22 Thread Henrik Nordstrom
On Sat, 2008-03-22 at 15:38 -0700, Ric wrote:
  Checking.. yes, the cache directive is evaluated before the request is
  forwarded, which means that any acl that depends on the response will
  always be false there.

 Any way to fix that?

Fixable, but coding required.

 Or do you have another suggestion?

Fix your website to work with the HTTP cache model?

This involves the simple rule that a URL is either public or authenticated.
To handle things like navigation menus, edit buttons etc. on public
content, draw them using javascript instead of delivering them in the
main public html blob, reducing your public/private scope to just the
javascript include (or to none, if basing the include path on cookies).

Regards
Henrik