[squid-users] Customizing ERR_TOO_BIG

2007-12-05 Thread Uto Cen
Hi,
this is my setup for squid as a reverse proxy:

client -> squid -> origin server with apache virtual hosting for
www.mysite1.com and www.mysite2.com


I would like to have a global file upload limit of 4 MB across
all virtual hosting domains.
So in squid.conf I have:
request_body_max_size 4 MB

and I would like to customize the error message to the client based on
the virtual domain that it's accessing, for example:

www.mysite1.com should show an error page from www.mysite1.com/errtoobig.html
and
www.mysite2.com should show an error page from www.mysite2.com/errtoobig.html

So, I went to ERR_TOO_BIG and added this line in the meta refresh:

<meta http-equiv="refresh" content="0; URL=http://%V/errtoobig.html">

but Squid doesn't recognize %V as the host-header.

What is the HTTP_HOST header variable that Squid expects, or is there
a better way to accomplish what I'm trying to achieve here?
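For reference, Squid's error templates expand a small set of % macros; if %H is the server-host macro in your version (an assumption worth verifying against errorpage.c), the meta refresh line might look like:

<meta http-equiv="refresh" content="0; URL=http://%H/errtoobig.html">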

Uto


[squid-users] Can't assign requested address

2007-12-05 Thread nix_kot
Hello, squid-users.

My cache.log contains very many messages like these:

2007/12/06 08:44:37| commBind: Cannot bind socket FD 7703 to *:0: (49) Can't assign requested address
2007/12/06 08:44:37| commBind: Cannot bind socket FD 7703 to *:0: (49) Can't assign requested address
2007/12/06 08:44:38| commBind: Cannot bind socket FD 7697 to *:0: (49) Can't assign requested address
2007/12/06 08:44:38| commBind: Cannot bind socket FD 7697 to *:0: (49) Can't assign requested address
2007/12/06 08:49:10| comm_accept: FD 80: (53) Software caused connection abort
2007/12/06 08:49:10| httpAccept: FD 80: accept failure: (53) Software caused connection abort
2007/12/06 08:50:03| parseHttpRequest: Unsupported method '..CONNECT'
2007/12/06 08:50:03| clientReadRequest: FD 103 Invalid Request
2007/12/06 08:52:31| sslReadServer: FD 91: read failure: (54) Connection reset by peer

I don't know what this is.
Squid restarts every few minutes.
Users see the message "Can't assign requested address" in the browser when opening a page.

And during this time squid loads the whole processor (80-90%).
-- 
Best regards,
 nix_kot  mailto:[EMAIL PROTECTED]



Re: [squid-users] COSS

2007-12-05 Thread Adrian Chadd
On Wed, Dec 05, 2007, s f wrote:
> Hello,
> 
> Do you use the COSS storage scheme? Do you prefer it or not? I was
> thinking of using COSS over aufs. I am no expert in this field, so I want
> your valuable suggestions on this.

COSS works fine in Squid-2.6 and Squid-2.HEAD. It works well for small
objects (under ~100 KB).

You can configure squid to use COSS for small objects and put the rest on
AUFS.
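A sketch of that split (the paths, sizes and 100 KB cut-over are illustrative; assumes the min-size/max-size cache_dir options of Squid-2.6):

# objects up to 100 KB go to the COSS stripe, everything larger to AUFS
cache_dir coss /var/spool/squid/coss 1024 max-size=102400 block-size=512
cache_dir aufs /var/spool/squid/aufs 10240 16 256 min-size=102401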


Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -


Re: [squid-users] process load

2007-12-05 Thread Adrian Chadd
On Wed, Dec 05, 2007, Amos Jeffries wrote:

> >So the expensive youtube regexp ACL will only be processed by requests 
> >from clientA.
> >Requests from clientB won't ever hit the youtube ACL lookup.
> >
> >If you know how to craft ACLs then you can avoid almost all of the 
> >penalties.
> >
> >Adrian
> 
> Adrian! stop encouraging the regexp-addicts. :-)
> 
> We're trying to wean them off the unnecessary use of slow ACL remember? ;)

It's not always unnecessary! :)

Hah, then you're going to hate one of the things I might have in the pipeline
if I get time/funding - pushing some access control lookups into external
processes which only do things like regular expression processing.
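Squid's existing external_acl_type hook already sketches the shape of that (the helper path is illustrative; check your version's squid.conf documentation for the available format tokens):

# the helper reads one "host path" line per lookup on stdin
# and answers "OK" or "ERR"
external_acl_type urlregex ttl=60 children=5 %DST %PATH /usr/local/bin/url-regex-helper
acl slow_regex external urlregex
http_access deny slow_regex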




Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -


Re: [squid-users] read_timeout and "fwdServerClosed: re-forwarding"

2007-12-05 Thread Adrian Chadd
On Wed, Dec 05, 2007, Chris Hostetter wrote:

> : Hmm.. might be a good idea to try Squid-2.HEAD. This kind of thing
> : behaves a little differently there than in 2.6..
> 
> Alas ... I don't think I could convince my boss to get on board the idea 
> of using a devel release.  Then again, I'm not too clear on how 
> branch/release management is done in squid ... do merges happen from 
> 2.HEAD to 2.6 (in which case does 2.6.STABLE17 have the behavior you are 
> referring to?) or will 2.HEAD ultimately become 2.7 once it's more stable?

Squid-2.HEAD should eventually become Squid-2.7.

> So it kind of seems like I'm out of luck, right?  My only option being to 
> try 2.HEAD, which *may* have the behavior I'm describing.

It's part and parcel of free software. We can be paid to test it in a lab
and give you a certain answer if you'd like. :)


-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -


[squid-users] solved - dns timeout , but working dns servers. Unable to determine IP address from host name

2007-12-05 Thread phil curb
I was getting this error using the Windows port of squid.

It seems squid was not picking up the DNS servers properly; I had to
set both of them with dns_nameservers in squid.conf.

Then it worked.

The short story, with the answer:
I ran ipconfig /all.
It shows 10.0.0.138 as the DNS server, and 192.168.0.1 as
my "Gateway", i.e. the router. But that same device, like
most NAT routers, is a DNS server too.

As you can see, my NAT router is slightly weird like
that. It is a Speedtouch 546 and it seems to have 2 IP addresses:
I can browse to 192.168.0.1 or 10.0.0.138 and get to
the router interface either way.

DNS worked; I could browse, and wireshark showed it working (DNS
uses UDP, not TCP). It showed:

DNS query:    192.168.0.2 --> 10.0.0.138
DNS response: 192.168.0.1 --> 192.168.0.2

As you can see, the query goes to one DNS IP and the
response comes from the other one. Maybe that is part
of the reason for the problem, in that squid needed to
know both DNS server IPs.

I reckon squid was only picking up 10.0.0.138 as the DNS
server, and that was not enough.

When I set

dns_nameservers 192.168.0.1 10.0.0.138

it worked.

I knew the DNS servers were working because I could browse when not
using the squid proxy. Now I can browse through it too, since I fixed it up.

Hope this reads OK; I am a little dizzy.
jameshanley39







[squid-users] Squid Child Cache, X-Forward

2007-12-05 Thread Jason Gauthier
All,

  Sorry for the seemingly weak question.  I'm using squid as a parent
right now, and receiving X-Forwarded-For information.
I want to start using squid as a child as well... but I cannot see how
to 'send' the X-Forwarded-For information.

Hope that makes sense.
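For reference, the directive that controls this in Squid 2.6 is forwarded_for; a minimal sketch:

# when on (the default), squid appends the client's address to the
# X-Forwarded-For header of requests it relays onward
forwarded_for on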

Thanks,

Jason


Re: [squid-users] auto blacklist users

2007-12-05 Thread Adrian Chadd
I've been trialling a set of test commercial services locally, including
proxy site filtering. I've written a series of external ACL helper plugins
to do things like phishtank/safebrowse filtering and matching against some
malware RBL lists and these proxy lists.

Paying clients can download the proxy site lists and also submit new proxy
site lists and help correct errors.

I wasn't planning on going live with this until the appliance was built (early
next year) but I'm happy to break out just this particular module and offer
it at a discounted rate per-server.

It's commercial because someone has to keep the lists updated and write new
modules. As a server admin (or an appliance buyer!) you only have to care
when the automated updates stop. Other than that, things will just "keep 
working."

If you're interested in this then please let me know and I'll pass on some
per-server and site pricing.

Thanks!




Adrian


On Wed, Dec 05, 2007, ian j hart wrote:
> Hello.
> 
> [sorry, slightly off topic]
> 
> I'm the ICT technician of a school. I have squid running to make the most
> of our bandwidth. Our ISP provides some content blocking but this is
> proving ineffective against the proliferation of proxy sites.
> 
> I've started to monitor and block sites with squid ACLs. This is also not
> so effective as there are 1200 users looking for new sites and only 1 user
> trying to block them.
> 
> Since there is no punishment for hitting any DENY ACL there's no reason
> for them to stop.
> 
> What I need is to apply some back pressure, i.e. automatically block
> persistent offenders.
> 
> Does anyone have anything like this?
> 
> N.B. This has to be user based. Host/IP based will not work due to the
> hot seating.
> 
> Thanks
> 
> -- 
> ian j hart

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] process load

2007-12-05 Thread Amos Jeffries
> Hi,
>
> Adrian, though being a newbie I know about that, though not perfectly :D.
>
> I basically have a 2.5 GHz computer with 512 MB RAM. With that number
> of users hitting squid, how many dstdomain ACLs are considered safe?
>
> I know I can check the process load by adding a few ACLs at a time. But I
> can't ask management each week to replace the CPU or memory.
>
> So I am here for help. You guys must have a hell of a lot of experience;
> please tell me rough figures.

No, not really. dstdomain can handle a few hundred thousand rules on
a modern fast server. On your 2.5 GHz box with just 512 MB you should see no
problem with a few thousand.
Expect 10,000 acl entries to take less than ~3 MB of RAM and around 2000
cycles of processing time.

Particularly if you use the per-client-subnet filtering methods Adrian
mentioned (below) to speed things up even further.
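A common pattern for lists that large is to load them from a file (a sketch; the path is illustrative):

# one domain per line in the file, e.g. ".example.com"
acl blocked_sites dstdomain "/etc/squid/blocked_domains.txt"
http_access deny blocked_sites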

Amos


>
> Regards
> Rishav Upadhaya
> Future System Administrator
> Current Support Officer
>
> On 12/5/07, Amos Jeffries <[EMAIL PROTECTED]> wrote:
>> Adrian Chadd wrote:
>> > ACLs are evaluated short-circuit. If you have this:
>> >
>> > acl clientA src 1.2.3.0/24
>> > acl clientB src 1.2.4.0/24
>> > acl youtube (expensive regexp)
>> > acl microsoft (expensive regexp)
>> >
>> > http_access deny clientA youtube
>> > http_access deny clientB microsoft
>> >
>> > the http_access lines are evaluated in order from top to bottom, and
>> stop being
>> > evaluated across each http_access line if one of the ACLs fails.
>> >
>> > So the expensive youtube regexp ACL will only be processed by requests
>> from clientA.
>> > Requests from clientB won't ever hit the youtube ACL lookup.
>> >
>> > If you know how to craft ACLs then you can avoid almost all of the
>> penalties.
>> >
>> > Adrian
>>
>> Adrian! stop encouraging the regexp-addicts. :-)
>>
>> We're trying to wean them off the unnecessary use of slow ACL remember?
>> ;)
>>
>> Amos
>>
>




Re: [squid-users] read_timeout and "fwdServerClosed: re-forwarding"

2007-12-05 Thread Chris Hostetter

Sorry for the late reply; I was seriously sick last week and basically 
dead to the world...

: > The problem I'm running into is figuring out a way to get the analogous 
: > behavior when the origin server is "up" but taking "too long" to respond 
: > to the validation requests.   Ideally (in my mind) squid would have a 

: Hmm.. might be a good idea to try Squid-2.HEAD. This kind of thing
: behaves a little differently there than in 2.6..

Alas ... I don't think I could convince my boss to get on board the idea 
of using a devel release.  Then again, I'm not too clear on how 
branch/release management is done in squid ... do merges happen from 
2.HEAD to 2.6 (in which case does 2.6.STABLE17 have the behavior you are 
referring to?) or will 2.HEAD ultimately become 2.7 once it's more stable?


: > "read_timeout" was the only option I could find that seemed to relate to 
: > how long squid would wait for an origin server once connected -- but it 
: > has the retry problems previously discussed.  Even if it didn't retry, and 
: > returned the stale content as soon as the read_timeout was exceeded, 
: > I'm guessing it wouldn't wait for the "fresh" response from the origin 
: > server to cache it for future requests.
: 
: read_timeout in combination with forward_timeout should take care of the
: timeout part...

What do you mean by "in combination with forward_timeout"?
forward_timeout is just the 'connect' timeout for origin server requests, 
right?  So I guess you mean that if I have a magic value of XX seconds 
that I'm willing to wait for data to come back, I need to set 
forward_timeout and read_timeout such that they add up to XX, right?  But as 
you say, that just solves the timeout problem; it doesn't get me stale 
content.
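A sketch of that arithmetic for XX = 60 seconds (the split is illustrative; check your version's squid.conf documentation for the exact semantics of each timeout):

forward_timeout 10 seconds
read_timeout 50 seconds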

In my case, I'm not worried about the "connect" time for the origin server 
-- if it doesn't connect right away, give up; no problem there.  It's 
getting stale content returned if the total request time exceeds XX 
seconds that I'm worried about (without getting a bunch of 
automatic retries).


So it kind of seems like I'm out of luck, right?  My only option being to 
try 2.HEAD, which *may* have the behavior I'm describing.


: > for a fresh response) -- but it doesn't seem to work as advertised (see 
: > bug#2126).
: 
: Haven't looked at that report yet.. but a guess is that the refresh
: failed due to read_timeout?

(Actually that was totally orthogonal to the read_timeout issues ... with 
refresh_stale_hit set to Y seconds, all requests are still considered cache 
hits for up to Y seconds after they expire -- with no attempt to validate.)


-Hoss


Re: [squid-users] Best configuration parameters for effective caching

2007-12-05 Thread Chris Robertson

Arun S wrote:

Sorry for the wrong subject in the previous mail

-- Forwarded message --
From: Arun S <[EMAIL PROTECTED]>
Date: 5 Dec 2007 10:13
Subject: Re: [squid-users] Squid-2.6.STABLE17 available
To: squid-users@squid-cache.org


Hi list,

Can someone please suggest the best configuration parameters like
cache size, cache algorithm, FQDN memory size, etc. for Squid to cache
effectively?

--
Regards,
Arun S.
  


http://www.squid-cache.org/mail-archive/squid-users/200701/0433.html

To add to that, use a log analyzer such as Scalar (check the links at 
http://www.squid-cache.org/Scripts/ for others) to gauge your usage 
patterns.  Set maximum_object_size and the replacement policies to suit 
your goals (LFUDA with a large maximum_object_size for byte hit ratio, GDSF 
with a smaller maximum_object_size for object hit ratio).
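A sketch of the byte-hit-ratio tuning (the size is illustrative):

maximum_object_size 100 MB
cache_replacement_policy heap LFUDA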


Chris


Re: [squid-users] process load

2007-12-05 Thread Chris Robertson

s f wrote:

Hi,

Adrian, though being a newbie I know about that, though not perfectly :D.

I basically have a 2.5 GHz computer with 512 MB RAM. With that number
of users hitting squid, how many dstdomain ACLs are considered safe?

I know I can check the process load by adding a few ACLs at a time. But I
can't ask management each week to replace the CPU or memory.

So I am here for help. You guys must have a hell of a lot of experience;
please tell me rough figures.

Regards
Rishav Upadhaya
Future System Administrator
Current Support Officer
  


Your server should be able to handle hundreds of thousands of entries in a 
dstdomain acl.  As for how many distinct dstdomain acls you can 
define...  That's probably a ridiculously large number as well 
(especially given 500 clients).


Chris


Re: [squid-users] FTP through Squid and pf.conf with load balancing dsl

2007-12-05 Thread Chris Robertson

Matus UHLAR - fantomas wrote:

On 04.12.07 10:54, Chris Robertson wrote:
  
To make the server set up the data connection, passive FTP is the 
correct choice (http://en.wikipedia.org/wiki/FTP#Connection_Methods).


Whether that makes the remote server any happier about the data 
connection originating from a different IP from the control, I can't say.



I think you have misread it. The data connection is opened by the server
in an active/PORT connection. With a passive connection, the client opens both
connections (control and data), and in this case the server can reject the
data connection if the client makes it from a different IP.
  


I guess it all comes down to definitions.  I interpret "In passive mode, 
the FTP server opens a random port..." as the server setting up the data 
connection (considering the server controls what port is used), but I 
can see the other angle, with the client then initiating a connection to 
that port.


With active mode FTP, the server would also be able to refuse to 
initiate a connection to a different host than was sending the 
commands.  Passive, or active, a client specifying a different IP for 
data than that used for the control is FXP 
(http://en.wikipedia.org/wiki/File_eXchange_Protocol), and is disabled 
by default on many FTP servers (original poster's included).


In any case, to help with the original issue...

acl FTP proto FTP
tcp_outgoing_address 192.168.32.15 FTP

...will ensure that all FTP data connections use the listed IP address on a 
multi-IP machine.  The proto FTP acl could also be used to send all FTP 
transfers to a specific parent proxy, outside of the load balancing setup, 
with cache_peer_access.
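A sketch of that cache_peer_access variant (the parent's address and port are illustrative):

# route FTP through a dedicated parent, outside the load balancing
cache_peer 192.0.2.10 parent 3128 0 no-query name=ftp_parent
cache_peer_access ftp_parent allow FTP
cache_peer_access ftp_parent deny all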


Chris


[squid-users] auto blacklist users

2007-12-05 Thread ian j hart
Hello.

[sorry, slightly off topic]

I'm the ICT technician of a school. I have squid running to make the most
of our bandwidth. Our ISP provides some content blocking but this is
proving ineffective against the proliferation of proxy sites.

I've started to monitor and block sites with squid ACLs. This is also not
so effective as there are 1200 users looking for new sites and only 1 user
trying to block them.

Since there is no punishment for hitting any DENY ACL there's no reason
for them to stop.

What I need is to apply some back pressure, i.e. automatically block
persistent offenders.

Does anyone have anything like this?

N.B. This has to be user based. Host/IP based will not work due to the
hot seating.

Thanks

-- 
ian j hart


[squid-users] COSS

2007-12-05 Thread s f
Hello,

Do you use the COSS storage scheme? Do you prefer it or not? I was
thinking of using COSS over aufs. I am no expert in this field, so I want
your valuable suggestions on this.

My cache data are of no importance, so I don't mind losing them on a restart.

Regards,
Roshan


Re: [squid-users] Best configuration parameters for effective caching

2007-12-05 Thread Arun S
Thank you, Tek, for the configuration directives.

I will give it a shot.

Regards,
Arun S.

On 05/12/2007, Tek Bahadur Limbu <[EMAIL PROTECTED]> wrote:
> Hi Arun,
>
>
> Arun S wrote:
> > Sorry for the wrong subject in the previous mail
> >
> > -- Forwarded message --
> > From: Arun S <[EMAIL PROTECTED]>
> > Date: 5 Dec 2007 10:13
> > Subject: Re: [squid-users] Squid-2.6.STABLE17 available
> > To: squid-users@squid-cache.org
> >
> >
> > Hi list,
> >
> > Can someone please suggest the best configuration parameters like
> > cache size, cache algorithm, FQDN memory size, etc. for Squid to cache
> > effectively?
>
> There is no magic configuration for an effective Squid cache. It depends
> upon many factors like the number of users, bandwidth pipe, hardware limits,
> Squid version, operating system, etc.
>
> But you can try the parameters below:
>
> cache size = 10 GB
> cache_replacement_policy = GDSF
> memory_replacement_policy = GDSF
> ipcache_size = 8192
> fqdncache_size = 8192
> Storage Scheme = AUFS
> cache mem = 128 MB
>
>
>
> Thanking you...
>
>
> >
> > --
> > Regards,
> > Arun S.
> >
> >
>
>
> --
>
> With best regards and good wishes,
>
> Yours sincerely,
>
> Tek Bahadur Limbu
>
> System Administrator
>
> (TAG/TDG Group)
> Jwl Systems Department
>
> Worldlink Communications Pvt. Ltd.
>
> Jawalakhel, Nepal
>
> http://www.wlink.com.np
>
> http://teklimbu.wordpress.com
>
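For reference, Tek's suggestions map roughly onto these squid.conf directives (a sketch; the cache_dir path and type are illustrative):

# 10 GB AUFS cache, GDSF replacement on disk and in memory
cache_dir aufs /var/spool/squid 10240 16 256
cache_replacement_policy heap GDSF
memory_replacement_policy heap GDSF
ipcache_size 8192
fqdncache_size 8192
cache_mem 128 MB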


-- 
Regards,
Arun S.


Re: [squid-users] google maps/earth/youtube caching in Squid3

2007-12-05 Thread Alex Rousskov
On Wed, 2007-12-05 at 15:18 +0100, Matus UHLAR - fantomas wrote:
> On 26.11.07 15:31, Adrian Chadd wrote: 
> > I've implemented some changes to Squid-2.HEAD which will allow certain
> > stuff to be cached which couldn't be in the past. The first two things I'm
> > going to try and concrete the support for is google maps/earth (web only)
> > and Youtube.
> 
> I wonder why you didn't make this change to squid-3. Is it already there?
> If not, do you plan to push that into squid-3?
> 
> I'm interested in this feature, but also in ICAP (because of virus/phish
> filtering)

It is probably not too late to port Adrian's changes for Squid v3.1
inclusion. Somebody will either need to submit a patch or pay for the
work though. Current Squid3 roadmap is available at
http://wiki.squid-cache.org/RoadMap/Squid3

Alex.




Re: [squid-users] looking for testers: google maps/earth/youtube caching

2007-12-05 Thread Adrian Chadd
Squid-3 has enough "stuff" going on in it at the moment. It needs to stabilise,
be released and have some architectural decisions made before new features
like this are included.

That, and I funded myself to implement this feature, and I feel more comfortable
working on squid-2 at the moment. I don't want to break Squid-3 and not be
able to fix it.


Adrian

On Wed, Dec 05, 2007, Matus UHLAR - fantomas wrote:
> On 26.11.07 15:31, Adrian Chadd wrote:
> > I don't know if people understood my last email about the StoreUrlRewrite
> > changes I've made to squid-2.HEAD, so I'll just be really clear this time
> > around.
> > 
> > I've implemented some changes to Squid-2.HEAD which will allow certain
> > stuff to be cached which couldn't be in the past. The first two things I'm
> > going to try and concrete the support for is google maps/earth (web only)
> > and Youtube.
> 
> I wonder why you didn't make this change to squid-3. Is it already there?
> If not, do you plan to push that into squid-3?
> 
> I'm interested in this feature, but also in ICAP (because of virus/phish
> filtering)
> -- 
> Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
> Warning: I wish NOT to receive e-mail advertising to this address.
> Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
> "Two words: Windows survives." - Craig Mundie, Microsoft senior strategist
> "So does syphillis. Good thing we have penicillin." - Matthew Alton

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


[squid-users] Intermittent group failure

2007-12-05 Thread Paul Cocker
My cache.log is peppered with

/mswin_check_lm_group.exe NetUserGetGroups() failed.'

I thought this was due to a poor response time because our AD servers
needed some more memory. However, their memory has been doubled, their
CPUs are not overly taxed and still I'm seeing this error throughout
cache.log. NTLM authentication is working (so it's not a total failure),
and only three of my four check_lm_group processes have been used... so
I'm a bit stuck. I can't see the bottleneck or likely failure point.

The config line is as follows (to check the formatting):

d:/squid2616/libexec/mswin_check_lm_group.exe -D cd -G

I believe the domain shorthand is the correct format, yes?

Does the authenticator respect site boundaries, or is it possible it's
trying to travel the WAN?

Paul Cocker
IT Systems Administrator







Re: [squid-users] Fwd: Problem with squid and Skype 3.5

2007-12-05 Thread Jakob Curdes

Leandro Ferrrari schrieb:


Besides, I installed 3Proxy with SOCKS and the problem persists.
  

If you want to say "I tried to use 3Proxy instead of squid and the problem 
persists", then you have just shown that it is not a squid problem!
Generally, skype is very good at playing tricks on the firewall. The 
proxy is the least problematic part here. Skype will try to use HTTP, 
HTTPS or even tricky UDP protocols to route traffic through the 
firewall; it is even reported to get incoming connections going on an 
outbound-only firewall setup by keeping a UDP session open all the time. 
With this background, it is unpredictable how long a connection 
persists. Besides, a skype PC connects to arbitrary "servers" in arbitrary 
places via HTTPS, which is VERY UGLY for a security advisor. In short, 
you cannot really track down what it does exactly, as it is all a "black 
box", and so you cannot really adapt your system to it. Another point to 
make is that it is hard to keep TCP sessions up for hours on end; you need 
a controlled environment for that, which you do not have between two 
skype sites over the internet.


Hope this helps,

Jakob Curdes



Re: [squid-users] FTP through Squid and pf.conf with load balancing dsl

2007-12-05 Thread Matus UHLAR - fantomas
On 04.12.07 10:54, Chris Robertson wrote:
> To make the server set up the data connection, passive FTP is the 
> correct choice (http://en.wikipedia.org/wiki/FTP#Connection_Methods).
> 
> Whether that makes the remote server any happier about the data 
> connection originating from a different IP from the control, I can't say.

I think you have misread it. The data connection is opened by the server
in an active/PORT connection. With a passive connection, the client opens both
connections (control and data), and in this case the server can reject the
data connection if the client makes it from a different IP.
-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
Fucking windows! Bring Bill Gates! (Southpark the movie)


Re: [squid-users] looking for testers: google maps/earth/youtube caching

2007-12-05 Thread Matus UHLAR - fantomas
On 26.11.07 15:31, Adrian Chadd wrote:
> I don't know if people understood my last email about the StoreUrlRewrite
> changes I've made to squid-2.HEAD, so I'll just be really clear this time
> around.
> 
> I've implemented some changes to Squid-2.HEAD which will allow certain
> stuff to be cached which couldn't be in the past. The first two things I'm
> going to try and concrete the support for is google maps/earth (web only)
> and Youtube.

I wonder why you didn't make this change to squid-3. Is it already there?
If not, do you plan to push that into squid-3?

I'm interested in this feature, but also in ICAP (because of virus/phish
filtering)
-- 
Matus UHLAR - fantomas, [EMAIL PROTECTED] ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
"Two words: Windows survives." - Craig Mundie, Microsoft senior strategist
"So does syphillis. Good thing we have penicillin." - Matthew Alton


[squid-users] Fwd: Problem with squid and Skype 3.5

2007-12-05 Thread Leandro Ferrrari
I have a problem with the interaction between Squid 2.6 and Skype 3.5:
Skype 3.5 reconnects its session every 2 or 3 hours.
The configuration of squid 2.6:

acl NOCACHESKYPE url_regex .skype
acl skype url_regex 0:443 1:443 2:443 3:443 4:443 5:443 6:443 7:443 8:443 9:443
cache deny NOCACHESKYPE
cache deny skype
always_direct allow skype

Besides, I installed 3Proxy with SOCKS and the problem persists.

Sincerely,
Ing. Leandro Ferrari


Re: [squid-users] process load

2007-12-05 Thread s f
Hi,

Adrian, though being a newbie I know about that, though not perfectly :D.

I basically have a 2.5 GHz computer with 512 MB RAM. With that number
of users hitting squid, how many dstdomain ACLs are considered safe?

I know I can check the process load by adding a few ACLs at a time. But I
can't ask management each week to replace the CPU or memory.

So I am here for help. You guys must have a hell of a lot of experience;
please tell me rough figures.

Regards
Rishav Upadhaya
Future System Administrator
Current Support Officer

On 12/5/07, Amos Jeffries <[EMAIL PROTECTED]> wrote:
> Adrian Chadd wrote:
> > ACLs are evaluated short-circuit. If you have this:
> >
> > acl clientA src 1.2.3.0/24
> > acl clientB src 1.2.4.0/24
> > acl youtube (expensive regexp)
> > acl microsoft (expensive regexp)
> >
> > http_access deny clientA youtube
> > http_access deny clientB microsoft
> >
> > the http_access lines are evaluated in order from top to bottom, and stop 
> > being
> > evaluated across each http_access line if one of the ACLs fails.
> >
> > So the expensive youtube regexp ACL will only be processed by requests from 
> > clientA.
> > Requests from clientB won't ever hit the youtube ACL lookup.
> >
> > If you know how to craft ACLs then you can avoid almost all of the 
> > penalties.
> >
> > Adrian
>
> Adrian! stop encouraging the regexp-addicts. :-)
>
> We're trying to wean them off the unnecessary use of slow ACL remember? ;)
>
> Amos
>


Re: [squid-users] Object caching question

2007-12-05 Thread Amos Jeffries

Santiago Del Castillo wrote:

Hi list!!

Does squid let you configure whether the following URLs map to the same
object or different objects in the cache?

http://www.example.com/example.jpg
http://www.example.com//example.jpg



By the strict standard they are different URIs if they ever get passed to 
the proxy. In practice the user-agent is supposed to strip the double // 
down to a single / in HTTP and some other protocols. Squid may or may not 
strip it down as well.


Adrian's recent work with a store URL re-writer lets you create a custom 
URI re-mapper to point both URIs at the same place and thus store them as 
the same file. See the GoogleEarth/Maps threads for more info.
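A sketch of the wiring (the storeurl feature in Squid-2.HEAD; the helper path is illustrative, and the helper must echo a canonical URL for each URL it reads on stdin):

storeurl_rewrite_program /usr/local/bin/store-url-rewriter
acl double_slash urlpath_regex ^//
storeurl_access allow double_slash
storeurl_access deny all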



Amos


[squid-users] Object caching question

2007-12-05 Thread Santiago Del Castillo
Hi list!!

Does squid let you configure whether the following URLs map to the same
object or different objects in the cache?

http://www.example.com/example.jpg
http://www.example.com//example.jpg

Thanks!!
Santiago

-- 
Santiago del Castillo
System Administrator
FNBOX Ventures Inc.
ARG: +54.11.5258.4202
[EMAIL PROTECTED]
http://www.fnbox.com


Re: [squid-users] process load

2007-12-05 Thread Amos Jeffries

Adrian Chadd wrote:

ACLs are evaluated short-circuit. If you have this:

acl clientA src 1.2.3.0/24
acl clientB src 1.2.4.0/24
acl youtube (expensive regexp)
acl microsoft (expensive regexp)

http_access deny clientA youtube
http_access deny clientB microsoft

the http_access lines are evaluated in order from top to bottom, and stop being
evaluated across each http_access line if one of the ACLs fails.

So the expensive youtube regexp ACL will only be processed by requests from 
clientA.
Requests from clientB won't ever hit the youtube ACL lookup.

If you know how to craft ACLs then you can avoid almost all of the penalties.

Adrian


Adrian! stop encouraging the regexp-addicts. :-)

We're trying to wean them off the unnecessary use of slow ACL remember? ;)

Amos


Re: [squid-users] Issue with CNN Video

2007-12-05 Thread Amos Jeffries

Scott Anctil wrote:

I have a Squid 2.6 system that has been working perfectly for the past month. 
(or so I thought) I have users complaining that when they go to the video 
section on CNN and attempt to watch videos they occasionally get a message in 
the video window that says “The video system was not able to establish 
connectivity due to a Proxy/Firewall or network connectivity.” If they click 
the same link again it works. They also have complained that the video takes 
3-5 seconds to start when it is almost instantaneous when not using the proxy.

None of these issues are experienced when not using the proxy.

Any ideas?



Try grabbing the URI using squidclient and see if it shows any unusual 
headers. This is usually caused by broken streaming servers trying 
HTTP/1.1 stuff when they should not.
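A sketch of that check (the proxy address and URL are illustrative):

squidclient -h 127.0.0.1 -p 3128 "http://www.cnn.com/video/some-clip" | head -20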


It might also be CNN's cluttered pages hogging squid resources, with 
uncachable content bottle-necking under all your users.


From here it takes 45 sec to load all 124 javascript files, including the 
40 KB front page script. The flash-video advertising adds a further 15 
sec each.
All their content is marked uncachable and pre-expired. That alone would 
make squid re-fetch the lot for each client request.


Amos