Re: [squid-users] squid 2.6, wccp and tproxy

2008-06-01 Thread Anton
For very light use... Even a single PC would experience 
problems with squid 3.1 and TPROXY 4.1... I switched back 
to 2.6.20.21+cttproxy and squid 2.6STABLE20 for the time 
being.

On Friday 30 May 2008 09:05, Amos Jeffries wrote:
> > That is interesting to note, and part of where my
> > problem lies. Given the way the files are marked on the
> > balabit site, I would not have known of the support
> > versions and differences. I just downloaded the patches
> > for the versions of squid, iptables, and kernel I was
> > using.
>
> So you have the Balabit 2.6s18 patch mentioned at
>  http://wiki.squid-cache.org/Features/TproxyUpdate
>
> > During the setup of the software, so far anyway, I have
> > not seen ways to specify the version of Tproxy, etc.
> > The initial tproxy README file I was using must have
> > been an older version because it didn't use the
> > difference in iptables table names that the newer
> > README mentions, and that someone was gracious enough
> > to point out to me on the TPROXY listserv.
>
> It's a little bit tricky at present; Balabit no longer
> supports v2.2 and I don't know if/where one would get the
> necessary patches.
>
> Squid-2 performs detection at configure time with
> --enable-tproxy to see if its supported tproxy method is
> available, disabling tproxy support and warning if it's not.
> The configure log, I believe, should tell you whether it
> succeeded or failed.
>
> Unless you are able to use the old version, I don't think it
> will succeed though. You may need to migrate to 3-HEAD;
> it's beta-testing code, but stable enough for light use.
>
> Amos
>
> > Once I get Tproxy working, I would love to contribute
> > docs to the squid project.
> >
> > On the Tproxy enabled system I have now, which is the
> > same unit as my working WCCP/Squid 2.6 boxes now, WCCP
> > does not seem to be redirecting traffic to the squid
> > box. I am sure it is something I have done wrong, and
> > will figure out, but I wanted to be sure the end result
> > was possible before spending more time on the project.
> >
> > I am currently using the following for my TPROXY setup:
> >
> > CentOS 5.1 x86_64
> > Squid 2.6 STABLE 18 (custom compiled)
> > iptables 1.4.0 (custom compiled)
> > kernel 2.6.25.4 (custom compiled)
> > tproxy-iptables-1.4.0-20080521-113954-1211362794.patch
> > tproxy-kernel-2.6.25-20080519-165031-1211208631.tar.bz2
> > tproxy-squid-2.6-STABLE18.20080304-110716-1204625236.patch
> >
> >
> > BTW - to Henrik, I was aware of a websense piece that
> > ran on a linux/windows based Squid box running squid
> > 2.5. The issues I currently have with that are:
> >
> > 1) Is the squid agent free to enterprise users? (I
> > posed this question to our sales rep)
> > 2) Does it support Squid 2.6, or only 2.5?
> > 3) Does it truly change the reporting such that
> > original client IPs can be seen, or does it just fetch
> > enforcement policies?
> >
> >
> >
> > -Original Message-
> > From: Amos Jeffries [mailto:[EMAIL PROTECTED]
> > Sent: Thursday, May 29, 2008 7:12 AM
> > To: Ritter, Nicholas
> > Cc: Adrian Chadd; squid-users@squid-cache.org
> > Subject: Re: [squid-users] squid 2.6, wccp and tproxy
> >
> > Ritter, Nicholas wrote:
> >> In websense the client IP addresses that show up are
> >> those of the
> >
> > squid boxes I have deployed. Websense does not utilize,
> > as far as I know, the x-forwarded-for header.
> >
> >> The doc on squid-cache.org about how to setup TPROXY
> >> with squid is a
> >
> > bit out of date because the latest version of tproxy
> > uses the mangle table and not a tproxy table.
> >
> >
> > The docs, as far as we know, are correct for all current
> > releases of Squid.
> > Unpatched Squid up to 3.1 still requires TPROXY v2.2; so
> > far only 3-HEAD/3.1 has proper integrated support for
> > TPROXY v4+.
> >
> > If you have any updates for the wiki regarding the
> > TPROXYv4 configs for when 3.1 is released, please point
> > out the variations.
> >
> > Amos
> >
> >> Nick
> >>
> >>
> >> -Original Message-
> >> From: Adrian Chadd [mailto:[EMAIL PROTECTED]
> >> Sent: Wed 5/28/2008 4:52 PM
> >> To: Ritter, Nicholas
> >> Cc: squid-users@squid-cache.org
> >> Subject: Re: [squid-users] squid 2.6, wccp and tproxy
> >>
> >> On Wed, May 28, 2008, Ritter, Nicholas wrote:
> >>> Can tproxy, squid 2.6, and wccp be used together?
> >>
> >> Yes.
> >>
> >>> I want to work around the hiding of the original
> >>> client ip because it
> >>>
> >>> is breaking websense. Any suggestions/comments?
> >>
> >> What do you mean?
> >>
> >>> Nick
> >
> > --
> > Please use Squid 2.7.STABLE1 or 3.0.STABLE6
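
A rough sketch of the configure-time check Amos describes above,
assuming a Squid 2.6 source tree in the current directory (the flag
spelling follows his message; check ./configure --help on your release):

./configure --enable-linux-netfilter --enable-tproxy 2>&1 | tee configure.log
# the result of the tproxy probe is recorded in the configure output
grep -i tproxy configure.log config.log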


[squid-users] Unable to have certain site to be non-cacheable and ignore already cached data

2008-10-16 Thread Anton
Hello!

I was trying for a few hours to make a certain site 
(http://www.nix.ru) non-cacheable - but squid always 
gives me an object which is in the cache!

My steps:

acl DIRECTNIX url_regex ^http://www.nix.ru/$
no_cache deny DIRECTNIX
always_direct allow DIRECTNIX

- But anyway - until I PURGED the required page with 
squidclient, it showed up as stale in the log - 
TCP_REFRESH_HIT - and I saw the old data.

Could anyone please give me a clue how to make a certain URL 
regex non-cacheable and make requests always go direct 
to the origin server, even if the object is already 
cached?

Regards,
Anton.


Re: [squid-users] Unable to have certain site to be non-cacheable and ignore already cached data

2008-10-16 Thread Anton
Just realized that I have 

reload_into_ims on

this was preventing me from refreshing the given page 
or site, since the refresh request was changed - but 
anyway - it should not affect no_cache, should it? 

On Thursday 16 October 2008 14:28, Anton wrote:
> BTW Squid 2.6STABLE20 - TPROXY2
>
> On Thursday 16 October 2008 13:49, Anton wrote:
> > Hello!
> >
> > I was trying for a few hours to make a certain site
> > (http://www.nix.ru) non-cacheable - but squid
> > always gives me an object which is in the cache!
> >
> > My steps:
> >
> > acl DIRECTNIX url_regex ^http://www.nix.ru/$
> > no_cache deny DIRECTNIX
> > always_direct allow DIRECTNIX
> >
> > - But anyway - until I PURGED the required page with
> > squidclient, it showed up as stale in the log -
> > TCP_REFRESH_HIT - and I saw the old data.
> >
> > Could anyone please give me a clue how to make a certain
> > URL regex non-cacheable and make requests always go
> > direct to the origin server, even if the object
> > is already cached?
> >
> > Regards,
> > Anton.


Re: [squid-users] Unable to have certain site to be non-cacheable and ignore already cached data

2008-10-16 Thread Anton
Thanks so much Henrik and Leonardo! 
Looks like I should learn regexes, since I took "$" to mean 
"whatever comes after" rather than end of string :)
Now it logs as TCP_MISS.
Thanks so much again!

On Thursday 16 October 2008 15:45, Leonardo Rodrigues 
Magalhães wrote:
> Anton wrote:
> > Hello!
> >
> > I was trying for a few hours to make a certain site
> > (http://www.nix.ru) non-cacheable - but squid
> > always gives me an object which is in the cache!
> >
> > My steps:
> >
> > acl DIRECTNIX url_regex ^http://www.nix.ru/$
> > no_cache deny DIRECTNIX
> > always_direct allow DIRECTNIX
>
> your ACL is too complicated for a pretty simple thing
> ... it has the 'begins with' flag (^) and the 'ends
> with' flag ($) as well. And it has a final slash too. So
> it seems that it would match exclusively
>
> http://www.nix.ru/
>
> and nothing else, including NOT matching
> 'http://www.nix.ru/index.htm',
> 'http://www.nix.ru/logo.jpg' and so on.
>
> if you want hints on doing regexps, I would give you
> a precious one: don't try to complicate things.
>
> acl DIRECTNIX url_regex -i www\.nix\.ru
>
> would do the job and would be much simpler to
> understand. And NEVER forget the case-insensitive (-i)
> flag on regexes.
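
A minimal sketch of the combination discussed in this thread - the
simplified ACL together with the cache and always_direct rules
("cache" is the Squid 2.6 name for the older no_cache directive):

# match the site anywhere in the URL, case-insensitively
acl DIRECTNIX url_regex -i www\.nix\.ru
# never store matching replies
cache deny DIRECTNIX
# always contact the origin server directly, bypassing any peers
always_direct allow DIRECTNIX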


Re: [squid-users] Unable to have certain site to be non-cacheable and ignore already cached data

2008-10-16 Thread Anton
BTW Squid 2.6STABLE20 - TPROXY2

On Thursday 16 October 2008 13:49, Anton wrote:
> Hello!
>
> I was trying for a few hours to make a certain site
> (http://www.nix.ru) non-cacheable - but squid
> always gives me an object which is in the cache!
>
> My steps:
>
> acl DIRECTNIX url_regex ^http://www.nix.ru/$
> no_cache deny DIRECTNIX
> always_direct allow DIRECTNIX
>
> - But anyway - until I PURGED the required page with
> squidclient, it showed up as stale in the log -
> TCP_REFRESH_HIT - and I saw the old data.
>
> Could anyone please give me a clue how to make a certain URL
> regex non-cacheable and make requests always go direct
> to the origin server, even if the object is already
> cached?
>
> Regards,
> Anton.


Re: [squid-users] YouTube and other streaming media (caching)

2008-11-08 Thread Anton
Hello!
Aside from the redirect bug, are there any known issues with 
YouTube caching? Would I risk too much if I put this on the 
production squid?

On Saturday 08 November 2008 18:31, Kinkie wrote:
> On Mon, Nov 3, 2008 at 6:32 PM, Horacio H. 
<[EMAIL PROTECTED]> wrote:
> > Hi everybody,
> >
> > regarding this issue:
> >
> > http://wiki.squid-cache.org/WikiSandBox/Discussion/YoutubeCaching
>
> For those interested, that page has been moved to
> http://wiki.squid-cache.org/ConfigExamples/DynamicContent/YouTube/Discussion


[squid-users] Why no cache revisited...

2008-05-09 Thread Anton Melser
Hi all,
Yet again I can't work out why things aren't getting cached... Can
someone see anything here? I have disabled the CGI no-cache
exception... I thought the cookies wouldn't be cached if I just left
the other standard values... In reality, I would like to get rid of
the cookies altogether, but if squid could just ignore them and cache
anyway I'd be just as happy... The app chain is squid -> apache ->
mod_jk -> tomcat. Or maybe it's not that?
I just get ...TCP_MISS/200... in the access.log, and ALLOWED entries
for both request and reply in the log.
Any ideas?
Thanks

http://www.myserver.com:9090/blah/blah/?nav_cat=852

GET /blah/blah/?nav_cat=852 HTTP/1.1
Host: www.myserver.com:9090
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; fr; rv:1.8.1.14)
Gecko/20080404 Firefox/2.0.0.14
Accept: 
text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
Accept-Language: fr,en;q=0.8,en-us;q=0.6,fr-fr;q=0.4,zh-cn;q=0.2
Accept-Encoding: gzip,deflate
Accept-Charset: UTF-8,*
Keep-Alive: 300
Connection: keep-alive
Cookie: prtl_2330=2334;
__utma=78053651.1678847336.1200999476.1210344620.1210346080.27;
__utmb=78053651; prtl_2048=2052; __qca=1200999474-97652213-3709607;
__utmz=78053651.1200999476.1.1.utmccn=(direct)|utmcsr=(direct)|utmcmd=(none);
JSESSIONID=31988C5DA79D01D8239A321C02316AA5; __qcb=1689579535;
__utmc=78053651; ns_cookietest=true; ns_session=true

HTTP/1.x 200 OK
Date: Fri, 09 May 2008 16:39:39 GMT
Server: Apache/2.2.3 (CentOS)
Set-Cookie: prtl_2330=2334; Expires=Sat, 10-May-2008 02:39:39 GMT
Cache-Control: public
Content-Type: text/html;charset=UTF-8
X-Cache: MISS from www.myserver.com
X-Cache-Lookup: MISS from www.myserver.com:9090
Via: 1.0 www.myserver.com:9090 (squid/2.6.STABLE6)
Connection: close

-- 
echo '16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlbxq' | dc
This will help you for 99.9% of your problems ...


Re: [squid-users] Why no cache revisited...

2008-05-14 Thread Anton Melser
2008/5/9 Henrik Nordstrom <[EMAIL PROTECTED]>:
>
>  On Fri 2008-05-09 at 18:49 +0200, Anton Melser wrote:
>
>
>  > HTTP/1.x 200 OK
>  > Date: Fri, 09 May 2008 16:39:39 GMT
>  > Server: Apache/2.2.3 (CentOS)
>  > Set-Cookie: prtl_2330=2334; Expires=Sat, 10-May-2008 02:39:39 GMT
>  > Cache-Control: public
>  > Content-Type: text/html;charset=UTF-8
>  > X-Cache: MISS from www.myserver.com
>  > X-Cache-Lookup: MISS from www.myserver.com:9090
>  > Via: 1.0 www.myserver.com:9090 (squid/2.6.STABLE6)
>  > Connection: close
>
>  There is no expiry information in this response, so by default Squid
>  will consider it stale..
>
>  You can tell Squid to cache this with a min-age refresh_pattern rule,
>  but it's much better if you teach the web server to return some
>  meaningful expiry information using Cache-Control: max-age=NN or
>  Expires:
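
A rough illustration of the refresh_pattern approach Henrik mentions
(the URL pattern and times are made up; the preferred fix is still to
have the origin send Cache-Control: max-age=NN or Expires):

# consider matching replies fresh for at least 60 minutes
refresh_pattern -i ^http://www\.myserver\.com/blah/ 60 20% 1440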

Set-Cookie: prtl_2330=2334; Expires=Sat, 10-May-2008 02:39:39 GMT

So this is the cookie that Expires and not the page then?
Thanks
Anton


-- 
echo '16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlbxq' | dc
This will help you for 99.9% of your problems ...


Re: [squid-users] Why no cache revisited...

2008-05-15 Thread Anton Melser
2008/5/14 Henrik Nordstrom <[EMAIL PROTECTED]>:
> On Wed, 2008-05-14 at 11:52 +0200, Anton Melser wrote:
>
>> Set-Cookie: prtl_2330=2334; Expires=Sat, 10-May-2008 02:39:39 GMT
>>
>> So this is the cookie that Expires and not the page then?

Thanks for all your help. I've got it caching now and while it will
mean removing some functionality from the site, access times are down
from 15 seconds to 1! I also think I'm beginning to understand better
how squid works which means I'll stop bothering you guys!
Cheers
Anton


Re: [squid-users] Why no cache revisited...

2008-05-16 Thread Anton Melser
> http://www.mnot.net/cache_docs/ is mandatory reading for any technical
> webmaster.

Ack, and cheers heaps!
Anton


[squid-users] cache only certain files?

2008-05-21 Thread Anton Melser
Hi,
I'm struggling to get the logic right for only caching certain pages -
it seems very easy to do the negative (don't cache ...) but the
converse doesn't seem possible... I must be missing something.
ie. I want to cache
www.mysite.com
www.mysite.com/hello/this.aspx?hi=there&you=there
www.mysite.com/good/by/my/friend/this.aspx?hi=there&you=there
www.mysite.com/images/test.gif

but not the rest.

Any ideas?
Thanks,
Anton


Re: [squid-users] cache only certain files?

2008-05-22 Thread Anton Melser
Thanks for that.
The problem is the following: I have an *extremely* complicated, large
and poorly written web application. While it was intended to cache
gracefully and properly, the coders that came before me (after the
base product but before the end of the guarantee! I'm now maintaining
it) just went ahead and wrote like pigs, not taking any of this
into consideration. Showing the homepage causes around 200 database
connections. The site is extremely slow... and with more than a few
hundred simultaneous users the site simply dies... and yet I have been
asked to find a solution, without spending the time necessary to redo
all the modules that don't allow proper caching. And with NO budget.
So we get rid of the parts of the site that simply can't be cached on
the most used pages, and for the rest, which is 2000-odd pages - 25,
let it do its 200 db connections.
Anyway, I was getting carried away. I DO have a need for this, and
it looks as though mod_cache is a much better choice for all the
caching I've ever needed to do so far (only reverse proxy acceleration
and load balancing) as my use case is taken into account in the most
basic way.
Thanks for all your help!
Best wishes,
Anton

2008/5/22 Henrik Nordstrom <[EMAIL PROTECTED]>:
> On Wed, 2008-05-21 at 17:11 +0200, Anton Melser wrote:
>> Hi,
>> I'm struggling to get the logic right for only caching certain pages -
>> it seems very easy to do the negative (don't cache ...) but the
>> converse doesn't seem possible... I must be missing something.
>
> To allow caching of only some URLs, allow those, then deny
> everything.
>
> The default is to cache. If it doesn't get cached then the content
> most likely does not want to be cached. If this is your problem then
> see the following:
>
> http://www.mnot.net/cache_docs/
> http://www.mnot.net/cacheability/
>
>
> Regards
> Henrik
>



-- 
echo '16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlbxq' | dc
This will help you for 99.9% of your problems ...


Re: [squid-users] cache only certain files?

2008-05-30 Thread Anton Melser
>> CacheEnable disk /these/pages
>> CacheEnable disk /those/pages/there
>
> acl cachable_pages urlpath_regex ^/these/pages
> acl cachable_pages urlpath_regex ^/these/other/pages
> cache allow cachable_pages

yep, that would do it!

> But the content must also be cachable for this to have any effect at
> all. The default is "cache allow all" which enables caching of all
> cachable content.
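
A rough sketch of the allow-then-deny pattern Henrik describes, with
hypothetical paths and an explicit final deny:

acl cachable_pages urlpath_regex ^/these/pages
acl cachable_pages urlpath_regex ^/these/other/pages
# store only what matches the allow-list above
cache allow cachable_pages
cache deny all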

The app is a bit stupid now (not the original codebase, what was added
after), and it's either all or nothing. Which is the problem... but
yes, it definitely sends all the right headers for caching, even
when it shouldn't (which is the problem!).

> If you also need to work around server responses which are indicated as
> not cachable then you need to play a bit with the refresh_pattern directive
> to override what the server says, or fix the application to return good
> Cache-Control headers.
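
Such an override would look roughly like this (pattern and times are
hypothetical; the exact set of override options varies between 2.6
releases, see squid.conf.default):

# treat matching replies as fresh for a day, ignoring the
# Expires/Last-Modified hints the server sends
refresh_pattern -i /these/pages 1440 100% 1440 override-expire override-lastmod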

... I guess I was just a bit confused with the allow - I had some
problems with it simply denying access (not forwarding)... again the
problem being I don't work enough with reverse proxy caching on *nix
to get past the minimum level needed to be comfortable. I just
discovered a particularly nasty bug that has existed in mod_cache
since apache 2.0.52 and that nobody seems to care about (aptmp* files
not getting cleaned up in the cache dir), so I might just switch back!
Thanks for all your help.
Anton


-- 
echo '16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlbxq' | dc
This will help you for 99.9% of your problems ...


[squid-users] debug_options reference

2008-06-06 Thread Anton Melser
Hi all,
I feel like a complete fool but I just can't seem to use the squid
docs... could someone point me to the list of sections? ALL,1 33,2
seems to be a common setting - but where is the doc that says what 33
is?!?
Cheers
Anton
ps. Do I have to read through the source for this?

-- 
echo '16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlbxq' | dc
This will help you for 99.9% of your problems ...


Re: [squid-users] debug_options reference

2008-06-09 Thread Anton Melser
2008/6/6 Henrik Nordstrom <[EMAIL PROTECTED]>:
> On Fri, 2008-06-06 at 18:56 +0200, Anton Melser wrote:
>> Hi all,
>> I feel like a complete fool but I just can't seem to use the squid
>> docs... could someone point me to the list of sections? ALL,1 33,2
>> seems to be a common setting - but wtf is the doc that says what 33
>> is?!?
>
> doc/debug-sections.txt in the source distribution. Also printed at the
> top of each source file.
>
> The recommended default is ALL,1 unless you get told to increase some
> debugging by a developer looking into some problem for you.

I can't seem to get a page to cache. If I just leave the default (all)
it will cache; if I explicitly define an acl and do a
cache deny !mylist
then some of the pages in my list will cache but not this one. It
seems to get matched (I set ALL,3) but there is so much info I was
getting lost. I was trying to help myself instead of continuing to
bother you guys!
Cheers
Anton

-- 
echo '16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlbxq' | dc
This will help you for 99.9% of your problems ...


Re: [squid-users] debug_options reference

2008-06-10 Thread Anton Melser
> "debug_options ALL,0 28,9"  will run you through which ACL are being tested
> and which ones are matching/failing.
> Search the log output for an exact quote of the squid.conf line you want to
> check. "cache deny !mylist" etc.
>
> The ACL is definately not matching the file then. Do you mind saying what
> the ACL config line is exactly? and some details about this file?

Hmm, I still think it was good to find out where the values are
listed... but after changing the way it was done slightly it is now
working. And how it is working! Damn this is good software. Are
there any simple tutorials for this sort of thing? I've always found
the squid docs to be a little too technical and particularly for this
area, the docs are woefully lacking. There isn't any mention of
cache allow /blah
in there! And a simple tutorial on the acl way of thinking could let a
lot of noobs do really powerful stuff without bothering you guys... I
finally went with a combination of
cache deny !cachable_pages
and
cache deny non_cachable_pages
That allows only the pages I want AND gives me a second level of
filtering (for disallowing /.*jsessionid.* for example). It's
absolutely perfect! But I don't think I should have had to bother
anyone about this and searching google for quite a while led to no
useful results. If no one can point to a useful tutorial I might just
write a little something...
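
Roughly, the combination described above looks like this in squid.conf
(the ACL definitions themselves are hypothetical):

acl cachable_pages urlpath_regex ^/these/pages
acl non_cachable_pages urlpath_regex jsessionid
# only pages on the allow-list may be stored...
cache deny !cachable_pages
# ...and even those can be knocked out by a second, finer filter
cache deny non_cachable_pages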
Cheers
Anton


-- 
echo '16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlbxq' | dc
This will help you for 99.9% of your problems ...


[squid-users] cache_mem or let the kernel handle it?

2008-06-10 Thread Anton Melser
Hi,
When going through mod_cache before finally coming back to squid, they
talk about the fact that it can actually be better to use a disk cache
than a mem cache. The reason being that the kernel caches files, and
does so very well... I have pumped up the cache_mem to 1GB and the
cache disk usage to 5GB, as I'm using a machine that is doing only
this (+ mod_jk), and has plenty of resources... I have to admit it
seems quite a bit faster than mod_cache was, though that is probably
just because I have the possibility to cache more (by being able to use
regexes to exclude very precisely the things I don't want cached...), but are
there any thoughts on this?
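
For reference, the settings described above correspond roughly to
squid.conf lines like these (store type, directory and L1/L2 values
are hypothetical):

cache_mem 1024 MB
cache_dir aufs /var/spool/squid 5120 16 256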
Cheers
Anton

-- 
echo '16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlbxq' | dc
This will help you for 99.9% of your problems ...


Re: [squid-users] debug_options reference

2008-06-10 Thread Anton Melser
>
> All access lists (those directives accepting allow/deny lists) works in
> the same manner:
>
> http://wiki.squid-cache.org/SquidFaq/SquidAcl#head-926288cb0cbbdea92bc4a807f06dd75ddbc446ff

Yeah, I get that, but there's nothing like a few pertinent examples to
help... and for people that do squid rarely or are just wanting to
solve a problem and are told "squid is the best", it can be far more
difficult than it needs to be. It very probably is the best, but the
docs are very full-time-admin oriented, IMHO...
Cheers
Anton


Re: [squid-users] cache_mem or let the kernel handle it?

2008-06-12 Thread Anton Melser
2008/6/11 Henrik Nordstrom <[EMAIL PROTECTED]>:
> On Tue, 2008-06-10 at 10:18 +0200, Anton Melser wrote:
>
>> When going through mod_cache before finally coming back to squid, they
>> talk about the fact that it can actually be better to use a disk cache
>> than a mem cache. The reason being that the kernel caches files, and
>> does so very well...
>
> For Squid it's a complex equation, but if your site is mostly small
> objects (max some hundreds KB) and of reasonably limited size then
> boosting up cache_mem is a benefit.

Thanks for that. For some reason I'm not surprised it's complicated!
In any case, the site is now so fast (and doesn't cache things it
shouldn't) with squid that changing anything seems so pointless. We do
indeed have the situation you mention, so I'll keep it up where it is!
Cheers
Anton

-- 
echo '16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlbxq' | dc
This will help you for 99.9% of your problems ...


[squid-users] All ntlmauthenticator processes are busy.

2005-08-23 Thread Anton Podrezov
Hello.
Squid crashes with "All ntlmauthenticator processes are busy".
Can anybody help me?

Squid Version: 2.5.Stable10
OS: Linux 2.4.30
Samba Version: 3.0.14a

2005/08/16 16:16:00| storeDirWriteCleanLogs: Starting...
2005/08/16 16:16:00| WARNING: Closing open FD  102
2005/08/16 16:16:00| 65536 entries written so far.
2005/08/16 16:16:00|131072 entries written so far.
2005/08/16 16:16:00|   Finished.  Wrote 142188 entries.
2005/08/16 16:16:00|   Took 0.1 seconds (2152276.6 entries/sec).
FATAL: Too many queued ntlmauthenticator requests (201 on 40)
Squid Cache (Version 2.5.STABLE10): Terminated abnormally.
CPU Usage: 143.360 seconds = 33.310 user + 110.050 sys
Maximum Resident Size: 0 KB
Page faults with physical i/o: 712
Memory usage for squid via mallinfo():
total space in arena:   35148 KB
Ordinary blocks:34999 KB 42 blks
Small blocks:   0 KB  0 blks
Holding blocks:  2044 KB  1 blks
Free Small blocks:  0 KB
Free Ordinary blocks: 149 KB
Total in use:   37043 KB 105%
Total free:   149 KB 0%
2005/08/16 16:16:10| Starting Squid Cache version 2.5.STABLE10 for 
i486-slackware-linux-gnu...


Re: [squid-users] Bandwidth Savings

2006-05-10 Thread Anton Glinkov
Hello,

If your squid is on a separate machine (not used for anything else but web
caching) and has only one network interface (to users and to internet),
you can monitor the outgoing and incoming bandwidth of the device. The
difference between them is the bandwidth saved by squid. I do that by
drawing an RRD graph. A similar graph can be made if you have two interfaces
(one to users, the other to internet -> users' outgoing bw minus internet
incoming bw), but those must be used only by squid.
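
A rough sketch of the kind of RRD bookkeeping described above (file
names, intervals and the source of the byte counters are made up; the
counters could come from SNMP or /proc/net/dev):

# one data source per direction, sampled every 5 minutes
rrdtool create squid-bw.rrd --step 300 \
  DS:in:COUNTER:600:0:U DS:out:COUNTER:600:0:U \
  RRA:AVERAGE:0.5:1:8640
# fed periodically with the interface byte counters
rrdtool update squid-bw.rrd N:$BYTES_IN:$BYTES_OUT
# the saved bandwidth is the difference between the two rates
rrdtool graph savings.png DEF:in=squid-bw.rrd:in:AVERAGE \
  DEF:out=squid-bw.rrd:out:AVERAGE CDEF:saved=out,in,- \
  AREA:saved#00CC00:"bandwidth saved by squid"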

I hope this helps.

> Salut!
>
> Maybe this is a FAQ: How do I calculate/enumerate the bandwidth savings
> as a result of using a caching proxy (Squid in this case)?
>
> I'd like to come up with a report on the savings that can show how Squid
> is making the browsing experience better.
>
> Thanks in advance.
>

-- 
Anton Glinkov
network administrator




[squid-users] squid 2.6STABLE1 strips authentication headers

2006-07-20 Thread Anton Golubev

Hello list,

I wonder if it is proper behavior for squid to strip authentication
headers when it is configured as an accelerating proxy? I noticed this
after upgrading squid from 2.5STABLE14 to 2.6STABLE1.

Here is what is sent to squid:

GET /adm/ HTTP/1.0
User-Agent: Wget/1.8.2
Host: ctsv.engec.ru
Accept: */*
Connection: Keep-Alive
Authorization: Basic YW50b246MTIzMTIz

Here is what squid sends to the web server:

GET /adm/ HTTP/1.0
User-Agent: Wget/1.8.2
Host: ctsv.engec.ru
Accept: */*
Via: 1.0 himbeer1.engec.ru:80 (squid)
X-Forwarded-For: 85.142.33.28
Cache-Control: max-age=259200

Essential configuration from squid.conf:

http_port 85.142.33.28:80 vhost defaultsite=himbeer.engec.ru
cache_peer 127.0.0.1 parent 80 0 originserver

If it is a new behavior, it probably needs to be documented, since it
breaks many things for a lot of people.

Software: squid-2.6.STABLE1-20060711

Compilation options:

$ ./configure --bindir=/usr/local/sbin
--sysconfdir=/usr/local/etc/squid --datadir=/usr/local/etc/squid
--libexecdir=/usr/local/libexec/squid
--localstatedir=/usr/local/squid --enable-removal-policies=lru heap
--enable-storeio=aufs ufs diskd null --disable-wccp
--prefix=/usr/local --with-pthreads --enable-epoll


Sincerely,
Anton Golubev
ENGECON
St. Petersburg
Russia

AAG69-RIPE AAG28-RIPN


[squid-users] aufs vs diskd

2006-09-12 Thread Anton Glinkov
Hello,
I am running latest squid (2.6.STABLE3)
on a linux machine (latest stable 2.6 kernel)
and observed the following behavior:

when using aufs the store rebuilding is done MUCH faster compared to
diskd. But when using diskd, once the store is rebuilt the load average is
always lower than 1. With aufs it is usually above 2. The CPU time is
spent mostly in iowait. There is plenty of memory on the machine. It doesn't run
any other services.

So which one should I use?
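
For reference, the two store types being compared are selected with
cache_dir lines of this general shape (path, size and diskd queue
values hypothetical):

cache_dir aufs /cache 10000 16 256
cache_dir diskd /cache 10000 16 256 Q1=64 Q2=72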

-- 
Anton Glinkov
network administrator



[squid-users] basic authenticator hangs when squid often receive logrotate

2005-01-09 Thread Anton Golubev
Hello, colleagues!

I have trouble with the basic authenticator daemons. They hang and
users can't authenticate themselves anymore. After a while I have this
in my statistics:

Basic Authenticator Statistics:
program: /usr/local/bin/user_auth
number running: 11 of 5
requests sent: 2808
replies received: 2802
queue length: 0
avg service time: 95.88 msec

#   FD  PID # Requests  Flags   Time    Offset  Request
2   33  90292   2   AB S130031.644  0
nsokolova2 \n
2   36  90615   2   AB S123530.990  0   mknyish
\n
1   26  93367   5   AB S66203.355   0   stepa
\n
1   29  93656   3   AB S58148.235   0   stepa
\n
1   28  94210   2   AB S45955.031   0
ibarabash \n
1   34  94701   22  AB S35611.662   0
ibarabash \n
1   41  96230   57  A   0.097   0   (none)
2   42  96231   0   A   0.000   0   (none)
3   43  96232   0   A   0.000   0   (none)
4   44  96233   0   A   0.000   0   (none)
5   48  96234   0   A   0.000   0   (none)


Debug 84, 5 in squid.conf shows:

2005/01/09 02:00:00| helperShutdown: basicauthenticator #2 is BUSY.
2005/01/09 02:00:00| helperShutdown: basicauthenticator #2 is BUSY.
2005/01/09 02:00:00| storeDirWriteCleanLogs: Starting...

2005/01/09 04:00:00| helperShutdown: basicauthenticator #2 is
BUSY.
2005/01/09 04:00:00| helperShutdown: basicauthenticator #2 is BUSY.
2005/01/09 04:00:00| storeDirWriteCleanLogs: Starting...

2005/01/09 06:00:01| helperShutdown: basicauthenticator #2 is BUSY.
2005/01/09 06:00:01| helperShutdown: basicauthenticator #2 is BUSY.
2005/01/09 06:00:01| helperShutdown: basicauthenticator #1 is BUSY.
2005/01/09 06:00:01| storeDirWriteCleanLogs: Starting...

And so on...

My opinion is that the "hanging" of the authenticator occurs just after
receiving the "squid logrotate" signal (which is sent every 2 hours via
cron for rapid log processing).

Please don't suspect the authenticator itself. It has existed for more
than 2 years already without a single problem (a simple SELECT ->
compare C program).

I also think that the only recent change in my system which could
trigger the problem is decreasing the "squid logrotate" period from 24
to 2 hours.
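
For illustration, the rotation schedule described corresponds to a cron
entry along these lines (binary path hypothetical):

# rotate squid's logs every 2 hours
0 */2 * * * /usr/local/sbin/squid -k rotate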

Some statistics:

Squid Object Cache: Version 2.5.STABLE5
Start Time: Fri, 07 Jan 2005 14:20:17 GMT
Current Time:   Sun, 09 Jan 2005 21:37:22 GMT

apfel# uname -a
FreeBSD apfel.engec.ru 5.2.1-RELEASE-p8 FreeBSD 5.2.1-RELEASE-p8 #1: Tue
Jun 15 00:18:07 MSD 2004
[EMAIL PROTECTED]:/usr/src/sys/i386/compile/APFEL  i386

Kernel is uniprocessor.

Thank you for any ideas how this problem can be solved!

--
Regards,
Anton Golubev
tel. +7 812 1185005 *7169
icq 70145498





RE: [squid-users] basic authenticator hangs when squid often receive logrotate

2005-01-10 Thread Anton Golubev
Hi Henrik,

Thank you for your prompt reply! I installed the latest squid with the
recommended patch (luckily the squid port is well maintained) and will
watch the result.

You are right that the "time" values from the helper statistics don't
exactly correspond to "rotate" calls. And it could be a problem within
the authenticator. I hope the patch will help squid kill stalled
helpers during "rotate".


Many thanks,
Anton





[squid-users] squid crashes rapidly with glibc errors

2007-04-27 Thread Anton Golubev
Hello squid users,

I'm trying to figure out why the usually quite stable squid started to
crash rapidly on a newly installed server with CentOS 4.4. Time to
crash is as short as several seconds when a moderate request load is
applied. Relevant log records follow:

2007/04/25 17:42:54|   Validated 10907 Entries
2007/04/25 17:42:54|   store_swap_size = 107484k
2007/04/25 17:42:54| storeLateRelease: released 0 objects
*** glibc detected *** corrupted double-linked list: 0x0a378818 ***


2007/04/25 17:46:31| Beginning Validation Procedure
2007/04/25 17:46:31|   Completed Validation Procedure
2007/04/25 17:46:31|   Validated 11022 Entries
2007/04/25 17:46:31|   store_swap_size = 108704k
2007/04/25 17:46:31| storeLateRelease: released 0 objects

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread -1208784224 (LWP 3473)]
0x004e71f1 in calloc () from /lib/tls/libc.so.6
(gdb) backtrace
#0  0x004e71f1 in calloc () from /lib/tls/libc.so.6
#1  0x080da383 in xcalloc (n=1, sz=20) at util.c:561
#2  0x08096ea2 in memPoolAlloc (pool=0x8b485f8) at MemPool.c:295
#3  0x0804ed93 in aclMatchAclList (list=0x8b53a88, checklist=0x8e65da0)
at acl.c:1442
#4  0x0804f129 in aclCheck (checklist=0x8e65da0) at acl.c:2154
#5  0x080d2e39 in authenticateDigestHandleReply (data=0x8e71a80,
reply=0x8b67e38 "be7acf8db7345b0fd24e106bef45896d")
at digest/auth_digest.c:923
#6  0x08082e38 in helperHandleRead (fd=8, data=0x8b67de0) at
helper.c:769
#7  0x0806db28 in comm_select (msec=10) at comm_generic.c:264
#8  0x0809556d in main (argc=2, argv=0xbfe0b934) at main.c:837
(gdb)


2007/04/25 17:47:14| 1 Duplicate URLs purged.
2007/04/25 17:47:14| 0 Swapfile clashes avoided.
2007/04/25 17:47:14|   Took 0.4 seconds (29390.2 objects/sec).
2007/04/25 17:47:14| Beginning Validation Procedure
2007/04/25 17:47:14|   Completed Validation Procedure
2007/04/25 17:47:14|   Validated 11022 Entries
2007/04/25 17:47:14|   store_swap_size = 108704k
2007/04/25 17:47:15| storeLateRelease: released 0 objects

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread -1208554848 (LWP 3485)]
0x004e71f1 in calloc () from /lib/tls/libc.so.6
(gdb) backtrace
#0  0x004e71f1 in calloc () from /lib/tls/libc.so.6
#1  0x080da383 in xcalloc (n=1, sz=20) at util.c:561
#2  0x08096ea2 in memPoolAlloc (pool=0xa2105f8) at MemPool.c:295
#3  0x0804ed93 in aclMatchAclList (list=0xa21bbb8, checklist=0xa52dbf0)
at acl.c:1442
#4  0x0804f129 in aclCheck (checklist=0xa52dbf0) at acl.c:2154
#5  0x080d2e39 in authenticateDigestHandleReply (data=0xa538090,
reply=0xa22fe38 "10e9e6568dcd9030053a9f347a4814bf")
at digest/auth_digest.c:923
#6  0x08082e38 in helperHandleRead (fd=8, data=0xa22fde0) at
helper.c:769
#7  0x0806db28 in comm_select (msec=10) at comm_generic.c:264
#8  0x0809556d in main (argc=2, argv=0xbff037d4) at main.c:837



Configure options are the following:

configure options: '--bindir=/usr/local/sbin'
'--sbindir=/usr/local/sbin' '--datadir=/usr/local/etc/squid'
'--libexecdir=/usr/local/libexec/squid'
'--localstatedir=/usr/local/squid' '--sysconfdir=/usr/local/etc/squid'
'--enable-auth=basic ntlm digest' '--enable-basic-auth-helpers=NCSA PAM
MSNT SMB YP' '--enable-digest-auth-helpers=password'
'--enable-external-acl-helpers=ip_user unix_group wbinfo_group'
'--enable-ntlm-auth-helpers=SMB' '--enable-delay-pools' '--enable-snmp'
'--enable-htcp' '--enable-forw-via-db' '--enable-cache-digests'
'--enable-wccpv2' '--enable-err-languages=English German Russian-1251
Russian-koi8-r' '--enable-default-err-language=English'
'--prefix=/usr/local'

Squid Cache: Version 2.6.STABLE12-20070424. The same behavior occurs
with the 2.6.STABLE12 release.

Operating system: CentOS 4.4 with kernel 2.6.9-42.EL #1 Sat Aug 12 09:17:58.

# ldd /usr/local/sbin/squid
libcrypt.so.1 => /lib/libcrypt.so.1 (0x00609000)
libm.so.6 => /lib/tls/libm.so.6 (0x005b)
libnsl.so.1 => /lib/libnsl.so.1 (0x00639000)
libc.so.6 => /lib/tls/libc.so.6 (0x00483000)
/lib/ld-linux.so.2 (0x00465000)


I suspect there is some incompatibility between squid and the libc in
use in CentOS 4.4, e.g.:

[EMAIL PROTECTED] devel]# rpm -qf /lib/tls/libc.so.6
glibc-2.3.4-2.25


What would be my next steps to debug and fix it?


Kind regards,
Anton Golubev





[squid-users] browser (and access.log) says access denied but cache.log says it's ok?!?

2007-05-16 Thread Anton Melser

Hi,
I have searched high and low for this, and can't get anywhere!!! I am
using 2.6.STABLE5 (standard debian etch package).
I am trying to get squid to accelerate both a local apache and a
distant apache (I only want accelerating, nothing else).
If I set squid up on 3128 (with both local and distant apache on 80),
then everything works fine. However, when I set up squid on 80 and
local apache on either 81 (or whatever) or 127.0.0.1:80 then for the
local site I get an access denied.

ERROR
The requested URL could not be retrieved

While trying to retrieve the URL: http://lesite.org/

The following error was encountered:

  * Access Denied.

Access control configuration prevents your request from being
allowed at this time. Please contact your service provider if you feel
this is incorrect.

Your cache administrator is webmaster.
Generated Wed, 16 May 2007 17:59:44 GMT by lesite.org (squid/2.6.STABLE5)

In the access.log I get :

1179338384.598  0 ip_address_of_machine TCP_DENIED/403 1568 GET
http://lesite.org/ - NONE/- text/html
1179338384.598  9 firwall_ip TCP_MISS/403 1766 GET
http://lesite.org/ - DIRECT/172.16.116.1 text/html

But putting debug_options ALL,1 33,2
In cache.log I get
2007/05/16 19:59:44| The request GET http://lesite.org/ is ALLOWED,
because it matched 'sites_server_2'
2007/05/16 19:59:44| The request GET http://lesite.org/ is ALLOWED,
because it matched 'sites_server_2'
2007/05/16 19:59:44| WARNING: Forwarding loop detected for:
Client: anip http_port: an_ip.1:80
GET http://lesite.org/ HTTP/1.0
Host: lesite.org
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; fr; rv:1.8.1.3)
Gecko/20070309 Firefox/2.0.0.3
Accept: 
text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
Accept-Language: en,fr;q=0.8,fr-fr;q=0.5,en-us;q=0.3
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Cookie: LangCookie=fr;
Wysistat=0.9261449444734621_1179321228414%uFFFD6%uFFFD1179321268023%uFFFD2%uFFFD1179317769%uFFFD0.5886263648254288_1179223653760;
PHPSE
SSID=b0319d53833d11da790f5868f56c32e1; TestCookie=OK
Pragma: no-cache
Via: 1.1 lesite.org:80 (squid/2.6.STABLE5)
X-Forwarded-For: unip
Cache-Control: no-cache, max-age=259200
Connection: keep-alive

2007/05/16 19:59:44| The reply for GET http://lesite.org/ is ALLOWED,
because it matched 'QUERY'
2007/05/16 19:59:44| The reply for GET http://lesite.org/ is ALLOWED,
because it matched 'all'
2007/05/16 19:59:52| Preparing for shutdown after 2 requests

Can someone tell me what is going on here? I have tried pretty much
everything I can think of with no luck, and the boss is getting mighty
impatient!
Cheers
Anton


Re: [squid-users] browser (and access.log) says access denied but cache.log says it's ok?!?

2007-05-16 Thread Anton Melser

On 16/05/07, Chris Robertson <[EMAIL PROTECTED]> wrote:

Anton Melser wrote:
> Hi,
> I have searched high and low for this, and can't get anywhere!!! I am
> using 2.6.STABLE5 (standard debian etch package).
> I am trying to get squid to accelerate both a local apache and a
> distant apache (I only want accelerating, nothing else).
> If I set squid up on 3128 (with both local and distant apache on 80),
> then everything works fine. However, when I set up squid on 80 and
> local apache on either 81 (or whatever) or 127.0.0.1:80 then for the
> local site I get an access denied.

When you change what port Apache is listening on, did you just change
the http_port, or did you specify an IP as well in the squid.conf?  Did
you change the cache_peer line in Squid? Just asking because...

> 2007/05/16 19:59:44| WARNING: Forwarding loop detected for:
> Client: anip http_port: an_ip.1:80

...this looks like it could be caused by one (or both) of those.

>
> Can someone tell me what is going on here? I have tried pretty much
> everything I can think of with no luck, and the boss is getting mighty
> impatient!
> Cheers
> Anton

Have a peek at the FAQ entries on accelerator setups, if you haven't
already. http://wiki.squid-cache.org/SquidFaq/ReverseProxy/

Chris


Thanks Chris, I definitely changed the port (for the live sites, which I
put in my hosts file so as not to cause too much trouble...), and could
access the non-localhost sites with no problems. I tried both setting
a hostname and an IP with the ports - no luck - and had apache2
listening on 127.0.0.7:80 and *:81.
I had a very long look at the article mentioned (and you need the
right keywords to get to it!) but doing both local and distant reverse
proxying wasn't mentioned.
I followed the instructions on that page for one of my attempts (with
both squid and apache listening on 80, but one on localhost and one
external) but alas got exactly the same results.
I have seen mentions in various places of compiling without internal
DNS, but the vast bulk of the literature is on <=2.5, and 2.6 seems
pretty different (particularly for http acceleration), and I didn't
know whether this was desirable or necessary. Anyway, I will try a
couple of things with /etc/hosts, but I think it may be due to some
resolution issue.
Thanks for your input,
Anton


Re: [squid-users] browser (and access.log) says access denied but cache.log says it's ok?!?

2007-05-17 Thread Anton Melser

On 17/05/07, Henrik Nordstrom <[EMAIL PROTECTED]> wrote:

On Wed 2007-05-16 at 19:45 +0200, Anton Melser wrote:

> In the access.log I get :
>
> 1179338384.598  0 ip_address_of_machine TCP_DENIED/403 1568 GET
> http://lesite.org/ - NONE/- text/html
> 1179338384.598  9 firwall_ip TCP_MISS/403 1766 GET
> http://lesite.org/ - DIRECT/172.16.116.1 text/html

Your Squid is not using the cache_peer.

If you use that Squid as proxy then make sure to use never_direct for
your accelerated sites.
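
A rough sketch of what Henrik suggests here (the domain is reused from
the error message earlier in this thread; adjust to the real site list):

acl accel_sites dstdomain lesite.org
# never let requests for the accelerated sites bypass the cache_peers
never_direct allow accel_sites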


Anton make big stupid booboo! I had put the external IP + domainname
in /etc/hosts... and I guess this is the reason it was borking.
Putting 127.0.0.1 solved everything.
Thanks for you help!
Cheers
Anton


[squid-users] finding the reason why an object is cached or not...

2007-05-17 Thread Anton Melser

Hi,
I am trying to understand why certain objects are cached and others
not (in a web accelerator setup). For example, squid is caching
http://my.site.org/images/hello.gif
but not
http://my.site.org/js/hello.js
Both files are well under any limits (< 2KB)
But for another site, pretty much everything is getting cached, i.e.,
http://my.othersite.org/images/hello.gif
and
http://my.othersite.org/js/hello.js
are giving hits.
The only difference I could see in the headers (as reported via the
firefox plugin httpheaders) is that the first js is gzipped. Could
that be it? I tried disabling the apache gzip exclusion thingie
#acl apache rep_header Server ^Apache
#broken_vary_encoding allow apache
But that didn't seem to change anything. Does anyone have any pointers?
Cheers
Anton


[squid-users] Re: finding the reason why an object is cached or not...

2007-05-17 Thread Anton Melser

On 17/05/07, Anton Melser <[EMAIL PROTECTED]> wrote:

Hi,
I am trying to understand why certain objects are cached and others
not (in a web accelerator setup). For example, squid is caching
http://my.site.org/images/hello.gif
but not
http://my.site.org/js/hello.js
Both files are well under any limits (< 2KB)
But for another site, pretty much everything is getting cached, i.e.,
http://my.othersite.org/images/hello.gif
and
http://my.othersite.org/js/hello.js
are giving hits.
The only difference I could see in the headers (as reported via the
firefox plugin httpheaders) is that the first js is gzipped. Could
that be it? I tried disabling the apache gzip exclusion thingie
#acl apache rep_header Server ^Apache
#broken_vary_encoding allow apache
But that didn't seem to change anything. Does anyone have any pointers?
Cheers
Anton


Here is what httpheaders gives

http://www.my.domain.name/js/general.js

GET /js/general.js HTTP/1.1
Host: www.my.domain.name
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US;
rv:1.8.1.3) Gecko/20070309 Firefox/2.0.0.3
Accept: 
text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Cookie: 
Wysistat=0.9275272298400944_1179412290656%A72%A71179412348937%A72%A71179398121%A70.9578866789309586_1179398121265;
PHPSESSID=27ed1a75d667f05cc961ff76df8bee61

HTTP/1.x 200 OK
Date: Thu, 17 May 2007 15:31:02 GMT
Server: Apache/2.0.54 (Debian GNU/Linux) mod_python/3.1.3 Python/2.3.5
PHP/5.0.5-1 mod_perl/1.999.21 Perl/v5.8.7
X-Powered-By: PHP/5.0.5-1
Set-Cookie: PHPSESSID=27ed1a75d667f05cc961ff76df8bee61; path=/
Expires: Sat, 19 May 2007 17:31:02 GMT
Cache-Control: private
Pragma: cache
Last-Modified: 28/04/2006 17:22
Accept-Ranges: bytes
Content-Encoding: gzip
Vary: Accept-Encoding
Etag: "dd4dd40f580d39ccada206547449762b-1378461273"
Content-Length: 796
Content-Type: text/javascript
X-Cache: MISS from sub.my.domain.name
X-Cache-Lookup: MISS from sub.my.domain.name:80
Via: 1.0 sub.my.domain.name:80 (squid/2.6.STABLE5)
Connection: keep-alive
X-Antivirus: avast! 4
X-Antivirus-Status: Clean
--
... (favicons removed)

http://sub.my.domain.name/outline/js/outline.js

GET /outline/js/outline.js HTTP/1.1
Host: sub.my.domain.name
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US;
rv:1.8.1.3) Gecko/20070309 Firefox/2.0.0.3
Accept: 
text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Cookie: 
Wysistat=0.12772142043000478_1179404772640%uFFFD2%uFFFD1179404903875%uFFFD2%uFFFD1179396774%uFFFD0.9420937600452932_1179396774750;
LangCookie=fr

HTTP/1.x 200 OK
Date: Thu, 17 May 2007 13:24:46 GMT
Server: Apache/2.2.3 (Debian) mod_python/3.2.10 Python/2.4.4
PHP/5.2.0-8+etch1c2c1 mod_perl/2.0.2 Perl/v5.8.8
Last-Modified: Thu, 26 Apr 2007 09:27:14 GMT
Etag: "2c611-b70-9fc78080"
Accept-Ranges: bytes
Content-Length: 2928
Content-Type: application/x-javascript
Age: 11403
X-Cache: HIT from sub.my.domain.name
X-Cache-Lookup: HIT from sub.my.domain.name:80
Via: 1.0 sub.my.domain.name:80 (squid/2.6.STABLE5)
Connection: keep-alive
X-Antivirus: avast! 4
X-Antivirus-Status: Clean
--


[squid-users] squid and IP fun...

2007-05-21 Thread Anton Melser

Hi all,
I have managed to get squid to cache pretty much everything I need,
and am very glad (reverse proxying two sites)! However, there is an
extra complication...
So instead of changing the DNS to make everything pass via squid, we
wanted to do it via the cisco firewall (we have two sites on two
machines, with squid serving both from one of these machines). So I
added an extra ip address to eth0 (eth0:1), with the hope that squid
would be able to handle it. In the cache logs everything looks ok...
but the site simply returns nothing. Can anyone think of something
simple I am forgetting to do here?
It works fine if I set the value in my hosts file, so I thought it
would be ok just to use the firewall to redirect... Any ideas?
Thanks for your help.
Anton


[squid-users] Re: squid and IP fun...

2007-05-21 Thread Anton Melser

Sorry all, stupid question, squid wasn't listening on the second IP...
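
For the record, listening on a second address is just a matter of an
extra http_port line, roughly (addresses hypothetical):

http_port 192.0.2.10:80 vhost
http_port 192.0.2.11:80 vhost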
Cheers
Anton

On 21/05/07, Anton Melser <[EMAIL PROTECTED]> wrote:

Hi all,
I have managed to get squid to cache pretty much everything I need,
and am very glad (reverse proxying two sites)! However, there is an
extra complication...
So instead of changing the DNS to make everything pass via squid, we
wanted to do it via the cisco firewall (we have two sites on two
machines, with squid serving both from one of these machines). So I
added an extra ip address to eth0 (eth0:1), with the hope that squid
would be able to handle it. In the cache logs everything looks ok...
but the site simply returns nothing. Can anyone think of something
simple I am forgetting to do here?
It works fine if I set the value in my hosts file, so I thought it
would be ok just to use the firewall to redirect... Any ideas?
Thanks for your help.
Anton



[squid-users] best io-scheduler for squid

2007-05-26 Thread Anton Glinkov
Hello

Has anyone done any benchmarks on how different io-schedulers in 2.6
kernels (as, deadline, cfq, noop) affect squid performance?

Thank you.
-- 
Anton Glinkov
network administrator



[squid-users] squid doesn't seem to be updating...

2007-06-08 Thread Anton Melser

Hi all,
I have tried restarting squid, I have tried commenting out the line in
squid.conf, no joy. I have the following setup.
server 1
site a
site b
squid

server 2
site c
site d
site e

Squid is doing reverse proxying for all these sites, and until now
everything seemed to be working. Site c is the default site... and I
am asking myself if this is the problem. What I want to do is point
site e to server 1 (which has a copy of site e). Alas, it keeps
pointing to server 2. I tried removing the site completely from
squid.conf, but that didn't work! So I am thinking that squid is not
catching it and, as server 2 has the default site, is passing the
request on to server 2, which is using the headers to correctly serve
the page.
Can anyone give me a pointer or two? I am using 2.6stable5
Thanks
Anton


Re: [squid-users] squid doesn't seem to be updating...

2007-06-10 Thread Anton Melser

On 09/06/07, Henrik Nordstrom <[EMAIL PROTECTED]> wrote:

On Fri 2007-06-08 at 16:11 +0200, Anton Melser wrote:

> Squid is doing reverse proxy for all these sites, and until now
> everything seemed to be working. Site c is the default site... and I
> am asking myself if this is the problem. What I want to do is point
> site e to server 1 (which has a copy of site e). Alas, it keeps
> pointing to server 1.

So how have you configured cache_peer + cache_peer_access/domain?


http_port   172.16.116.1:80 defaultsite=www.c.org vhost
http_port   172.16.116.2:80 defaultsite=www.c.org vhost

cache_peer 172.16.16.1 parent 80 0 originserver name=server_1
acl sites_server_1 dstdomain www.c.org c.org
cache_peer_access server_1 allow sites_server_1

cache_peer 127.0.0.1 parent 80 0 originserver name=server_2
acl sites_server_2 dstdomain cartographie.bretagne-environnement.org
cache_peer_access server_2 allow sites_server_2
http_access allow sites_server_2

cache_peer 172.16.16.1 parent 80 0 originserver name=server_3
acl sites_server_3 dstdomain www.d.fr d.fr
cache_peer_access server_3 allow sites_server_3


cache_peer 127.0.0.1 parent 80 0 originserver name=server_4
acl sites_server_4 dstdomain a.d.fr
cache_peer_access server_4 allow sites_server_4
http_access allow sites_server_4

cache_peer 127.0.0.1 parent 80 0 originserver name=server_5
acl sites_server_5 dstdomain www.e.fr e.fr
cache_peer_access server_5 allow sites_server_5

visible_hostname a.c.org
...
It is the last config entry - server_5 - that used to point to
172.16.16.1 and for some reason keeps hitting it.
Cheers
Anton


Re: [squid-users] squid doesn't seem to be updating...

2007-06-10 Thread Anton Melser

On 10/06/07, Henrik Nordstrom <[EMAIL PROTECTED]> wrote:

On Sun 2007-06-10 at 13:32 +0200, Anton Melser wrote:
> visible_hostname a.c.org
> ...
> It is the last config - server_5 that was pointing to 172.16.16.1, and
> for some reason keeps hitting it.

What does access.log say?


1181547599.988142 10.10.10.10 TCP_CLIENT_REFRESH_MISS/200 1518 GET
http://www.e.fr/mambots/content/multithumb/lightbox/images/closelabel.gif
- DIRECT/172.16.16.1 image/gif
1181547599.997  8 10.10.10.10 TCP_CLIENT_REFRESH_MISS/200 3307 GET
http://www.e.fr/mambots/content/multithumb/lightbox/images/loading.gif
- DIRECT/172.16.16.1 image/gif

So it is clearly still redirecting to the old server... here is an
extract from cache.log

2007/06/11 09:43:29| The request GET
http://www.e.fr/mambots/content/multithumb/lightbox/images/closelabel.gif
is ALLOWED, because it matched 'our_networks1'
2007/06/11 09:43:29| The reply for GET
http://www.e.fr/mambots/content/multithumb/lightbox/images/closelabel.gif
is ALLOWED, because it matched 'all'

Could it be that I haven't set the access up right, and so it is not
being allowed by the right entry, and is being caught by the more
permissive (temporary) rule?
Cheers
Anton


[squid-users] good intro on the role of /etc/hosts with relation to cache_peer's?

2007-06-11 Thread Anton Melser

Hi,
I have realised my problem... I was trying to switch machines with a
site, but had forgotten to change /etc/hosts. Is this the perfect
example of the difference between compiling with and without the
internal dns option? I ask this because no matter what IP address I
put in the cache_peer, it still seems to look at /etc/hosts...
Is this right? Is there a reasonably short explanation of how this works?
Cheers
Anton


Re: [squid-users] squid doesn't seem to be updating...

2007-06-11 Thread Anton Melser

On 11/06/07, Henrik Nordstrom <[EMAIL PROTECTED]> wrote:

On Mon 2007-06-11 at 08:55 +0200, Anton Melser wrote:

> 1181547599.988142 10.10.10.10 TCP_CLIENT_REFRESH_MISS/200 1518 GET
> http://www.e.fr/mambots/content/multithumb/lightbox/images/closelabel.gif
> - DIRECT/172.16.16.1 image/gif

Hmm.. it's going DIRECT here, not using a cache_peer at all.

Are you using this Squid as a forward proxy? I thought you were setting
up a reverse proxy..
what do http_port say?

Any always_direct lines around?



So did I! Indeed, and removing it (always_direct allow all) didn't
have any unfortunate consequences :-). Now I am getting
FIRST_UP_PARENT, which is better I suppose?


If it's a forward proxy then you need never_direct to tell Squid to only
use peers, if not it may fall back on DIRECT.

(accelerator mode by default do not allow DIRECT unless forced by
always_direct)


As I mentioned in another post, I actually got it working... by
putting the values in /etc/hosts. It was sufficiently different for me
to want to do another post...
Thanks again, I think I am finally starting to get a clean squid.conf
file! I basically just put
allow all
everywhere to get things working, and now that I am starting to
understand how things work I am cleaning it up.
Cheers
Anton


Re: [squid-users] good intro on the role of /etc/hosts with relation to cache_peer's?

2007-06-12 Thread Anton Melser

On 12/06/07, Henrik Nordstrom <[EMAIL PROTECTED]> wrote:

On Mon 2007-06-11 at 18:23 +0200, Anton Melser wrote:
> Hi,
> I have realised my problem... I was trying to switch machines with a
> site, but had forgotten to change /etc/hosts. Is this the perfect
> example of the difference between compiling with and without the
> internal dns option?

No, both uses /etc/hosts first..

> I ask this because no matter what IP address I
> put in the cache_peer, it still seems to look at /etc/hosts...

That's because you have told Squid to completely ignore your cache_peers
by enabling always_direct, forcing Squid to go directly to the requested
site instead of forwarding it via the peers.

You do not need to define any hosts in /etc/hosts, unless you need these
in the host part of your cache_peer lines to keep your configuration
readable.. (the cache_peer host can be specified either by IP or name..)


Thanks Henrik, everything seems much clearer now! I got really
confused trying to put my file together not really knowing how things
worked and getting bits from various 2.6 and pre 2.6 examples. It is
working like a charm now though!
Cheers
Anton


[squid-users] split access log up for different sites?

2007-07-05 Thread Anton Melser

Hi,
I had a look but couldn't see any way to split up a log for different
sites being reverse proxied. Is this possible?
Cheers
Anton


Re: [squid-users] split access log up for different sites?

2007-07-09 Thread Anton Melser

On 06/07/07, Adrian Chadd <[EMAIL PROTECTED]> wrote:

On Thu, Jul 05, 2007, Anton Melser wrote:
> Hi,
> I had a look but couldn't see any way to split up a log for different
> sites being reverse proxied. Is this possible?

Squid-2.6 introduced the ability to use ACLs and multiple access log
lines to determine which log gets which requests.


Thanks for that. It's not entirely clear from the docs... So do I need
something like:

acl sitea dstdomain my.site.com

logformat sitea %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
access_log  /var/log/squid/combined.log sitea

This doesn't seem possible from the docs
(http://www.visolve.com/squid/squid30/logs.php) but the docs are for
squid 3!
Thanks
Anton


Re: [squid-users] split access log up for different sites?

2007-07-09 Thread Anton Melser

That's just perfect, thanks Henrik! Those 2.6 docs are going to be my friend!
Cheers
Anton

On 09/07/07, Henrik Nordstrom <[EMAIL PROTECTED]> wrote:

On Fri 2007-07-06 at 08:58 +0200, Anton Melser wrote:

> Thanks for that. It's not entirely clear from the docs... So do I need
> something like:
>
> acl sitea dstdomain my.site.com
>
> logformat sitea %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
> access_log  /var/log/squid/combined.log sitea

Almost..

access_log  /var/log/squid/combined.log sitea sitea

the first is the log format, the second the acl filtering what to log
there..


> This doesn't seem possible from the docs
> (http://www.visolve.com/squid/squid30/logs.php) but the docs are for
> squid 3!

See squid.conf.default for the right documentation for your Squid
version, or
http://www.squid-cache.org/Versions/v2/2.6/cfgman/access_log.html for
the online version for 2.6.

Regards
Henrik
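
Put together, a per-site log then looks roughly like this (domain,
format name and file path hypothetical):

acl sitea dstdomain my.site.com
logformat sitea %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
access_log /var/log/squid/sitea.log sitea sitea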




Re: [squid-users] split access log up for different sites?

2007-07-09 Thread Anton Melser

On 09/07/07, Henrik Nordstrom <[EMAIL PROTECTED]> wrote:

On Mon 2007-07-09 at 19:06 +0200, Anton Melser wrote:
> That's just perfect, thanks Henrik! Those 2.6 docs are going to be my friend!

It's the exact same text as you have in squid.conf.default..


yip, and the first thing I did was grep out all the comments... alas,
I spend about 0.005% of my time configuring squid, and have got used
to programmes with reasonably accessible online docs...
It's all good though, thanks!
Cheers
Anton


[squid-users] page not being cached... is this right?

2007-08-10 Thread Anton Melser
Hi,
I have installed squid 2.6stable14 (the Windows binary linked from the
site), and am getting a fair proportion of what should be cached
cached, but not the most important things!
I have deactivated the default setting to ignore URLs with ? in them,
and am getting cache hits for all those pages/images except this page.
However, those have Expires headers and are Cache-Control: public...
Am I missing something in the headers below that means this page
won't be cached?
Thanks for your time!
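
For reference, the default lines being referred to look like this in a
stock 2.6 squid.conf; deactivating them, as described above, means
commenting them out:

# acl QUERY urlpath_regex cgi-bin \?
# cache deny QUERY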

http://www.mysite.test:3128/a/b/?nav_cat=128&lang=en_US

GET /a/b/?nav_cat=128&lang=en_US HTTP/1.1
Host: www.mysite.test:3128
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; fr; rv:1.8.1.6)
Gecko/20070725 Firefox/2.0.0.6
Accept: 
text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
Accept-Language: fr,en;q=0.8,fr-fr;q=0.5,en-us;q=0.3
Accept-Encoding: gzip,deflate
Accept-Charset: UTF-8,*
Keep-Alive: 300
Connection: keep-alive
Cookie: prtl_2048=2052; prtl_2048=2052;
__utmz=196985131.1183130363.1.1.utmccn=(direct)|utmcsr=(direct)|utmcmd=(none);
__utmb=196985131;
__utma=196985131.1319757679.1183130363.1186756561.1186757873.6;
JSESSIONID=82F28C32D70FD2B8E87CF5F93F3B392A; __utmc=196985131

HTTP/1.x 200 OK
Server: Apache-Coyote/1.1
Set-Cookie: prtl_2048=2052; Expires=Sat, 11-Aug-2007 00:58:16 GMT
Content-Type: text/html;charset=UTF-8
Date: Fri, 10 Aug 2007 14:58:20 GMT
X-Cache: MISS from pc-am.siege.ours.local
X-Cache-Lookup: MISS from pc-am.siege.ours.local:3128
Via: 1.0 pc-am.siege.ours.local:3128 (squid/2.6.STABLE14)
Connection: close
--

store.log
1186757048.978 RELEASE -1  ED746D4F06EEF75757282B4CA0B25510
200 1186757048-1-1 text/html -1/42930 GET
http://www.mysite.test/a/b/?

access.log
1186757998.370   5032 127.0.0.1 TCP_MISS/200 43227 GET
http://www.mysite.test/a/b/? - FIRST_UP_PARENT/server_1 text/html