Re: [squid-users] Squid-2, Squid-3, roadmap

2008-03-11 Thread Adrian Chadd
On Mon, Mar 10, 2008, Alex Rousskov wrote:

  WRT responsible sponsoring: I'm willing to pay a (reasonable) premium  
  to get the things that I pay to get into -2 into -3 as well,
 
 Thank you, and I am sure many sponsors would do the same if the
 trade-offs are explained to them correctly. Unfortunately, I have so far
 failed to convince the most prolific Squid2 developer to accept this as
 the default model and encourage its use.

Because I'm still not 100% convinced that the Squid-3 codebase is really
the way forward.

I shouldn't have been the one that tried to pull some sensible direction and
feedback into the development group - those working and pushing Squid-3
should've been doing that already. Unfortunately until very recently there
has been almost no public dialogue that I could see.

My concern is about project direction and sustainability. I chose to do
my work on Squid-2 in mid to late 2006 because:

(a) it was stable, so I didn't have to worry (as much) about whether bugs
were due to me or pre-existing code;
(b) it was in wide use by people, so incremental improvements could be
adopted by existing sites without as much fear as trying to push Squid-3
as a platform;
(c) I wasn't sure at the time whether there was enough momentum behind Squid-3
to justify investing time in something that may never be as prolific as
-2; and I wasn't willing to invest even more of my time trying to drag the
codebase forward.

I shouldn't have had to try and kick Squid-3 developers along to do simple
things like regression testing and local benchmarking; I shouldn't have to
try and explain that the model of "do what's interesting to you and what
you're being paid for" is such a great idea as a project direction; I
shouldn't have to try and explain why an architecture and a roadmap are a
great idea for a software project.

I doubly shouldn't have to try and convince the Squid-3 developers considering
the -past history of the whole effort-.

This is why I'm not all that interested right now in doing very much in relation
to Squid-3.

As I said on squid-core, my opinion may change if - and I stress _if_ - changes
to the project structure and direction occur which I see improving things.
I don't mean improving the paid project quota on Squid-3; I mean things like
improvements in direction, collaboration, documentation, testing and 
communication.

 Personally, I would love to see active sponsors together with active
 developers agreeing on a pragmatic migration plan towards a single Squid
 roadmap. I would be happy to facilitate such discussions. The active
 developers alone have so far failed to reach such an agreement, but I
 think direct Squid2 sponsor participation may help resolve the deadlock.

To be honest about it, the only dissenter now is me. I'm not sure whether my
continued dissent is a good idea for the project, but thus far the feedback
I've received has been 100% positive. I'd like to keep kicking along Squid-2
until the point where a future Squid code tree is attractive enough to replace
it. And I'm going to keep dissenting until I see the fruits of actual change,
not just the discussion of it.




Adrian

-- 
- Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
- $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -


Re: [squid-users] Re: Re: [squid-users] centralized storage for squid

2008-03-11 Thread Kinkie
2008/3/11 Neil Harkins [EMAIL PROTECTED]:
 F5 has some documents on how to implement consistent hashes in bigip
  irules (tcl), but I wound up writing a custom one for use in front of our
  squids that only does one checksum per request, as opposed to one per
  squid in the pool, to avoid wasting cpu cycles on the LB.

  it uses a precomputed table for the nodes, which doesn't need to be
  recomputed when you add/remove a few; they just fit in between the others.
  I'll try to finish the writeup and submit it to devcentral soon.

You're also welcome to write about this issue in the squid wiki (be it
a link to devcentral or an article in the wiki itself)
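
(For reference, Squid itself can do a similar URL-hash based distribution
across a pool of parent caches via CARP - a rough sketch only, with
placeholder host names, assuming Squid 2.6 or a 2.5 build with --enable-carp:)

# front-end squid.conf: hash requests across the cache pool
cache_peer cache1.example.com parent 3128 0 carp
cache_peer cache2.example.com parent 3128 0 carp
cache_peer cache3.example.com parent 3128 0 carp
never_direct allow all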

-- 
/kinkie


Re: [squid-users] Squid-2, Squid-3, roadmap

2008-03-11 Thread Michael Puckett
I'll come back to one of Mark's earlier points then, which seems to have 
been lost. What will decide adoption of -2 or -3 is the killer app, 
developers' roadmaps or sponsors notwithstanding. I think the point was 
raised that neither roadmap was especially compelling or seemed to yet 
contain that killer app, so momentum is on the side of -2. Users don't 
care if the project is recoded in C++ to make the developers' lives 
easier; the developers do, so that is not compelling in the slightest as 
a reason to migrate. Which I know you already realize.


IMHO I still think that Mark was right that (at least one) killer app is 
a true multi-threaded application that can take advantage of current HW 
today. Now. I think that the current squid is exposed and probably 
vulnerable if a competing project comes out that takes full advantage of 
current HW and significantly outperforms today's non-scalable squid. If 
that happens the entire -2, -3 argument is moot. Just my $0.02 though.


-mikep




Re: [squid-users] Squid-2, Squid-3, roadmap

2008-03-11 Thread Adrian Chadd
I believe that we have to look far past the current Squid-2 and Squid-3
versions and think about something much .. well, better.

My initial Squid-2 roadmap proposal (which is still in the Wiki) aimed
to take the codebase forward whilst beginning a modularisation project
(which is in there; just read between the lines and look at what
direction it's heading in) with an eye towards pulling out bits of the
codebase and reusing them in a next release.

Squid-3 has done a little more modularisation work than Squid-2, but
there's still a long way to go in either codebase to properly disentangle
the dependencies and begin reusing the interesting and useful parts
of the Squid codebase externally.

In parallel, I was planning on working on something which ties together
all the current best practices from the HTTP software that, quite honestly,
smokes Squid in the performance arena. Varnish, Lighttpd and Nginx are three
examples which smoke Squid in raw performance - but lack features
and stability under given workloads.

Now that I have some support customers and I'm slowly working away at their
requirements, I'll hopefully start to have time to dedicate to Squid
features (to keep them happy) and work on the above in my (paid) spare time.




Adrian



[squid-users] ACL dstdomain not working

2008-03-11 Thread Luca Gervasi
Hello,
I'm pretty new to Squid. I installed it for the first time in a lab
which needs to access some specific domains through a parent proxy,
using a direct connection for all other requests.

I set up Squid Cache: Version 2.6.STABLE16 on Fedora 8, adding these directives:

cache_peer MY_PARENT_PROXY parent 3128 0 no-query proxy-only default

acl ieee dstdomain .ieee.org
acl acmorg dstdomain .acm.org

cache_peer_access MY_PARENT_PROXY allow ieee acmorg
cache_peer_access MY_PARENT_PROXY deny all

But all the requests seem to go direct, avoiding MY_PARENT_PROXY.

Please note that I can set up squid to ALWAYS use MY_PARENT_PROXY, so
the error isn't in the connection with the parent... AFAICS.

Thanks a lot!

Luca Gervasi
-- 
GnuPG / PGP Key Available on http://pgp.mit.edu
KeyID: 0x17E179AA - Key Fingerprint:
6594 0AEB 13E9 7CA5 EBF7 FCF7 E201 1E6F 17E1 79AA
Linux Registered User: #192634
Web: http://www.ashetic.net/wordpress/


Re: [squid-users] ACL dstdomain not working

2008-03-11 Thread Luca Gervasi
Thanks for your kind answer. I tried to apply what you said in your
previous message as follows:

cache_peer MY_PARENT_PROXY parent 3128 0 no-query proxy-only default

acl to_parent dstdomain .ieee.org .acm.org

cache_peer_access MY_PARENT_PROXY allow to_parent.

What I got is:

MY_IP TCP_MISS/200 [...] DIRECT/140

...the parent proxy wasn't touched at all :(

Anyone?

Thanks in Advance.

Luca



On Tue, Mar 11, 2008 at 10:41 AM, Henrik K [EMAIL PROTECTED] wrote:
 On Tue, Mar 11, 2008 at 10:32:49AM +0100, Luca Gervasi wrote:
   Hello,
   i'm pretty new to squid. I installed it for the first time in a lab
   which needs to access to some specific domains through a parent proxy,
   using direct connection for all the requests.
  
   I setup Squid Cache: Version 2.6.STABLE16, on Fedora 8, adding those 
 commands:
  
   cache_peer MY_PARENT_PROXY parent 3128 0 no-query proxy-only default
  
   acl ieee dstdomain .ieee.org
   acl acmorg dstdomain .acm.org
  
   cache_peer_access MY_PARENT_PROXY allow ieee acmorg
   cache_peer_access MY_PARENT_PROXY deny all

  
 http://wiki.squid-cache.org/SquidFaq/SquidAcl#head-af2c190759b099a7986221cd12a4066eb146a1c4

  Thus:


  cache_peer_access MY_PARENT_PROXY allow ieee
  cache_peer_access MY_PARENT_PROXY allow acmorg

 cache_peer_access MY_PARENT_PROXY deny all

  Or more simply:

  acl to_parent dstdomain .ieee.org .acm.org
  cache_peer_access MY_PARENT_PROXY allow to_parent


 cache_peer_access MY_PARENT_PROXY deny all
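
(One thing worth checking when requests keep going DIRECT even though the
ACL matches, as in the log line above: nothing here forces Squid to use the
peer. A minimal sketch, reusing the hypothetical MY_PARENT_PROXY name, that
also forbids direct fetches for those domains:)

  acl to_parent dstdomain .ieee.org .acm.org
  cache_peer_access MY_PARENT_PROXY allow to_parent
  cache_peer_access MY_PARENT_PROXY deny all
  never_direct allow to_parent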





-- 
GnuPG / PGP Key Available on http://pgp.mit.edu
KeyID: 0x17E179AA - Key Fingerprint:
6594 0AEB 13E9 7CA5 EBF7 FCF7 E201 1E6F 17E1 79AA
Linux Registered User: #192634
Web: http://www.ashetic.net/wordpress/


Re: [squid-users] Squid-2, Squid-3, roadmap

2008-03-11 Thread Alex Rousskov
On Tue, 2008-03-11 at 15:19 +0900, Adrian Chadd wrote:

 I shouldn't have been the one that tried to pull some sensible
 direction and feedback into the development group ... I shouldn't have
 had to try and kick Squid-3 developers along to do simple things like
 regression testing and local benchmarking; I shouldn't have to try and
 explain that the model of "do what's interesting to you and what you're
 being paid for" is such a great idea as a project direction; I
 shouldn't have to try and explain why an architecture and a roadmap are
 a great idea for a software project.

Thank you, Adrian, for being the only beacon of reason and knowledge
among the developers. It horrifies me to even think about how many
obvious things we would have not known without your continuing efforts
to enlighten us. Anything good that comes out of the Squid project is
the direct result of your guidance and advice.

 I'm not sure whether my continued dissent is a good idea for the
 project, but thus far the feedback I've received has been 100%
 positive.

You are obviously doing the right thing. I am really glad you receive
100% positive feedback, it is such a rare thing for dissidents to
experience.

Thank you, 

Alex.




Re: [squid-users] what's near hits?

2008-03-11 Thread Chris Woodfield
A near hit is a validated cache miss - the object was stale, but Squid
did a GET with If-Modified-Since to the origin and received a 304 Not
Modified, which resets the refresh timer on the object.  You'll see  
these as TCP_REFRESH_HIT in the access log.
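
Schematically, the revalidation looks like this (an illustrative exchange
with made-up names and dates, not taken from a real trace):

  GET /logo.png HTTP/1.1
  Host: origin.example.com
  If-Modified-Since: Mon, 10 Mar 2008 08:00:00 GMT

  HTTP/1.1 304 Not Modified
  Date: Tue, 11 Mar 2008 09:00:00 GMT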


-C

On Mar 10, 2008, at 12:18 AM, J. Peng wrote:


this is the info from my squidclient's output.

Median Service Times (seconds)  5 min    60 min:
    HTTP Requests (All):   0.0      0.0
    Cache Misses:          0.02592  0.02592
    Cache Hits:            0.0      0.0
    Near Hits:             0.03622  0.05331
    Not-Modified Replies:  0.0      0.0
    DNS Lookups:           0.0      0.0
    ICP Queries:           0.0      0.0



what's Near Hits in the info? thanks.





Re: [squid-users] Squid-2, Squid-3, roadmap

2008-03-11 Thread Alex Rousskov

On Mon, 2008-03-10 at 22:38 -0800, Michael Puckett wrote:
 I'll come back to one of Mark's earlier points then which seems to have 
 been lost. What will decide on adoption of -2 or -3 is the killer app.

That point has not been lost. There are two distinct problems here:
First, "the killer app is Foo" predictions differ and (looking back) are
often wrong. Second, even if we agree what a killer app will be, we may
not have the resources to implement it without more sponsorship,
especially when the development is split between the two versions.

Let's take scalability with CPU cores as an example (without making a
statement that this is the [only] killer app for Squid). Any sane
developer wants scalability. It has been on the roadmaps for years[1].
To actually implement this feature you need an agreement among
developers on how to do it and the funding to do the coding. If we
combine existing Squid2 and Squid3 funding streams, there is probably
enough money to fund the development. I suspect developers can agree on
how to implement it as long as nobody insists on yet another rewrite
of Squid code.

As you can see, if there was an agreement on how to merge Squid2 and
Squid3 development, the funding streams would merge as well, and
improvements like SMP scalability would come out faster.

 Users don't care if the project is recoded in C++ to make the developers' lives 
 easier, the developers do, so that is not compelling in the slightest as 
 a reason to migrate. Which I know you already realize.

Yes, but what is missing in the above argument is that you get more
Squid3 developers and reduced danger of significant migration costs down
the road. Users do not care about programming languages, but many care
about support and sustainability of the features they want or depend on.

 IMHO I still think that Mark was right that (at least one) killer app is 
 a true multi-threaded application that can take advantage of current HW 
 today. Now. I think that the current squid is exposed and probably 
 vulnerable if a competing project comes out that takes full advantage of 
 current HW and significantly outperforms today's non-scalable squid. If 
 that happens the entire -2, -3 argument is moot. Just my $0.02 though.

Sure, we realize that there is (and always will be) competition. We need
to focus on specific/pragmatic steps to move forward though. Currently,
we are moving forward at 1/3 of the speed that would have been possible
with the same resources because we split the effort and then (some of
us) spend time trying to merge it back. This does not make the project
more competitive. 

If we want to move at full speed, we need to agree on a single roadmap.
Since we failed to reach that agreement among the active developers, I
suggest that sponsors that want a single roadmap join and help us to
resolve the deadlock.

Thank you,

Alex.
[1] To avoid any misunderstanding, I just made that wish more explicit:
SMP scalability is what we want, with some rough indication of a
timeline - see http://wiki.squid-cache.org/Features/SmpScale



[squid-users] ESI choose/when statement

2008-03-11 Thread Paras Fadte
Hi,

 I have an html page with the following ESI code:

 <esi:assign name="number" value="100"/>
 <esi:vars>
 This is  $(number) On $(HTTP_HOST)
 <esi:choose>
 <esi:when test="$(number)==100">
 And I am in when
 </esi:when>
 <esi:otherwise>
 I am in Otherwise
 </esi:otherwise>
 </esi:choose>
 </esi:vars>

 The problem that I encounter is that it doesn't seem to execute the
 <esi:when test="$(number)==100"> statement correctly, since it
evaluates $(number) as a variable whose value is unknown, as a result
of which the otherwise branch is executed. But strangely, it does print
the value of the number variable in the statement
 This is  $(number) On $(HTTP_HOST)  correctly. What could be wrong?


 Please help and thanks in advance.

 -plf


[squid-users] Possible Error

2008-03-11 Thread Dave Coventry
Hi,

I am still unable to get my external_acl_type script to run as expected
under Squid.

I'm getting an error in my cache.log 'ipcacheAddEntryFromHosts: Bad IP
address 'localhost.localdomain'' (see log listing below)

Is it possible that this is causing my script's anomalies?

Kind Regards,

Dave Coventry

2008/03/11 13:00:33| Starting Squid Cache version 3.0.STABLE2-20080307
for i686-pc-linux-gnu...
2008/03/11 13:00:33| Process ID 4635
2008/03/11 13:00:33| With 1024 file descriptors available
2008/03/11 13:00:33| ipcacheAddEntryFromHosts: Bad IP address
'localhost.localdomain'
2008/03/11 13:00:33| DNS Socket created at 0.0.0.0, port 32772, FD 7
2008/03/11 13:00:33| Adding nameserver 192.168.10.213 from /etc/resolv.conf
2008/03/11 13:00:33| helperOpenServers: Starting 5 'checkip' processes
2008/03/11 13:00:34| Unlinkd pipe opened on FD 17
2008/03/11 13:00:34| Swap maxSize 102400 KB, estimated 7876 objects
2008/03/11 13:00:34| Target number of buckets: 393
2008/03/11 13:00:34| Using 8192 Store buckets
2008/03/11 13:00:34| Max Mem  size: 8192 KB
2008/03/11 13:00:34| Max Swap size: 102400 KB
2008/03/11 13:00:34| Version 1 of swap file with LFS support detected...
2008/03/11 13:00:34| Rebuilding storage in /usr/local/squid/var/cache (DIRTY)
2008/03/11 13:00:34| Using Least Load store dir selection
2008/03/11 13:00:34| Set Current Directory to /usr/local/squid/var/cache
2008/03/11 13:00:34| Loaded Icons.
2008/03/11 13:00:34| Accepting transparently proxied HTTP connections
at 0.0.0.0, port 3128, FD 19.
2008/03/11 13:00:34| Accepting ICP messages at 0.0.0.0, port 3130, FD 20.
2008/03/11 13:00:34| HTCP Disabled.
2008/03/11 13:00:34| Ready to serve requests.
2008/03/11 13:00:34| Done reading /usr/local/squid/var/cache swaplog
(201 entries)
2008/03/11 13:00:34| Finished rebuilding storage from disk.
2008/03/11 13:00:34|   201 Entries scanned
2008/03/11 13:00:34| 0 Invalid entries.
2008/03/11 13:00:34| 0 With invalid flags.
2008/03/11 13:00:34|   201 Objects loaded.
2008/03/11 13:00:34| 0 Objects expired.
2008/03/11 13:00:34| 0 Objects cancelled.
2008/03/11 13:00:34| 0 Duplicate URLs purged.
2008/03/11 13:00:34| 0 Swapfile clashes avoided.
2008/03/11 13:00:34|   Took 0.14 seconds (1415.77 objects/sec).
2008/03/11 13:00:34| Beginning Validation Procedure
2008/03/11 13:00:34|   Completed Validation Procedure
2008/03/11 13:00:34|   Validated 427 Entries
2008/03/11 13:00:34|   store_swap_size = 2252
2008/03/11 13:00:35| storeLateRelease: released 0 objects
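
(For what it's worth, the ipcacheAddEntryFromHosts warning usually just means
Squid found a line in /etc/hosts whose first field is not a valid IP address;
it is unlikely to be related to the external ACL helper. A conventional entry
looks like the line below - the address and names are only an example:)

  127.0.0.1       localhost.localdomain   localhost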


[squid-users] Multi processors

2008-03-11 Thread Marcos Camões Bourgeaiseau

I have compiled squid with those options below:

squid -v
Squid Cache: Version 2.5.STABLE12
configure options:  --sysconfdir=/etc/squid 
--enable-storeio=aufs,coss,diskd,ufs --enable-poll --enable-delay-pools 
--enable-linux-netfilter --enable-htcp --enable-carp --with-pthreads 
--enable-underscores --enable-external --enable-arp-acl 
--with-maxfd=16384 --enable-async-io=50 --enable-snmp


It runs on a machine with 4 Intel Xeon processors, but no matter how many
squid instances I start, squid uses only one processor, and my other three
processors stay idle.


My squid.conf is this (I have cut out my acls and http_access lines):

http_port 8080
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin aspx \?
no_cache deny QUERY

# OPTIONS WHICH AFFECT THE CACHE SIZE
cache_mem 3072000 KB
maximum_object_size 2 KB
minimum_object_size 0 KB
maximum_object_size_in_memory 4 MB
cache_replacement_policy lru
memory_replacement_policy lru

# LOGFILE PATHNAMES AND CACHE DIRECTORIES
cache_dir ufs /var/spool/squid 5000 16 256
cache_access_log /var/log/squid/access.log
cache_log none
cache_store_log none
pid_filename /var/run/squid.pid

# OPTIONS FOR EXTERNAL SUPPORT PROGRAMS
ftp_list_width 32
ftp_passive on

auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off

# OPTIONS FOR TUNING THE CACHE
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320
quick_abort_pct 98

# MISCELLANEOUS
append_domain .rio.rj.gov.br
memory_pools_limit 50 MB
log_icp_queries off
snmp_port 3401


Does anyone have an idea?
I have looked through this list's old mails and have not found anything.

Thanks a lot,
--
Marcos Camões Bourgeaiseau - KIKO

e-mail pessoal: [EMAIL PROTECTED]
e-mail institucional: [EMAIL PROTECTED]



RE: [squid-users] Multi processors

2008-03-11 Thread saul waizer
Marcos,

What OS are you running squid on?

According to the docs, squid cannot take advantage of an SMP kernel, but
there is a reference about having multiple instances of squid running.
However, some OS's are very specific in how they handle processes; a little
more information about your setup would be helpful.

Saul 




RE: [squid-users] ACL lists

2008-03-11 Thread saul waizer
Garry,

Here are some examples I prepared for you:

acl badguys src 6.0.0.0/8
acl badguys2 src 2.0.0.0/8
acl intruder src 10.10.10.16
acl workstation src 10.10.10.19
acl our_networks src 192.168.1.0/24



http_access deny badguys
http_access deny badguys2
http_access deny intruder
http_access allow workstation
http_access allow our_networks

http_access deny all


Brief explanation of these ACLs:

I use a general acl called badguys to prevent access from an entire network
class, e.g. someone doing a DoS attack on your network from multiple IPs in
the same class.

intruder: a kid with a script trying to use your squid, coming from the same
ip (your question about denying a single host).

The rest is self-explanatory; you can call the ACLs whatever you want.

After an acl you must have a rule matching the ACL name, so this is where
you either allow or deny access based on your ACLs - see the http_access
allow or deny lines above.

Last, but also most important, at the end of all your ACLs put
http_access deny all so you can secure your installation based on your
newly created ACLs.
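
Applied to your exact case (allow the local network but deny one or two
hosts inside it), a minimal sketch - the two blocked addresses are only
placeholders - puts the deny rules before the allow:

acl our_networks src 192.168.1.0/24
acl blocked_hosts src 192.168.1.50 192.168.1.51

http_access deny blocked_hosts
http_access allow our_networks
http_access deny all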

Hope it helps
Saul Waizer




-Original Message-
From: Garry D. Chapple [mailto:[EMAIL PROTECTED] 
Sent: Monday, March 10, 2008 8:27 PM
To: squid-users@squid-cache.org
Subject: [squid-users] ACL lists

Hi,

I am a complete Squid newb with my first install done only yesterday,
2.6 stable(18). Can someone please help with basic ACL config for
network IP's, I would like to allow my local network and restrict just
one or two hosts by IP address. I have Googled a little but as there are
so many ACL configurations it's difficult to know which one works!

Squid is up and running well and I have an ACL to allow my local network
(acl our_networks src 192.168.1.0/24) but how do I then deny access to
just a single host IP? Any examples or good web sites with these kinds
of examples would be much appreciated.

Regards,

Garry C




Re: [squid-users] Multi processors

2008-03-11 Thread Marcos Camões Bourgeaiseau

Sorry about that.
It is an Ubuntu Feisty with a re-compiled kernel, version 2.6.15.7. We
just took out some hardware modules. We tried some newer kernels but we
couldn't make them work with the hardware that we have here.
And just for clarity: it was OK to put four or more instances running at
the same time, but all of those instances keep using the same processor,
and only that ONE processor. It is such a waste. And we have very
limited material to work with here.


Thanks again,



--
Marcos Camões Bourgeaiseau - KIKO

e-mail pessoal: [EMAIL PROTECTED]
e-mail institucional: [EMAIL PROTECTED]


RE: [squid-users] Multi processors

2008-03-11 Thread saul waizer
Marcos,

Ubuntu should work fine with an SMP kernel for squid.

Just to double check, with your setup have you followed these guidelines?

http://wiki.squid-cache.org/MultipleInstances 

One of the most important things to check is that you have different PIDs
for every instance of squid; see pid_filename.

Also, how many CPUs does that box have? Do you see squid always using the
same one (e.g. CPU2)?
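
For illustration, a minimal sketch of the per-instance settings that must
differ (file names and ports here are only placeholders):

# squid1.conf
http_port 8080
pid_filename /var/run/squid1.pid
cache_dir ufs /var/spool/squid1 5000 16 256
cache_access_log /var/log/squid/access1.log

# squid2.conf
http_port 8081
pid_filename /var/run/squid2.pid
cache_dir ufs /var/spool/squid2 5000 16 256
cache_access_log /var/log/squid/access2.log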

Saul W




Re: [squid-users] Multi processors

2008-03-11 Thread Marcos Camões Bourgeaiseau

In parts:

1 - "One of the most important things to check is that you have different
PIDs for every instance of squid, see pid_filename"

Sure. Otherwise you can't even start more than one process.

2 - "Also, how many CPUs does that box have? Do you see squid always
using the same one (e.g. CPU2)?"

Squid always uses the same CPU, but other services (Apache, for example)
on the same machine use all four CPUs, and Ubuntu itself uses all four
CPUs. As far as I know, this problem only occurs with squid.


More info: each squid instance uses its own cache, has its own squid.conf
file and listens on a different port.


Thanks one more time,



--
Marcos Camões Bourgeaiseau - KIKO

e-mail pessoal: [EMAIL PROTECTED]
e-mail institucional: [EMAIL PROTECTED]


[squid-users] Troubles with SquidNT in complex environment

2008-03-11 Thread Peter Weichenberger
Dear All,

I'm pretty new to Squid and have trouble running it in the following
environment:

* LAN with 250 users
* Windows Active Directory Service (ADS)

Web Security Solution consisting of
* IBM Proventia Web Filter performing URL filtering
* Trend Micro InterScan Web Security Suite (IWSS) performing Antivirus scanning

Both products (Webfilter and AV scanner) are installed on virtual machines 
running under VMware ESX 3.02.
Both of them have an integrated, non-caching proxy server.

Starting from the user PC, we have the following proxy chain:

User PC => Web Filter proxy => IWSS proxy => Internet

I want to use ADS objects like usernames in the Web Filter configuration - e.g. 
to create rules based on usernames instead of IP addresses.
Problem: The proxy server included in Proventia Web Filter has no ADS/NTLM auth 
support, but can act as an ICAP server.
In order to use ADS objects in the Web Filter config you need an additional, 
NTLM auth-capable proxy server.
Since there is no such proxy server in our LAN yet, we obtained a preconfigured 
Squid for Windows package containing

* SquidNT 2.5 Stable12 binaries
* NTLM auth support

I installed the Squid package on the same virtual machine where the Web Filter 
is installed.
SquidNT acts as an ICAP client, authenticating proxy users against our AD.
The Proventia Web Filter acts as an ICAP server, telling SquidNT if the 
authenticated user is allowed to access the requested site.

So the proxy chain now looks like this:

User PC => Squid proxy (ICAP client) => Web Filter (ICAP server) => IWSS proxy 
=> Internet

Unfortunately we have the following problems with SquidNT:

1. Excessive RAM consumption
After starting the SquidNT service, Windows Task manager shows that squid.exe 
uses about 9,000 KB of RAM.
A working day and many user requests later, squid.exe uses about 700,000 KB 
(!!) of RAM!
Although the virtual machine has 1 GB of RAM assigned, Windows XP SP2 started 
to expand its paging file in order to satisfy the ever-increasing RAM demand of 
squid.exe.

Monitoring Windows Task Manager, you can watch squid.exe's memory consumption 
counting up every 5 seconds.
This means I have to restart the SquidNT service at least once a day - 
otherwise the paging file would fill up the harddisk completely.
After restarting SquidNT, it returns back to its initial RAM footprint of about 
9,000 KB, but starts to count up its memory consumption immediately.

I already set memory_pools to off in squid.conf, but this freed up 1,600 KB, 
which is nothing compared to 700,000 KB.

Since we had repeated Squid fatal errors due to insufficient ntlm_auth 
processes in the beginning, I have set the number of these processes to 35
(auth_param ntlm children 35).
Q: Although these are separate processes, can they be the cause for Squid 
sucking RAM like a black hole?
Is there anything else I can do against it - besides restarting the Squid 
service?


2. Service instabilities
Occasionally, users get a message in their browser telling them that the proxy 
has rejected the connection.
I checked the Squid server immediately after having received this message 
myself, but squid.exe was running as always.
Obviously there are situations where Squid ceases its service for a short time, 
being unable to service user requests during this period.

Q: What can be done to enhance reliability/stability of SquidNT?


3. Problems accessing certain websites with Internet Explorer (IE) through Squid
Our users have problems accessing the following sites:
a) Bank website hosting a Java-based Internet banking application (website 
complains about missing Java support/invalid browser configuration)
b) Website running a Citrix portal delivering applications over the Web

Both applications use HTTPS and work when
* using the IWSS proxy, bypassing Squid; independent of browser
* using the Squid proxy, but Firefox instead of IE

Problem: IE is our standard browser and is installed everywhere.

Q: Is there any IE setting, which has to be changed in order to make special 
Web applications work over Squid?


Ideas and hints regarding any of these issues are appreciated.

Many thanks in advance,

Peter

_
Der WEB.DE SmartSurfer hilft bis zu 70% Ihrer Onlinekosten zu sparen!
http://smartsurfer.web.de/?mc=100071distributionid=0066



RE: [squid-users] ACL lists

2008-03-11 Thread Garry D. Chapple
Thanks Saul,

It works a treat, mate, and thanks again for the quick response.

Regards,

Garry Chapple




Re: [squid-users] Troubles with SquidNT in complex environment

2008-03-11 Thread Guido Serassio

Hi,

At 22:52 11/03/2008, Peter Weichenberger wrote:

> Dear All,
>
> I'm pretty new to Squid and have trouble running it in the following
> environment:
>
> * LAN with 250 users
> * Windows Active Directory Service (ADS)
>
> Web Security Solution consisting of
> * IBM Proventia Web Filter performing URL filtering
> * Trend Micro InterScan Web Security Suite (IWSS) performing
> Antivirus scanning
>
> Both products (Webfilter and AV scanner) are installed on virtual
> machines running under VMware ESX 3.02.
> Both of them have an integrated, non-caching proxy server.
>
> Starting from the user PC, we have the following proxy chain:
>
> User PC => Web Filter proxy => IWSS proxy => Internet
>
> I want to use ADS objects like usernames in the Web Filter
> configuration - e.g. to create rules based on usernames instead of
> IP addresses.
> Problem: The proxy server included in Proventia Web Filter has no
> ADS/NTLM auth support, but can act as an ICAP server.
> In order to use ADS objects in the Web Filter config you need an
> additional, NTLM auth-capable proxy server.
> Since there is no such proxy server in our LAN yet, we obtained a
> preconfigured Squid for Windows package containing
>
> * SquidNT 2.5 Stable12 binaries
> * NTLM auth support

First, you should upgrade to Squid 2.6 and also add Negotiate authentication.

> I installed the Squid package on the same virtual machine where the
> Web Filter is installed.
> SquidNT acts as an ICAP client, authenticating proxy users against our AD.
> The Proventia Web Filter acts as an ICAP server, telling SquidNT if
> the authenticated user is allowed to access the requested site.
>
> So the proxy chain now looks like this:
>
> User PC => Squid proxy (ICAP client) => Web Filter (ICAP server) =>
> IWSS proxy => Internet
>
> Unfortunately we have the following problems with SquidNT:
>
> 1. Excessive RAM consumption
> After starting the SquidNT service, Windows Task manager shows that
> squid.exe uses about 9,000 KB of RAM.

This is a known and fixed old bug for Squid STABLE 12:
http://www.squid-cache.org/bugs/show_bug.cgi?id=1522

> A working day and many user requests later, squid.exe uses about
> 700,000 KB (!!) of RAM!
> Although the virtual machine has 1 GB of RAM assigned, Windows XP
> SP2 started to expand its paging file in order to satisfy the
> ever-increasing RAM demand of squid.exe.

Please: use a Server OS ..

> Monitoring Windows Task Manager, you can watch squid.exe's memory
> consumption counting up every 5 seconds.
> This means I have to restart the SquidNT service at least once a day
> - otherwise the paging file would fill up the harddisk completely.
> After restarting SquidNT, it returns back to its initial RAM
> footprint of about 9,000 KB, but starts to count up its memory
> consumption immediately.
>
> I already set memory_pools to off in squid.conf, but this freed up
> 1,600 KB, which is nothing compared to 700,000 KB.
>
> Since we had repeated Squid fatal errors due to insufficient
> ntlm_auth processes in the beginning, I have set the number of these
> processes to 35
> (auth_param ntlm children 35).

If you are using IE7, Negotiate could help you here.

> Q: Although these are separate processes, can they be the cause for
> Squid sucking RAM like a black hole?
> Is there anything else I can do against it - besides restarting the
> Squid service?

Upgrade Squid to the latest 2.6.

> 2. Service instabilities
> Occasionally, users get a message in their browser telling them that
> the proxy has rejected the connection.
> I checked the Squid server immediately after having received this
> message myself, but squid.exe was running as always.
> Obviously there are situations where Squid ceases its service for a
> short time, being unable to service user requests during this period.

Expected, because you are running on a Workstation OS:
http://smallvoid.com/article/winnt-tcpip-max-limit.html

> Q: What can be done to enhance reliability/stability of SquidNT?

Run Squid on Windows 2003 Server.

> 3. Problems accessing certain websites with Internet Explorer (IE)
> through Squid
> Our users have problems accessing the following sites:
> a) Bank website hosting a Java-based Internet banking application
> (website complains about missing Java support/invalid browser configuration)

The latest Java VM is NTLM aware.

> b) Website running a Citrix portal delivering applications over the Web

Not sure if there is something to do here, but there are many
changes/improvements in 2.6.

> Both applications use HTTPS and work when
> * using the IWSS proxy, bypassing Squid; independent of browser
> * using the Squid proxy, but Firefox instead of IE
>
> Problem: IE is our standard browser and is installed everywhere.
>
> Q: Is there any IE setting, which has to be changed in order to make
> special Web applications work over Squid?
>
> Ideas and hints regarding any of these issues are appreciated.

Again, first upgrade to the latest 2.6 STABLE 18.
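
As a very rough illustration only - helper program names and paths differ
between Windows builds, so treat them as placeholders - a 2.6 auth section
could look like:

auth_param negotiate program c:/squid/libexec/mswin_negotiate_auth.exe
auth_param negotiate children 35
auth_param ntlm program c:/squid/libexec/mswin_ntlm_auth.exe
auth_param ntlm children 35
acl authenticated proxy_auth REQUIRED
http_access allow authenticated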

Regards

Guido



-

Guido Serassio
Acme Consulting S.r.l. - Microsoft 

Re: [squid-users] Multi processors

2008-03-11 Thread Mark Nottingham

Sounds like you want processor affinity;
  http://www.linuxcommand.org/man_pages/taskset1.html
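
For example (a sketch only - the config file paths are placeholders, and
taskset is part of util-linux), each instance can be pinned to its own core:

  taskset -c 0 squid -f /etc/squid/squid1.conf
  taskset -c 1 squid -f /etc/squid/squid2.conf
  taskset -c 2 squid -f /etc/squid/squid3.conf
  taskset -c 3 squid -f /etc/squid/squid4.conf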


Cheers,


On 12/03/2008, at 8:22 AM, Marcos Camões Bourgeaiseau wrote:


In parts:

1-One of the most important things to check is that you have  
different

PID's for every instance of squid, see pid_filename
Sure. Otherwise you can't even start more than one process.

2-Also, how many cpu's does that box have? Do you see squid always
using the same one (I.E. CPU2)
Squid always uses the same CPU, but other services (Apache for example) 
on the same machine use all four CPUs, and Ubuntu itself uses all four 
CPUs. As far as I know, this problem only occurs with squid.

More info: each squid instance uses its own cache, has its own squid.conf 
file and listens on a different port.
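
For reference, a rough sketch of the per-instance differences that implies 
(file names and port numbers here are illustrative only):

  # /etc/squid/squid-1.conf
  http_port 8081
  pid_filename /var/run/squid-1.pid
  cache_dir ufs /var/spool/squid-1 5000 16 256
  cache_access_log /var/log/squid/access-1.log

  # /etc/squid/squid-2.conf
  http_port 8082
  pid_filename /var/run/squid-2.pid
  cache_dir ufs /var/spool/squid-2 5000 16 256
  cache_access_log /var/log/squid/access-2.log

Each instance is then started with its own file, e.g. 
squid -f /etc/squid/squid-1.conf, and can be pinned to a specific CPU 
with taskset.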

Thanks one more time,

wrote:

Marcos,

Ubuntu should work fine with an SMP kernel for squid.

Just to double check, with your setup have you followed these  
guidelines?


http://wiki.squid-cache.org/MultipleInstances

one of the most important things to check is that you have  
different PID's

for every instance of squid, see pid_filename

Also, how many cpu's does that box have? Do you see squid always  
using the

same one (I.E. CPU2)

Saul W

-Original Message-
From: Marcos Camões Bourgeaiseau [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 11, 2008 4:34 PM
To: saul waizer; squid-users@squid-cache.org
Subject: Re: [squid-users] Multi processors

Sorry about that.
It is Ubuntu Feisty with a re-compiled kernel, version 2.6.15.7. We 
just took out some hardware modules. We tried some newer kernels but we 
couldn't make them work with the hardware that we have here.
And just for clarity: it was fine to run four or more instances at 
the same time, but all of those instances kept using the same processor 
and only that ONE processor. It is such a waste, and we have very 
limited hardware to work with here.

Thanks again,

saul waizer wrote:


Marcos,

What OS are you running squid on?

According to the docs, squid cannot take advantage of an SMP kernel, but 
there is a reference to running multiple instances of squid. However, 
some OSes are very particular about how they handle processes, so a 
little more information about your setup would be helpful.

Saul
-Original Message-
From: Marcos Camões Bourgeaiseau [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 11, 2008 3:21 PM
To: squid-users@squid-cache.org
Subject: [squid-users] Multi processors

I have compiled squid with those options below:

squid -v
Squid Cache: Version 2.5.STABLE12
configure options:  --sysconfdir=/etc/squid
--enable-storeio=aufs,coss,diskd,ufs --enable-poll --enable-delay-pools
--enable-linux-netfilter --enable-htcp --enable-carp --with-pthreads
--enable-underscores --enable-external --enable-arp-acl
--with-maxfd=16384 --enable-async-io=50 --enable-snmp

It runs on a machine with 4 Intel Xeon processors, but no matter how many 
instances I start, squid uses only one processor and my other three 
processors stay idle.

My squid.conf is this (I have cut out my acls and http_access rules):

http_port 8080
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin aspx \?
no_cache deny QUERY

# OPTIONS WHICH AFFECT THE CACHE SIZE
cache_mem 3072000 KB
maximum_object_size 2 KB
minimum_object_size 0 KB
maximum_object_size_in_memory 4 MB
cache_replacement_policy lru
memory_replacement_policy lru

# LOGFILE PATHNAMES AND CACHE DIRECTORIES
cache_dir ufs /var/spool/squid 5000 16 256
cache_access_log /var/log/squid/access.log
cache_log none
cache_store_log none
pid_filename /var/run/squid.pid

# OPTIONS FOR EXTERNAL SUPPORT PROGRAMS
ftp_list_width 32
ftp_passive on

auth_param basic children 5
auth_param basic realm Squid proxy-caching web server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off

# OPTIONS FOR TUNING THE CACHE
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .   0   20% 4320
quick_abort_pct 98

# MISCELLANEOUS
append_domain .rio.rj.gov.br
memory_pools_limit 50 MB
log_icp_queries off
snmp_port 3401


Does anyone have an idea?
I have looked through this list's old mails and have not found 
anything.


Thanks a lot,







--
Marcos Camões Bourgeaiseau - KIKO

e-mail pessoal: [EMAIL PROTECTED]
e-mail institucional: [EMAIL PROTECTED]


--
Mark Nottingham   [EMAIL PROTECTED]




Re: [squid-users] Possible Error

2008-03-11 Thread Amos Jeffries

Dave Coventry wrote:

Hi,

I am still unable to get Squid to run my external_acl_type script
as expected.


Not unusual for new scripts. Which script was this and what is it doing?



I'm getting an error in my cache.log 'ipcacheAddEntryFromHosts: Bad IP
address 'localhost.localdomain'' (see log listing below)

Is it possible that this is causing my script's anomalies?


Depends. What does your hosts file contain? Specifically, the lines 
containing 'localhost' or 'localdomain'.
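
For comparison, the loopback entry in /etc/hosts normally looks something 
like this (the exact layout varies by distribution):

  127.0.0.1   localhost.localdomain   localhost

Squid's ipcacheAddEntryFromHosts warning usually means a line's first field 
is not a valid IP address - for example a line that starts with 
'localhost.localdomain' instead of an address.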





Kind Regards,

Dave Coventry

2008/03/11 13:00:33| Starting Squid Cache version 3.0.STABLE2-20080307
for i686-pc-linux-gnu...
2008/03/11 13:00:33| Process ID 4635
2008/03/11 13:00:33| With 1024 file descriptors available
2008/03/11 13:00:33| ipcacheAddEntryFromHosts: Bad IP address
'localhost.localdomain'
2008/03/11 13:00:33| DNS Socket created at 0.0.0.0, port 32772, FD 7
2008/03/11 13:00:33| Adding nameserver 192.168.10.213 from /etc/resolv.conf
2008/03/11 13:00:33| helperOpenServers: Starting 5 'checkip' processes
2008/03/11 13:00:34| Unlinkd pipe opened on FD 17
2008/03/11 13:00:34| Swap maxSize 102400 KB, estimated 7876 objects
2008/03/11 13:00:34| Target number of buckets: 393
2008/03/11 13:00:34| Using 8192 Store buckets
2008/03/11 13:00:34| Max Mem  size: 8192 KB
2008/03/11 13:00:34| Max Swap size: 102400 KB
2008/03/11 13:00:34| Version 1 of swap file with LFS support detected...
2008/03/11 13:00:34| Rebuilding storage in /usr/local/squid/var/cache (DIRTY)
2008/03/11 13:00:34| Using Least Load store dir selection
2008/03/11 13:00:34| Set Current Directory to /usr/local/squid/var/cache
2008/03/11 13:00:34| Loaded Icons.
2008/03/11 13:00:34| Accepting transparently proxied HTTP connections
at 0.0.0.0, port 3128, FD 19.
2008/03/11 13:00:34| Accepting ICP messages at 0.0.0.0, port 3130, FD 20.
2008/03/11 13:00:34| HTCP Disabled.
2008/03/11 13:00:34| Ready to serve requests.
2008/03/11 13:00:34| Done reading /usr/local/squid/var/cache swaplog
(201 entries)
2008/03/11 13:00:34| Finished rebuilding storage from disk.
2008/03/11 13:00:34|   201 Entries scanned
2008/03/11 13:00:34| 0 Invalid entries.
2008/03/11 13:00:34| 0 With invalid flags.
2008/03/11 13:00:34|   201 Objects loaded.
2008/03/11 13:00:34| 0 Objects expired.
2008/03/11 13:00:34| 0 Objects cancelled.
2008/03/11 13:00:34| 0 Duplicate URLs purged.
2008/03/11 13:00:34| 0 Swapfile clashes avoided.
2008/03/11 13:00:34|   Took 0.14 seconds (1415.77 objects/sec).
2008/03/11 13:00:34| Beginning Validation Procedure
2008/03/11 13:00:34|   Completed Validation Procedure
2008/03/11 13:00:34|   Validated 427 Entries
2008/03/11 13:00:34|   store_swap_size = 2252
2008/03/11 13:00:35| storeLateRelease: released 0 objects


Amos
--
Please use Squid 2.6STABLE17+ or 3.0STABLE1+
There are serious security advisories out on all earlier releases.


Re: [squid-users] Transparent proxy. router + dedicated server

2008-03-11 Thread Amos Jeffries

Rafal Ramocki wrote:

Amos Jeffries pisze:

Rafal Ramocki wrote:

Amos Jeffries wrote:

Hello,

I have a problem with my squid setup. For quite a long time I've been 
using Squid 2.6 STABLE-17. I decided to switch to squid 3.0, but there 
is a problem.

My configuration is:

large network - nat router (linux) - router (hardware ATM) - internet
                         \              /
                              squid

Most of the traffic is NAT'ed on the nat router and forwarded to the border 
hardware ATM router. HTTP traffic (port 80) is DNAT'ed to the machine with 
squid. That setup has worked fine until now, but after switching to 3.0 I 
get the following error message:

ERROR
The requested URL could not be retrieved

While trying to retrieve the URL: /

The following error was encountered:

 * Invalid URL

Here are few directives from my configuration file.

http_port 80 transparent
icp_port 0
htcp_port 0
tcp_outgoing_address X.X.X.X
dns_nameservers X.X.X.X


I have been working on this for quite a long time. I have been googling, 
but I have only found information about single-server setups. Even in the 
squid FAQ there is only that configuration.

Please help ;)


What you have is in no way transparent, and as long as DNAT to another box 
was used it never has been.


That setup has worked for me for about 4 years. Transparent, for me, means 
no configuration in browsers.



Transparent interception is done with REDIRECT (netfilter) or TPROXY-2
when squid sits on the NAT box with the full NAT tables available to 
it.
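
For reference, the usual on-box interception rule looks something like this 
(interface name and ports are illustrative):

  iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128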


It is not possible in my case. My network is 3000+ nodes, both 
machines are under heavy load, and I just can't place squid, filtering 
and traffic control on one single machine. I also don't want to 
place squid behind the router, as that setup is less fault-tolerant.


Using DNAT to another box isolates squid from the information it 
needs to
work transparently, 


The funny thing is that squid never needed that information before ;)


but it can still be faked with a semi-open proxy
config.

 

You need the following for starters:

  # cope with standard web requests ...
  http_port 80 vhost
  # SECURE the access
  acl localnet src 10.0.0.0/8
  http_access deny !localnet

** alter the ACL to contain the IP ranges you are intercepting.


I had already tried a similar configuration, and I have now tried it 
once more. The result is:


ERROR
The requested URL could not be retrieved

While trying to retrieve the URL: http://www.debian.org/

The following error was encountered:

* Unable to forward this request at this time.

This request could not be forwarded to the origin server or to any 
parent caches. The most likely cause for this error is that:


* The cache administrator does not allow this cache to make 
direct connections to origin servers, and

* All configured parent caches are currently unreachable.

In cache.log I have:

2008/03/10 09:16:53| Failed to select source for 'http://www.debian.org/'
2008/03/10 09:16:53|   always_direct = 0
2008/03/10 09:16:53|    never_direct = 0
2008/03/10 09:16:53|        timedout = 0


I think in that setup the cache_peer* directives are mandatory, but I 
can't define the whole internet that way ;)


No, not mandatory. The semi-open-proxy config should be letting 
internal requests out in your setup.





NP: broken non-HTTP-compliant client software will still get that error
page, but compliant software like most browsers will get through okay.


That is OK for me. I also want to ensure that only HTTP traffic is 
transmitted over port 80, and not, for example, p2p.


Any ideas? Because I'm running out. :)


This second problem you are now hitting (Unable to forward) shows a 
problem making general web requests.


 - check that the squid box is able to make outbound port-80 requests.
   (particularly without looping back through itself!)


Yes it can. When I configure the proxy in the browser it works fine. I have 
a similar test environment: in the same configuration, when I'm redirecting 
it works fine, and when I configure the browser to use squid it works fine. 
It only doesn't work when I'm DNAT'ing from the other machine.



 - check that the squid box can resolve the domain its fetching


The squid box can resolve DNS, and squid itself should resolve too. The 
configuration is the same as in the squid I'm currently using (2.6 STABLE 
17).




In which case it looks like either something else in the config is blocking 
that request, or a transport-layer problem.
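
One transport-layer possibility worth ruling out in a DNAT setup is the 
squid box's own outbound port-80 traffic being caught by the same DNAT rule 
and looped back to squid. If that can happen here, a rough sketch of an 
exemption rule on the NAT router would be (the source address is a 
placeholder):

  # skip the port-80 DNAT for traffic originating from the squid box itself
  iptables -t nat -I PREROUTING -s <squid-box-ip> -p tcp --dport 80 -j ACCEPT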


If you set:   debug_options ALL,5

What shows up in the preceding 150 or so cache.log lines before the 'Failed 
to select source for' entry?



Amos
--
Please use Squid 2.6STABLE17+ or 3.0STABLE1+
There are serious security advisories out on all earlier releases.