Re: gzip+squid3 code

2006-09-01 Thread Joe Cooper

Henrik Nordstrom wrote:

Sun 2006-08-27 at 17:33 -0500, Joe Cooper wrote:


Hey guys,

Jon Kay here in Austin wrote it under contract to Swell (US 
work-for-hire laws apply, and the contract stipulates it explicitly), so 
I own it, and Gonzalo Arana made a few bugfixes.  I'm happy to have it 
merged into mainline Squid, and I had given Gonzalo permission to do so 
some time ago, but I guess it never happened and I've been too busy to 
follow up.



The first steps have now been taken and the code is up on
devel.squid-cache.org for maintenance and review.


Excellent.

I'll just add that the code did work exactly as it was supposed to, in 
my testing at the time, but there remained at least one serious memory 
leak (possibly others) that led to it being unusable.  I believe all 
crashes that I experienced running the code were attributable to this 
leak.  Whether there are other issues that never had a chance to exhibit 
themselves is unknown to me.


Gonzalo was running it, I believe, in production, so he may actually 
have fixes for the issues that I saw.  I haven't been in contact with 
him for some time, however, so I don't know the state of things in his 
neck of the woods.


As you may know, I'm out of the Squid business for the foreseeable 
future, so I won't be doing anything else with the code.  But I do hope 
it sees some real-world use.


Re: net ads user info group authenticator

2005-07-04 Thread Joe Cooper

Henrik Nordstrom wrote:

On Mon, 4 Jul 2005, Joe Cooper wrote:

For whatever reason (still would like to know why) one of my client 
systems using NTLM auth to an Active Directory server suddenly could 
no longer get group and user information via wbinfo -g and wbinfo -u 
after an AD server update.



This is a question for the Samba people.. but I would guess there is some 
problem with the Kerberos computer account. The NTLM authentication uses 
NT Domain RPC login, while ADS lookups (groups etc) use LDAP with 
Kerberos authentication, I think.


Interestingly, wbinfo -t still works, and the rest of the NTLM 
authentication stuff works.  Even a wbinfo -a user%pass works.  So, I 
don't think Kerberos is the issue (though I've had enough troubles from 
that aspect of the system to know not to argue too strongly about it).


However, the net ads user info command still worked fine, so as a 
workaround I rewrote the wbinfo_group.pl to use net ads commands.  
I've attached the modified version.



Not sure if this interface is considered stable, or if it will change 
wildly between Samba versions.. but if the Samba people say it is a 
stable interface then I have no problem with it as an alternative.


My understanding is that the net commands are The Way of the Future, and 
are designed to mirror the Windows net commands.  That's not to say they 
are stable--but they are being recommended as the right way to interact 
with the AD.


It's probably wrong in some or many ways, and it has the negative of 
needing a username/password (but it seems a not-very-privileged user 
will work).



Probably the same requirements as for the LDAP helpers.. you need some 
account that is allowed to see which groups you have. In most 
installations this is any account.


That seems to be the case.  I used a plain old user account, which had 
the lowest level of privileges available to users in the AD in question. 
 I guess it is mostly harmless to have the password in there.



Anyway, it solved my immediate problem and got groups working again.



If my suspicion above is correct it should help to rejoin the ADS tree, 
followed by a restart of winbind to flush the local cache..


Did that without success a few times--the rejoin was fine, and winbind 
came back up without trouble.  No errors, wbinfo -t succeeds, and even 
winbindd with debugging cranked up to 9 revealed nothing (or at least 
nothing I could find...it might have ended up in a log I didn't know to 
look at, though I redirected output to STDIO).  After an hour or so of 
poking at it, I rewrote the group authenticator in about five minutes. 
It worked, so I called it done.


I'm guessing something changed in the AD during the upgrade, though I 
haven't had time to dig around in the MS knowledge base and Google to 
see what.  This box has been working fine for over a year with the same 
configuration...the last Samba upgrade was a few months ago, and went 
without a hitch.  So I blame Windows.


Thanks for chiming in.


net ads user info group authenticator

2005-07-04 Thread Joe Cooper

Hi all,

For whatever reason (still would like to know why) one of my client 
systems using NTLM auth to an Active Directory server suddenly could no 
longer get group and user information via wbinfo -g and wbinfo -u after 
an AD server update.  However, the net ads user info command still 
worked fine, so as a workaround I rewrote the wbinfo_group.pl to use net 
ads commands.  I've attached the modified version.


It's probably wrong in some or many ways, and it has the negative of 
needing a username/password (but it seems a not-very-privileged user 
will work).  Anyway, it solved my immediate problem and got groups 
working again.


Something along these lines might be useful in subsequent Squid 
versions.  I don't know if wbinfo is one of the Samba pieces being 
replaced by net commands, but if it is, obviously a net replacement 
would be good to have around.


Comments welcome.  I'm always eager to learn more about winbind.  It's 
still a mystery to me, most days.  ;-)


netads_group.pl
Description: Perl program
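
For readers of the archive without the attachment, a minimal sketch of the 
approach (the helper protocol is the same one wbinfo_group.pl speaks: 
"user group..." on stdin, OK/ERR on stdout; the lookup account, the net 
binary path, and the assumption that "net ads user info USER" prints one 
group per line are all illustrative, not taken from the real netads_group.pl):

#!/usr/bin/perl -w
# Sketch of a wbinfo_group.pl-style Squid group helper that asks
# "net ads user info" instead of wbinfo. Paths and the lookup
# account are assumptions; adjust for your site.
use strict;

my $net  = '/usr/bin/net';
my $auth = 'lookupuser%secret';   # low-privilege AD account (hypothetical)

$| = 1;                           # Squid expects unbuffered replies
while (<STDIN>) {
    chomp;
    my ($user, @wanted) = split;
    # assumed: "net ads user info" prints one group name per line
    chomp(my @groups = `$net ads user info '$user' -U '$auth' 2>/dev/null`);
    my %in = map { lc($_) => 1 } @groups;
    print((grep { $in{lc $_} } @wanted) ? "OK\n" : "ERR\n");
}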


Re: future of icap-patch

2005-05-02 Thread Joe Cooper
Henrik Nordstrom wrote:
On Mon, 2 May 2005, Baumgaertel Oliver wrote:
I do understand that there were a couple of people in the past asking for 
exactly that and it was denied. I am currently in the position to have the 
freedom to start a new icap code project. But before I dive headfirst into 
it, I'd ask if it is at all possible to make it a part of the mainline once 
it reaches a certain maturity, and if so, what do I have to do for it.

For it to get included it must be done against the development version of 
Squid, i.e. Squid-3.

Squid-2.5 is in its STABLE cycle where only bugfixes are accepted. No 
new features are accepted except as required to address security issues.
Though there have been occasions where a new branch was started just to 
"keep up" with what a side project was doing.  Sometimes those branches 
get merged into the next development release, either by someone else, or 
the same person that started the branch.

But, Henrik is the boss of devel.squid-cache.org, so my opinion is moot. 
 Just mentioning what has been known to happen in the past...


Re: LFS Browser support

2005-04-18 Thread Joe Cooper
Serassio Guido wrote:
Hi Henrik,
At 03.01 18/04/2005, Henrik Nordstrom wrote:
On Sun, 17 Apr 2005, Serassio Guido wrote:
It seems that Mozilla and Internet Explorer don't support files 
bigger than 2 GB at all.

Do they work when not configured to use the proxy?

No, identical behaviour.
Also worth noting is that most webservers (including Apache up to 2.0.x, 
but not 2.1.x) on 32-bit platforms will not serve >2GB files either. 
So, it's hard to know which component is to blame when a large file 
download doesn't work, as most clients and servers lack support.

Thanks, Henrik, for making sure it isn't Squid that's to blame!  ;-)


Re: Linux filesystem speed comparison

2005-04-11 Thread Joe Cooper
Steven Wilton wrote:
Heheh..."only" is a relative term.  Our old 500 Mhz boxes 
couldn't even 
work two 7200 RPM IDE disks (on different buses, of course) 
effectively, 
and three provided a barely measurable boost.  I'd be 
surprised if you 
aren't able to easily max your CPU with ReiserFS on a 
polygraph run on 
these boxes...and it will probably happen at around 110 reqs/sec., 
assuming reasonable configuration of kernel and Squid.


We currently get around 70 reqs/sec using 25% CPU (5 minute average for both 
values) on this hardware.  I'm confident that I'll get a pretty high number 
of requests/second through these proxies because of the epoll patch.
Ah, yes, epoll could very well be an interesting twist...and it might 
make the relative filesystem results come out very differently.




Re: Linux filesystem speed comparison

2005-04-11 Thread Joe Cooper
Steven Wilton wrote:
It depends on the balance of hardware, but I'd be extremely 
surprised if XFS performs better than either reiser or ext2/3 for
Squid workloads on /any/ system.  So I have to assume your
methodology is slightly flawed. ;-)

That's what I thought, but there has been a bit of XFS work in recent
 kernels, and after my initial observations I was wondering if this
has improved the performance with squid's filesystem load.
It's been at least a year since I tried XFS, so I reserve the right to
be horribly wrong.  But it was so far behind the other options when I
last toyed with it that I completely wrote it off as wholly
uninteresting for Squid workloads.  ;-)
While I have found that ext3 (when configured correctly) has
improved performance for Squid quite a bit over ext2, it is still
no match for ReiserFS on our hardware, which always has more than
enough CPU for the disk bandwidth available.  But, I can certainly
imagine a hardware configuration that would lead to ext3 performing
better than ReiserFS (especially since Duane has proven that it is
possible by putting 6 10,000 RPM disks on a relatively wimpy CPU
and testing the configuration extensively with polygraph).

The machines are a bit old (P3-500), but they've only got 3x 9Gb SCSI
cache disks, and they're not running anywhere near 100% load.
Heheh..."only" is a relative term.  Our old 500 Mhz boxes couldn't even 
work two 7200 RPM IDE disks (on different buses, of course) effectively, 
and three provided a barely measurable boost.  I'd be surprised if you 
aren't able to easily max your CPU with ReiserFS on a polygraph run on 
these boxes...and it will probably happen at around 110 reqs/sec., 
assuming reasonable configuration of kernel and Squid.

I'm always interested in conflicting reports, however.  If you've
got a case that makes XFS faster for Squid against polygraph, I'd 
love to see the results and configuration.

I had a quick look at polygraph before, but I didn't get very far in 
testing it.  I would like to produce some polygraph figures for the 
proxies, so I will see what I can do to make a test system.  My only 
concern is that the proxies may be able to process requests faster 
than the polygraph hardware can serve them.
From memory there are a lot of options available for polygraph, and I 
was not sure how to produce meaningful results.  Any help would be 
appreciated.
I've got no magic formula for making Polygraph go.  I can say that it 
built easily on Linux and FreeBSD last time I tried on either platform, 
and the documentation that Alex wrote for preparing for the cacheoffs 
was very helpful in getting an environment that works for roughly 
replicating publicly available proxy performance numbers, including 
those that Duane, myself, and others have published for Squid.

Oh, and as for your concern that the Polygraph boxes might not be able 
to work your Squid hard enough to stress it...don't worry.  Polygraph 
has a lot less work to do than Squid, and Alex has done a nice job 
making it go plenty fast.  No single Squid instance will be a match for 
even a single polygraph box of equal hardware.  I've been able to use a 
laptop for /both/ sides of the polypair and successfully stress Squid on 
similar hardware to what you're testing (not recommended, since results 
from a single box polypair wouldn't be trustworthy or sanely comparable 
to a real pair of poly boxes--but in a pinch it will do).

Just some thoughts.  Performance has become progressively less important 
over the years as hardware has become so much faster.  I only see a few 
cases a year where we even need more than one Squid box, for anything 
other than redundancy (though sometimes the one Squid box is obscenely 
powerful).  So, my Squid performance testing has become an occasional 
hobby rather than a core interest...so new results for various 
configurations might surprise me just as much as anyone else.


Re: Linux filesystem speed comparison

2005-04-11 Thread Joe Cooper
Steven Wilton wrote:
The interesting thing is that this test shows that in a 2.6.10 kernel, XFS
is the clear winner for I/O wait, followed by ext3 writeback.  I was not
surprised to see reiser come off worse than ext3, as I have previously tried
to use reiser on our proxies (on a 2.2 kernel), and noticed that initially
the proxy was a lot quicker, but as the disk filled up, the cache
performance dropped.
I thought I'd post this to squid-dev for comments first, as I have read
other posts that say that squid+reiser is the recommended combination, and
was wondering if there are other tests that I should perform.
The only test I know of that accurately predicts how a proxy will 
perform when given real load is Polygraph.  And depending on the 
hardware configuration, either ext2/ext3 or reiserfs will easily 
outperform xfs.  In my experience, ReiserFS is a better performer 
assuming CPU is not a bottleneck.  But it is a much heavier user of CPU, 
and so some test results (like Duane's extensive benchmarks from a year 
or more ago) show ext2/3 performing measurably better than ReiserFS.  A 
Polymix-4 test will fill the cache twice and then begin the test...so it 
takes into account the decline in performance that hits all filesystems.

It depends on the balance of hardware, but I'd be extremely surprised if 
XFS performs better than either reiser or ext2/3 for Squid workloads on 
/any/ system.  So I have to assume your methodology is slightly flawed.
;-)

While I have found that ext3 (when configured correctly) has improved 
performance for Squid quite a bit over ext2, it is still no match for 
ReiserFS on our hardware, which always has more than enough CPU for the 
disk bandwidth available.  But, I can certainly imagine a hardware 
configuration that would lead to ext3 performing better than ReiserFS 
(especially since Duane has proven that it is possible by putting 6 
10,000 RPM disks on a relatively wimpy CPU and testing the configuration 
extensively with polygraph).

I'm always interested in conflicting reports, however.  If you've got a 
case that makes XFS faster for Squid against polygraph, I'd love to see 
the results and configuration.


Re: originserver plus carp configuration?

2005-04-08 Thread Joe Cooper
Joe Cooper wrote:
Now that httpd accel hosts are cache_peers I can use cache_peer_access 
to make the distribution decisions...Don't know why I didn't think of 
that, as I've used it in the past for similar purposes.
And cache_peer_access is broken in 3.0, due to bug 1201.  Sigh.
Anyone wanna make a quick bit of cash to fix it?  ;-)


Re: originserver plus carp configuration?

2005-04-08 Thread Joe Cooper
Henrik Nordstrom wrote:
On Thu, 7 Apr 2005, Joe Cooper wrote:
CARP balances based on a hash of the destination URL, not client.

Hmmm...that raises a different question: How does one address the 
issue of maintaining client stickiness?

It doesn't. CARP is designed for routing requests to a cloud/array of 
parent proxy cache servers with minimal duplication of cache content.
Nevermind.  One shouldn't configure Squid while tired.  ;-)
Now that httpd accel hosts are cache_peers I can use cache_peer_access 
to make the distribution decisions...Don't know why I didn't think of 
that, as I've used it in the past for similar purposes.
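
To illustrate the idea (a sketch only: the two peers are from the config 
quoted later in this thread, the domains and ACL names are invented, and 
the exact option set should be checked against the squid.conf documentation):

acl site_a dstdomain .a.example.com
acl site_b dstdomain .b.example.com

cache_peer 192.168.1.47 parent 80 7 originserver no-query
cache_peer 192.168.1.48 parent 80 7 originserver no-query
cache_peer_access 192.168.1.47 allow site_a
cache_peer_access 192.168.1.47 deny all
cache_peer_access 192.168.1.48 allow site_b
cache_peer_access 192.168.1.48 deny all

Each request is then routed to the peer whose ACL matches the request's 
destination domain, which is the "distribution decision" described above.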

That said, http://devel.squid-cache.org/rproxy/backend.html#balance 
suggests that some mechanism for load balancing based on client exists 
in rproxy, though I don't see anything relevant in the configuration 
options.  (This item is marked "(done)" in the todo list.)


Re: originserver plus carp configuration?

2005-04-07 Thread Joe Cooper
Thanks for the rapid response, Henrik.
Henrik Nordstrom wrote:
On Thu, 7 Apr 2005, Joe Cooper wrote:
The whole configuration is working, except for load balancing.  
Without "carp" I always get FIRST_UP_PARENT/192.168.1.47.  With "carp" 
I always get CARP/192.168.1.48, no matter what IP I'm coming from (and 
I tried a half dozen client IPs to be sure I wasn't just 
coincidentally always hashing to the same destination).

CARP balances based on a hash of the destination URL, not client.
Hmmm...that raises a different question: How does one address the issue 
of maintaining client stickiness?

You can get quite detailed tracing of the CARP hashing by enabling debug 
section 39,9, combined with the cachemgr carp section.
Excellent.  Thanks for the tip.
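
As a rough illustration of why the client address cannot matter here, a toy 
version of URL-hash parent selection (deliberately simplified; real CARP 
combines member hashes and load factors per the IETF draft, this just shows 
the idea):

#!/usr/bin/perl -w
# Toy CARP-style selection: the parent is a pure function of the URL,
# so every client requesting the same URL reaches the same parent.
# The hash below is a simple rolling hash, not the real CARP math.
use strict;

my @parents = ('192.168.1.47', '192.168.1.48');

sub pick_parent {
    my ($url) = @_;
    my $h = 0;
    $h = ($h * 33 + ord($_)) % 2**32 for split //, $url;
    return $parents[$h % @parents];
}

# Prints the same parent on every run, regardless of which client asks:
print pick_parent('http://domain.com/index.html'), "\n";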


originserver plus carp configuration?

2005-04-07 Thread Joe Cooper
Hey Henrik and all,
I've got a reverse proxy running Squid with the following cache_peer 
configuration:

cache_peer 192.168.1.47 parent 80 7 originserver no-query carp
cache_peer 192.168.1.48 parent 80 7 originserver no-query carp
cache_peer_domain 192.168.1.47 .domain.com
cache_peer_domain 192.168.1.48 .domain.com
The cache_peer_domain settings are there because we have 6 back-end 
servers serving 3 domains--two servers for each domain.

The whole configuration is working, except for load balancing.  Without 
"carp" I always get FIRST_UP_PARENT/192.168.1.47.  With "carp" I always 
get CARP/192.168.1.48, no matter what IP I'm coming from (and I tried a 
half dozen client IPs to be sure I wasn't just coincidentally always 
hashing to the same destination).

What am I doing wrong?
Thanks!


Re: log the reason of TCP_DENIED/403

2005-03-31 Thread Joe Cooper
[EMAIL PROTECTED] wrote:
If a user gets TCP_DENIED/403 because of a blacklist, it is often hard work 
to find out which entry of the blacklist caused this error. Because squid 
knows this entry it should log it into cache.log or somewhere else 
("TCP_DENIED/403/http://www.badsite.com"). I guess for a specialist this is 
quite easy to code, isn't it? 
For a specialist being paid his normal hourly rate, perhaps.  ;-)
It would be very nice if I could see this logging in Squid 2.5 STABLE10 ;-)
Well, it's good that you're so patient.
I doubt any new features will be going into 2.5.STABLE10.  It is a 
STABLE branch...new releases are bug fixes and corrections of serious 
misfeatures (like Henrik's recent 2GB size limit on 32 bit platforms fix).
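
Until something like that exists inside Squid, one workaround (an untested 
sketch, not code from this thread) is to do the blacklist lookup in an 
external ACL helper, which is free to log the matching entry itself; a 
helper's stderr ends up in cache.log:

#!/usr/bin/perl -w
# Sketch: external ACL helper that matches URLs against a blacklist
# and logs WHICH entry matched. The file path and one-substring-per-line
# format are assumptions.
use strict;

my $listfile = '/etc/squid/blacklist.txt';
open(my $fh, '<', $listfile) or die "cannot read $listfile: $!";
chomp(my @bad = <$fh>);
@bad = grep { length } @bad;          # ignore blank lines
close($fh);

$| = 1;
while (my $url = <STDIN>) {
    chomp $url;
    my ($hit) = grep { index($url, $_) >= 0 } @bad;
    if (defined $hit) {
        warn "DENIED $url matched blacklist entry '$hit'\n";
        print "OK\n";    # ACL matches, so "http_access deny" fires
    } else {
        print "ERR\n";
    }
}

Hooked up with something along the lines of "external_acl_type blcheck %URI 
/usr/local/bin/blcheck.pl" and "http_access deny blcheck" (which format 
tokens are available varies by Squid version, so check your squid.conf 
documentation).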


Re: ESI

2005-03-16 Thread Joe Cooper
Tails of the relevant log files:
[EMAIL PROTECTED] logs]$ tail access.log
1110468354.979      0 213.241.34.210 TCP_REFRESH_HIT/200 3245 GET http://localhost:8080/rpctlinks/script/content.css - NONE/- text/css
1110468356.935      0 213.241.34.210 TCP_REFRESH_HIT/200 3227 GET http://localhost:8080/rpctlinks/images/CERNLogo.png - NONE/- image/png
1110468356.935      0 213.241.34.210 TCP_MISS/503 2422 GET http://localhost:8080/favicon.ico - NONE/- text/html
1110468474.664      2 255.255.255.255 TCP_REFRESH_HIT/200 0 GET http://localhost:8080/rpctlinks/esi/header.html - NONE/- text/html
1110468474.664      2 255.255.255.255 TCP_REFRESH_HIT/200 0 GET http://localhost:8080/rpctlinks/esi/leftmenu.html - NONE/- text/html
1110468474.665      1 255.255.255.255 TCP_REFRESH_HIT/200 0 GET http://localhost:8080/rpctlinks/esi/body.html - NONE/- text/html
1110468476.669    363 213.241.34.210 TCP_REFRESH_HIT/200 3245 GET http://localhost:8080/rpctlinks/script/content.css - NONE/- text/css
1110468476.670   2227 213.241.34.210 TCP_REFRESH_HIT/200 16632 GET http://localhost:8080/rpctlinks/esi/template.html - NONE/- text/html
1110468478.462    566 213.241.34.210 TCP_REFRESH_HIT/200 3227 GET http://localhost:8080/rpctlinks/images/CERNLogo.png - NONE/- image/png
1110468481.464      0 213.241.34.210 TCP_MISS/503 2422 GET http://localhost:8080/favicon.ico - NONE/- text/html



[EMAIL PROTECTED] logs]$ tail cache.log
2005/03/10 16:27:56|  never_direct = 0
2005/03/10 16:27:56|  timedout = 0
2005/03/10 16:27:57| Failed to select source for 'http://localhost:8080/rpctlinks/images/CERNLogo.png'
2005/03/10 16:27:57|  always_direct = 0
2005/03/10 16:27:57|  never_direct = 0
2005/03/10 16:27:57|  timedout = 0
2005/03/10 16:28:01| Failed to select source for 'http://localhost:8080/favicon.ico'
2005/03/10 16:28:01|  always_direct = 0
2005/03/10 16:28:01|  never_direct = 0
2005/03/10 16:28:01|  timedout = 0

[EMAIL PROTECTED] logs]$ tail store.log
1110468354.979 RELEASE -1 608899516D23CFDF7FE69465AEEDA83D 503 1110468354 0 1110468354 text/html 2254/2254 GET http://localhost:8080/rpctlinks/script/content.css
1110468356.934 RELEASE -1 F2BE2CCA30C79B49138C879925B75950 503 1110468356 0 1110468356 text/html 2258/2258 GET http://localhost:8080/rpctlinks/images/CERNLogo.png
1110468356.935 RELEASE -1 A4D18CABC953D3A6081E7A56D2FACFD7 503 1110468356 0 1110468356 text/html 2049/2049 GET http://localhost:8080/favicon.ico
1110468474.444 RELEASE -1 5EF4E255EBFCBB62D1BE8C2D92836B4B 503 1110468474 0 1110468474 text/html 2271/2271 GET http://localhost:8080/rpctlinks/esi/template.html
1110468474.662 RELEASE -1 7FA3985250F9D971AC5AD880FA862B80 503 1110468474 0 1110468474 text/html 2208/2208 GET http://localhost:8080/rpctlinks/esi/header.html
1110468474.663 RELEASE -1 4AE40C3C89D2C156F4DD9DFD6FBD873D 503 1110468474 0 1110468474 text/html 2214/2214 GET http://localhost:8080/rpctlinks/esi/leftmenu.html
1110468474.664 RELEASE -1 FD26621635007DE60A100E2D388F27F8 503 1110468474 0 1110468474 text/html 2202/2202 GET http://localhost:8080/rpctlinks/esi/body.html
1110468476.306 RELEASE -1 2C51D336BEB8A9E8A1BEE1E9066548F5 503 1110468476 0 1110468476 text/html 2254/2254 GET http://localhost:8080/rpctlinks/script/content.css
1110468477.897 RELEASE -1 240F5EBB733E41EB2C249A815E85ADF3 503 1110468477 0 1110468477 text/html 2258/2258 GET http://localhost:8080/rpctlinks/images/CERNLogo.png
1110468481.464 RELEASE -1 4C96709C9E96380F544019B134C89F6D 503 1110468481 0 1110468481 text/html 2049/2049 GET http://localhost:8080/favicon.ico

I also set up a test server so that squid developers can test my setup.
The template page can be accessed directly from the JBoss at:
http://212.87.7.89:8080/rpctlinks/esi/template.html
and through squid at
http://212.87.7.89:8081/rpctlinks/esi/template.html
I would be very grateful for some help.
Michal Pietrusinski

Joe Cooper wrote:
Michal Pietrusinski wrote:
After compiling squid (which was not a simple task, as there are 
conflicts between expat and libxml2), squid does not cache anything 
but stylesheets and images.

I don't have an opinion on whether ESI works, as I haven't tried it 
(but I built it for a customer, and I suspect they will be trying it 
soon if they haven't already), but I didn't have any problems building 
it.  I did have to explicitly add the include flag pointing to 
/usr/include/libxml2, since that's where it hides on my Fedora Core 3 
system, but I wouldn't say it wasn't simple to build.  And that's 
always been a requirement...similarly there was a Kerberos library 
issue in the past.  Unavoidable side-effects of operating systems 
making incompatible changes in their include file locations.
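
For reference, the sort of invocation I mean looks roughly like this (a 
sketch; the ESI configure flag and the libxml2 include path vary by Squid 
snapshot and distribution):

./configure --enable-esi CPPFLAGS="-I/usr/include/libxml2"
make && make install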

Then again, I don't have expat installed on my system, as far as I 
know.  Perhaps forwarding your build errors will help get it fixed, if 
there is a problem in the Squid build process.  Since ESI will be 
included in Squid 3.0 STABLE, I imagine everyone would like to see ESI 
support build cleanly on any system that meets the dependencies.




Re: Limiting bandwidth rates on certain files

2004-05-05 Thread Joe Cooper
It is close enough to working that if you have some development skills, 
or the inclination to hire someone who does, it could be working in a 
few days.

I would not suggest it for production use in its current state.  It will 
require a developer to make it go.  Testing it won't do any good without 
someone to adopt the code and work on it a bit.  ;-)

Xavier Baez wrote:
Dear Joe

Could you please tell me if this solution is working already?

I would really, really like to test it

Regards


S. A. Tech Department


Joe Cooper wrote:

Henrik Nordstrom wrote:

On Sat, 1 May 2004, Xavier Baez wrote:


My question is this. If I use Squid as an http accelerator, could I 
configure it so that it will limit transfer rates of certain files?




Delay pools are not applicable to accelerator setups due to their 
design of limiting how fast Squid reads data from the server, not how 
fast Squid delivers data to clients.

For your situation another variant of shaping is needed, and some C 
coding is required to have this implemented in Squid.


There is a branch of this for 3.0.  Robert did the work about 9 months 
ago.  It could probably be brought up to date without too much pain, 
if someone takes the time to do it.  I think it is still in Robert's 
pseudo-private arch repository.  (Note: I don't plan to work on it, or 
spend any time/money on it at the moment and I'm pretty sure Robert 
doesn't either, but if anyone wants to pick up the project, I'll make 
sure you get the last revision I have.)





Re: Limiting bandwidth rates on certain files

2004-05-04 Thread Joe Cooper
Henrik Nordstrom wrote:
On Sat, 1 May 2004, Xavier Baez wrote:


My question is this. If I use Squid as an http accelerator, could I 
configure it so that it will limit transfer rates of certain files?


Delay pools are not applicable to accelerator setups due to their design of 
limiting how fast Squid reads data from the server, not how fast Squid 
delivers data to clients.

For your situation another variant of shaping is needed, and some C coding 
is required to have this implemented in Squid.
There is a branch of this for 3.0.  Robert did the work about 9 months 
ago.  It could probably be brought up to date without too much pain, if 
someone takes the time to do it.  I think it is still in Robert's 
pseudo-private arch repository.  (Note: I don't plan to work on it, or 
spend any time/money on it at the moment and I'm pretty sure Robert 
doesn't either, but if anyone wants to pick up the project, I'll make 
sure you get the last revision I have.)


Re: generic content encoding and gzip support

2004-02-26 Thread Joe Cooper
Henrik Nordstrom wrote:
On Thu, 26 Feb 2004, Jon Kay wrote:

Welcome back!


Joe would like me to merge this stuff with squid3 HEAD when it's
working right.  Please let me know if you guys see any problem with
that.


I don't see any problem once HEAD is opened again. Currently in extended
feature freeze for preparing the 3.0 release.
That is understood.  That's why I'm hoping to get some additional hands 
on stabilizing 3.0 (budget-permitting).
--
Joe Cooper <[EMAIL PROTECTED]>
Web caching appliances and support.
http://www.swelltech.com


Re: Feature Request for 3 release

2003-06-25 Thread Joe Cooper
Henrik Nordstrom wrote:

To this I agree, but there are technical reasons making it not that 
suitable to do within Squid.

What is a viable approach is to add a second database for this purpose 
in parallel to Squid, keeping track of the URLs in the cache. This 
database can be built automatically by tracking the store.log log 
file and feeding the data into a database of choice. For tracking the 
store.log file the Perl File::Tail module is very suitable, but some 
database design is probably needed to get a database which can be 
searched in interesting ways.
I wrote a Perl utility to do this for a client about a year ago.  It 
records to an SQLite database and works fine for what it does.

However, we ended up using a different method of maintaining their cache 
(we triggered purges on site object update, rather than purging 
subdirectories), and so the development was never finished.  It will 
need work to be useful, and I simply don't have time to look at it at 
the moment--but I'll be happy to forward it to anyone who wants to work 
on it.  (Really, believe me when I say you will have to code to make it 
useful for you.  But the database maintenance code is complete and 
reliable.)
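
For anyone picking this up, a minimal sketch of the approach Henrik 
describes (store.log field positions match the log excerpts elsewhere in 
this archive; the schema, paths, and the choice to key on SWAPOUT/RELEASE 
are illustrative, not the original utility):

#!/usr/bin/perl -w
# Follow store.log with File::Tail and keep a table of currently
# cached URLs in SQLite. Sketch only; paths and schema are assumptions.
use strict;
use File::Tail;
use DBI;

my $dbh = DBI->connect('dbi:SQLite:dbname=/var/spool/squid/urls.db',
                       '', '', { RaiseError => 1 });
$dbh->do('CREATE TABLE IF NOT EXISTS cached (url TEXT PRIMARY KEY)');

my $tail = File::Tail->new(name => '/var/log/squid/store.log',
                           maxinterval => 5);
while (defined(my $line = $tail->read)) {
    # store.log: time action fileno key status date lastmod expires
    #            content-type sizes method URL
    my @f = split ' ', $line;
    my ($action, $url) = ($f[1], $f[-1]);
    if ($action eq 'SWAPOUT') {        # object written to the cache
        $dbh->do('INSERT OR REPLACE INTO cached VALUES (?)', undef, $url);
    } elsif ($action eq 'RELEASE') {   # object removed from the cache
        $dbh->do('DELETE FROM cached WHERE url = ?', undef, $url);
    }
}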

It is my opinion that the 'slow' purge tool you have referenced is more 
useful for general use.
--
Joe Cooper <[EMAIL PROTECTED]>
Web caching appliances and support.
http://www.swelltech.com



Re: Mysterious CPU eater in 3.0

2003-06-04 Thread Joe Cooper
One more data point:

Reducing disk/memory usage has no impact--the problem still occurs, 
possibly less frequently.  The machines have ample memory at 2GB and 
even with the 60GB cache_dir and 256MB cache_mem, there is 700+MB of 
available memory.

The disk I/O type has no impact.  aufs and ufs are both similarly affected.

I kind of suspect server-side network issues, as it seemingly happens 
more frequently when the byte hit ratio is extremely bad (due to 
range_offset_limit and quick_abort_min being set to -1).
--
Joe Cooper <[EMAIL PROTECTED]>
Web caching appliances and support.
http://www.swelltech.com



Mysterious CPU eater in 3.0

2003-06-04 Thread Joe Cooper
Hi all,

I've stumbled onto an interesting issue with 3.0.  I'm pretty sure the 
trigger is very large files (mean object size of 5MB on a cache_dir of 
60GB--some files over 100MB, many around 20MB) being served very fast to 
a few (~50) simultaneous clients.  It is not a particularly hard 
workload, though the single IDE disk is pushing a goodly amount of 
data--it would hold up fine under 2.5.

CPU load is fine at ~50% idle, until something seems to 'snap' in Squid 
and CPU hits 100% usage and stays there for minutes or an hour or more, 
until either a crash with an assertion failure (below), or occasionally 
it hangs on and eventually recovers (very rare, and maybe I even 
imagined it--I might have missed the crash in the chattiness of the 
logs).  During this time, data rates drop to almost nothing 
(2-3Kbytes/sec, as opposed to potentially MB/sec).

The assertion failure:

2003/06/03 03:16:22| assertion failed: ../include/Array.h:298: "theVector"

I think the assertion failure is triggered by the CPU hogging bug rather 
than the assertion being caused directly by the same problem.  But I 
could be wrong...I'm just thinking that the CPU peak and the assertion 
failure never come at the same time, and always at somewhat randomized 
intervals apart.

Robert suggested I take a look at it with the cpu-profile configure 
option, and so I did...Everything looks mostly normal in the 1 and 5 
second averages, in that comm_poll_normal takes 85% for the 1 second 
average and 55% for the 5 second, with everything else divided up into 
tiny little pieces.  But then for the 30 sec and 1 min averages, 
PROF_UNACCOUNTED gets 92% and everything else just gets a few 
scraps, including comm_poll_normal.

Other possible data points:
range_offset_limit and quick_abort_min, when set to -1, seem to cause 
the bug to happen more frequently (but maybe traffic was higher before I 
turned these off).

Anyone have a clue where my CPU is going?  Any thoughts on what this bug 
is about?  (I have a nice little budget for killing this bug fast, if 
anyone wants to make a quick dig into it.)
--
Joe Cooper <[EMAIL PROTECTED]>
Web caching appliances and support.
http://www.swelltech.com



Re: HEAD snapshots broken?

2003-05-31 Thread Joe Cooper
Henrik Nordstrom wrote:
On Saturday 31 May 2003 11.00, Joe Cooper wrote:

The last 3.0 HEAD snapshot is from 27-May-2003.  Might be a problem
there?


Checking...

x - extracting anthony-xpm.gif (binary)
mkdir ../squid-3.0.DEVEL-20030531/contrib/nextstep
make: don't know how to make crtpe_md5.h. Stop
*** Error code 1
Thanks!
--
Joe Cooper <[EMAIL PROTECTED]>
Web caching appliances and support.
http://www.swelltech.com


HEAD snapshots broken?

2003-05-31 Thread Joe Cooper
The last 3.0 HEAD snapshot is from 27-May-2003.  Might be a problem there?
--
Joe Cooper <[EMAIL PROTECTED]>
Web caching appliances and support.
http://www.swelltech.com


Re: Introduction / accelerator feature ideas

2003-02-21 Thread Joe Cooper
Flemming Frandsen wrote:
Henrik Nordstrom wrote:

On Thursday 20 February 2003 22.53, Flemming Frandsen wrote:
reply-to is not set. This is intentional. Just remember to hit the 
"reply to all" button when responding to messages on the mailing list and 
everything is fine.


Actually, you end up with a mail to both the poster and the list, which 
is a bit silly (IMHO) as the poster is subscribed to the list; hitting 
reply will only reply to the poster and annoy the crap out of the other 
readers that never get the answer to that interesting question that the 
poster had...
http://www.unicom.com/pw/reply-to-harmful.html

Let's avoid entering this particular flame war here on the dev list. 
Some mail clients are smarter than others...If using a dumber one (I 
myself use one that doesn't have a good means of responding to lists) we 
just have to accept the slight inconvenience of either replying to both 
list and sender, or taking the time to remove the original sender from 
the To field.  It is worth the trouble in order to avoid losing 
information through reply-to munging.

For what it's worth, I know for a fact that none of the folks here will 
be offended if you take the easy way out and reply-to-all whenever 
posting.  We don't mind getting a second copy every now and then.
--
Joe Cooper <[EMAIL PROTECTED]>
Web caching appliances and support.
http://www.swelltech.com



Re: boo!

2003-02-19 Thread Joe Cooper
Adrian Chadd wrote:

Hi,

After yet another long break I'm kind of back.
Expect bits and pieces of things to start coming out..
:)


It's always good to see you're alive, Adrian.  Welcome back.
--
Joe Cooper <[EMAIL PROTECTED]>
Web caching appliances and support.
http://www.swelltech.com




Re: 2.5.STABLE2?

2003-02-10 Thread Joe Cooper
Robert Collins wrote:

On Mon, 2003-02-10 at 09:00, Henrik Nordstrom wrote:



The headers only work with Samba-2.2.4 and 2.2.5 (June 2002), but not
with 2.2.6 (Oct 2002) or 2.2.7 (Nov 2002, security update) or 2.2.7a
(Dec 2002), and almost certainly won't work with 2.2.8 either when
released..



Well, like I said, I won't object to someone doing the right thing for
2.5. I've said my bit.

3.0 should definitely have this fixed before release though.


I have a humble suggestion for compromise to get STABLE2 out now rather 
than later:

Include the Samba 2.2.7 headers, as these will work with the majority of 
OS versions that are up to date on their errata.  Earlier versions have 
security bugs, and so will not be in use on any system that is 
well-maintained.  2.2.8 doesn't exist yet.  I can't say whether this 
works with 2.2.7a, but I would hope it would.  If folks are running CVS 
checkouts of devel versions of Samba, they ought to know how to patch. 
If necessary due to frequent problem reports, I'll be happy to modify 
the FAQ to indicate this decision and its reasons.
--
Joe Cooper <[EMAIL PROTECTED]>
Web caching appliances and support.
http://www.swelltech.com



Re: Nevermind...Re: aufs/aiops.c:312: `yes' undeclared (first use in this function)

2003-02-04 Thread Joe Cooper
Yep.  It was an old spec in which I had been experimenting...I didn't 
notice I had the extraneous --with-aufs-threads configure option.

Thanks.

Henrik Nordstrom wrote:
--with-aufs-threads without specifying how many threads?

If so we should perhaps add a trap there, asking the user to provide
correct arguments..

Regards
Henrik


Joe Cooper wrote:


Oops...Ignore my previous two posts.

Configuration error in my RPM spec file was to blame.

Joe Cooper wrote:


BTW-I'm referring to the squid-2.5.STABLE1.20030204.tar.gz snapshot.

Joe Cooper wrote:



Hi all,

Looks like something is broken in aufs in the latest daily snapshot.  I
haven't built a snapshot since early December of last year, so I don't
know precisely when this showed up.

I get the following on both of my build machines (Red Hat 8.0 and 7.2
with gcc3 and 2.96, respectively):

aufs/aiops.c: In function `squidaio_init':
aufs/aiops.c:312: `yes' undeclared (first use in this function)
aufs/aiops.c:312: (Each undeclared identifier is reported only once
aufs/aiops.c:312: for each function it appears in.)
aufs/aiops.c: In function `squidaio_queue_request':
aufs/aiops.c:492: `yes' undeclared (first use in this function)
make[4]: *** [aufs/aiops.o] Error 1





--
Joe Cooper <[EMAIL PROTECTED]>
Web caching appliances and support.
http://www.swelltech.com



--
Joe Cooper <[EMAIL PROTECTED]>
Web caching appliances and support.
http://www.swelltech.com




Nevermind...Re: aufs/aiops.c:312: `yes' undeclared (first use in this function)

2003-02-04 Thread Joe Cooper
Oops...Ignore my previous two posts.

Configuration error in my RPM spec file was to blame.

Joe Cooper wrote:

BTW-I'm referring to the squid-2.5.STABLE1.20030204.tar.gz snapshot.

Joe Cooper wrote:


Hi all,

Looks like something is broken in aufs in the latest daily snapshot.  I 
haven't built a snapshot since early December of last year, so I don't 
know precisely when this showed up.

I get the following on both of my build machines (Red Hat 8.0 and 7.2 
with gcc3 and 2.96, respectively):

aufs/aiops.c: In function `squidaio_init':
aufs/aiops.c:312: `yes' undeclared (first use in this function)
aufs/aiops.c:312: (Each undeclared identifier is reported only once
aufs/aiops.c:312: for each function it appears in.)
aufs/aiops.c: In function `squidaio_queue_request':
aufs/aiops.c:492: `yes' undeclared (first use in this function)
make[4]: *** [aufs/aiops.o] Error 1






--
Joe Cooper <[EMAIL PROTECTED]>
Web caching appliances and support.
http://www.swelltech.com




Re: aufs/aiops.c:312: `yes' undeclared (first use in this function)

2003-02-04 Thread Joe Cooper
BTW-I'm referring to the squid-2.5.STABLE1.20030204.tar.gz snapshot.

Joe Cooper wrote:

Hi all,

Looks like something is broken in aufs in the latest daily snapshot.  I 
haven't built a snapshot since early December of last year, so I don't 
know precisely when this showed up.

I get the following on both of my build machines (Red Hat 8.0 and 7.2 
with gcc3 and 2.96, respectively):

aufs/aiops.c: In function `squidaio_init':
aufs/aiops.c:312: `yes' undeclared (first use in this function)
aufs/aiops.c:312: (Each undeclared identifier is reported only once
aufs/aiops.c:312: for each function it appears in.)
aufs/aiops.c: In function `squidaio_queue_request':
aufs/aiops.c:492: `yes' undeclared (first use in this function)
make[4]: *** [aufs/aiops.o] Error 1



--
Joe Cooper <[EMAIL PROTECTED]>
Web caching appliances and support.
http://www.swelltech.com




aufs/aiops.c:312: `yes' undeclared (first use in this function)

2003-02-04 Thread Joe Cooper
Hi all,

Looks like something is broken in aufs in the latest daily snapshot.  I 
haven't built a snapshot since early December of last year, so I don't 
know precisely when this showed up.

I get the following on both of my build machines (Red Hat 8.0 and 7.2 
with gcc3 and 2.96, respectively):

aufs/aiops.c: In function `squidaio_init':
aufs/aiops.c:312: `yes' undeclared (first use in this function)
aufs/aiops.c:312: (Each undeclared identifier is reported only once
aufs/aiops.c:312: for each function it appears in.)
aufs/aiops.c: In function `squidaio_queue_request':
aufs/aiops.c:492: `yes' undeclared (first use in this function)
make[4]: *** [aufs/aiops.o] Error 1

--
Joe Cooper <[EMAIL PROTECTED]>
Web caching appliances and support.
http://www.swelltech.com