Re: [squid-users] Squid on DualxQuad Core 8GB Rams - Optimization - Performance - Large Scale - IP Spoofing

2007-10-16 Thread Michel Santos
Adrian Chadd said in the last message:
 On Tue, Oct 16, 2007, Paul Cocker wrote:
 For the ignorant among us can you clarify the meaning of devices?

 Bluecoat. Higher end Cisco ACE appliances/blades. In the accelerator
 space,
 stuff like what became the Juniper DX can SLB and cache about double what
 squid can in memory.


Oh really? How much would that be? Do you have a number, or is it just talk?


 Just so you know, the Cisco Cache Engine stuff from about 8 years ago
 still beats Squid for the most part. I remember seeing numbers of
 ~ 2400 req/sec, to/from disk where appropriate, versus Squid's current
 maximum throughput of about 1000. And this was done on Cisco's -then-
 hardware - I think that test was what, dual PIII 800's or something?
 They were certainly pulling about 4x the squid throughput for the same
 CPU in earlier polygraphs.



I am not so sure that this 2400 req/sec wasn't actually per minute, and
whether it counted hits served from cache or only incoming requests ...

I'll pay you a beer or even two if you show me a PIII-class device which
can satisfy 2400 req/sec from disk.



 I keep saying - all this stuff is documented and well-understood.
 How to make fast network applications - well understood. How to have
 network apps scale well under multiple CPUs - well understood, even better
 by the Windows people. Cache filesystems - definitely well understood.



Well, it is not only well understood but also well known that a Ferrari
runs faster than the famous John Doe-mobile - but the price issue is just
as well known, and even if it is well documented, it makes no sense at all
to compare the two.



Squid does a pretty good job, not only in getting high hit rates but
especially considering the price.

Unfortunately Squid is not a multi-threaded application, which by the way
does not prevent you from running several instances as a workaround.
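To illustrate that workaround (a minimal sketch; the ports, paths and file
names here are only examples): give each instance its own config file,
differing in http_port, cache_dir, pid_filename and the log files, and
start each one with its own -f:

# squid-1.conf
http_port 3128
pid_filename /usr/local/squid/var/logs/squid1.pid
cache_dir diskd /cache1 20000 16 256
access_log /usr/local/squid/var/logs/access1.log squid

# squid-2.conf is the same except for http_port 3129, /cache2,
# squid2.pid and access2.log

squid -f /usr/local/squid/etc/squid-1.conf
squid -f /usr/local/squid/etc/squid-2.conf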

Unfortunately again, diskd is kind of orphaned, but it certainly is _the_
choice for SMP machines, by design, and even more so when running several
diskd processes per Squid process.
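Each diskd cache_dir line spawns its own diskd helper process, so several
lines spread the disk I/O over several processes (and ideally spindles);
for example (sizes and Q1/Q2 values are illustrative only):

cache_dir diskd /cache1 20000 16 256 Q1=64 Q2=72
cache_dir diskd /cache2 20000 16 256 Q1=64 Q2=72
cache_dir diskd /cache3 20000 16 256 Q1=64 Q2=72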


Again unfortunately, people are told that Squid is not SMP capable and
that there is no advantage in using SMP machines for it, so they configure
their machines to death on single dies with 1 or 2 MB of cache and get
nothing out of it. So where does it end??? Easy answer: Squid becomes a
proxy for NATting corporate networks or poor ISPs which do not have
address space - *BUT NOT* a caching machine anymore.

Fortunately, though, it is true that caching performance is first of all a
matter of fast hardware.

So that you can see something and not only read the usual blah-blah, I
attach a well-known mrtg graph of the hit rate of a dual Opteron sitting
in front of a 4MB/s ISP POP.

And on larger POPs I get considerably more hits than what you claimed at
the beginning - so I do not know where you get your Squid limit of 1000
req/sec from ... must be from your P-III goody ;)


But then, in the end, the current Squid marketing is pretty bad: nobody
talks about caching, everybody talks about proxying, authenticating and
ACLs. Even the makers are not defending caching at all, and apparently
they are no friends of running Squid as a multi-instance application
either, because the documentation about it is very poor and sad.


Probably this is an answer to current demands, and so they go with the
crowd: bandwidth is very cheap almost everywhere, so why should people
spend their brains and bucks on caching techniques? Unfortunately my
bandwidth is expensive, and I am not interested in proxying or any other
feature, so perhaps my situation and position are different and not the
same as elsewhere.

Michel

attachment: squid0-hit-day.png

Re: [squid-users] 2.6-16 compile error on freebsd

2007-10-15 Thread Michel Santos

Thomas-Martin Seck said in the last message:
 * Michel Santos ([EMAIL PROTECTED]):


 
  I get a compile error with squid-2.6-STABLE-16 as follows
 

 ...

  ./cf_gen cf.data ./cf.data.depend
  *** Signal 10
 
  Stop in /usr/local/squid/squid-2.6.STABLE16/src.
  *** Error code 1
 
 


 is it possibly a compiler problem?

 This is a bug in cf_gen that only manifests itself on FreeBSD 7
 (either because the new malloc implementation handles things
 differently in general or because its internal debugging code was
 active until FreeBSD-7 was officially branched in CVS). Please look
 at http://www.squid-cache.org/Versions/v2/2.6/changesets/ for the
 patch to fix this. [Shameless plug: or just use the port, it contains
 the fix.]





Thank you. I don't know why I didn't see it myself; I looked over that
page before.

Anyway, it works, thanks.

Michel







Re: [squid-users] 2.6-16 compile error on freebsd

2007-10-14 Thread Michel Santos


 I get a compile error with squid-2.6-STABLE-16 as follows


...

 ./cf_gen cf.data ./cf.data.depend
 *** Signal 10

 Stop in /usr/local/squid/squid-2.6.STABLE16/src.
 *** Error code 1




Is it possibly a compiler problem?

gcc 4.2.1 is the only difference I can find on FreeBSD 7 (on the FreeBSD 6
machines with gcc 3.4.6 it compiles fine).

On the other hand, Squid compiled with gcc 3.4.6 on FreeBSD 6 runs fine on
FreeBSD 7.



Michel




[squid-users] 2.6-16 compile error on freebsd

2007-10-12 Thread Michel Santos


I get a compile error with squid-2.6-STABLE-16 as follows

2.6-15 compiles normally


awk -f ./cf_gen_defines ./cf.data.pre cf_gen_defines.h
sed  [EMAIL PROTECTED]@%3128%g; [EMAIL PROTECTED]@%3130%g;
[EMAIL PROTECTED]@%/usr/local/squid/etc/mime.conf%g;
[EMAIL PROTECTED]@%/usr/local/squid/libexec/`echo dnsserver | sed
's,x,x,;s/$//'`%g; [EMAIL PROTECTED]@%/usr/local/squid/libexec/`echo
unlinkd | sed 's,x,x,;s/$//'`%g;
[EMAIL PROTECTED]@%/usr/local/squid/libexec/`echo pinger | sed
's,x,x,;s/$//'`%g; [EMAIL PROTECTED]@%/usr/local/squid/libexec/`echo
diskd-daemon | sed 's,x,x,;s/$//'`%g;
[EMAIL PROTECTED]@%/usr/local/squid/var/logs/cache.log%g;
[EMAIL PROTECTED]@%/usr/local/squid/var/logs/access.log%g;
[EMAIL PROTECTED]@%/usr/local/squid/var/logs/store.log%g;
[EMAIL PROTECTED]@%/usr/local/squid/var/logs/squid.pid%g;
[EMAIL PROTECTED]@%/usr/local/squid/var/cache%g;
[EMAIL PROTECTED]@%/usr/local/squid/share/icons%g;
[EMAIL PROTECTED]@%/usr/local/squid/share/mib.txt%g;
[EMAIL PROTECTED]@%/usr/local/squid/share/errors/Portuguese%g;
[EMAIL PROTECTED]@%/usr/local/squid%g; [EMAIL PROTECTED]@%/etc/hosts%g;
[EMAIL PROTECTED]@%2.6.STABLE16%g;  ./cf.data.pre cf.data
if gcc -DHAVE_CONFIG_H
-DDEFAULT_CONFIG_FILE=\/usr/local/squid/etc/squid.conf\ -I. -I.
-I../include -I. -I. -I../include -I../include -Wall -g -O2 -MT
cf_gen.o -MD -MP -MF .deps/cf_gen.Tpo -c -o cf_gen.o cf_gen.c;  then mv
-f .deps/cf_gen.Tpo .deps/cf_gen.Po; else rm -f .deps/cf_gen.Tpo;
exit 1; fi
if gcc -DHAVE_CONFIG_H
-DDEFAULT_CONFIG_FILE=\/usr/local/squid/etc/squid.conf\ -I. -I.
-I../include -I. -I. -I../include -I../include -Wall -g -O2 -MT
debug.o -MD -MP -MF .deps/debug.Tpo -c -o debug.o debug.c;  then mv -f
.deps/debug.Tpo .deps/debug.Po; else rm -f .deps/debug.Tpo; exit 1;
fi
/usr/bin/perl ./mk-globals-c.pl  ./globals.h  globals.c
if gcc -DHAVE_CONFIG_H
-DDEFAULT_CONFIG_FILE=\/usr/local/squid/etc/squid.conf\ -I. -I.
-I../include -I. -I. -I../include -I../include -Wall -g -O2 -MT
globals.o -MD -MP -MF .deps/globals.Tpo -c -o globals.o globals.c;  then
mv -f .deps/globals.Tpo .deps/globals.Po; else rm -f
.deps/globals.Tpo; exit 1; fi
gcc  -Wall -g -O2  -g -o cf_gen  cf_gen.o debug.o globals.o -L../lib
-lmiscutil -lm
./cf_gen cf.data ./cf.data.depend
*** Signal 10

Stop in /usr/local/squid/squid-2.6.STABLE16/src.
*** Error code 1



Here are the configure options I use:

./configure --enable-default-err-language=Portuguese \
--enable-storeio=diskd,ufs,null \
--enable-removal-policies=heap,lru --enable-underscores \
--disable-ident-lookups \
--disable-hostname-checks --enable-large-files \
--disable-http-violations \
--enable-snmp --enable-truncate \
--enable-external-acl-helpers=session \
--disable-wccp --disable-wccpv2 \
--enable-follow-x-forwarded-for \
--disable-linux-tproxy --disable-linux-netfilter --disable-epoll


Michel




Re: [squid-users] acl [NO] bug (when peers configured)

2007-09-01 Thread Michel Santos

Henrik Nordstrom said in the last message:
 On Fri, 2007-08-31 at 21:10 -0300, Michel Santos wrote:

 well, I was trying .. asking, begging 'endlessly' (well, almost) for six
 months, with logs, until I finally did that scary magic touch of /32 and
 bingo .. everything works

 And if you now remove the /32?


Just checked.

'Now' it is working.

When did the secret service fix it? I never saw a note.


michel





Re: [squid-users] acl [NO] bug (when peers configured)

2007-08-31 Thread Michel Santos

Henrik Nordstrom said in the last message:
 On Thu, 2007-08-30 at 08:27 -0300, Michel Santos wrote:

 *THIS* is the thing here: that any acl configured on the frontend cache
 is
 not beeing applied to any request from the peer

 Then check your http_access rules. You have something else in there...


Hey, thank you!

I found it: there was an extra 'http_access allow peer' above the acls in
two older frontend squids.

Looking this over, it means that when the IP address of any 'acl peer src $1'
matches the IP range of 'acl all src ip/mask', then I do not need to
specify an additional 'http_access deny peer we_acl' if 'http_access deny
all we_acl' is defined before it, right?
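To spell out what bit me: Squid evaluates http_access rules top-down and
the first matching line wins, so with the names from above (a sketch):

acl all src 200.152.80.0/20
acl peer src 200.152.83.40
acl we_acl urlpath_regex -i instal\.html

# this was the extra line: it matched the peer first,
# so the deny below was never reached for it
#http_access allow peer
http_access deny all we_acl

and since 200.152.83.40 is inside 200.152.80.0/20, the 'deny all we_acl'
line already covers the peer IP; no extra 'deny peer we_acl' is needed.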


michel






Re: [squid-users] How can i block this type of script

2007-08-31 Thread Michel Santos

jeff donovan disse na ultima mensagem:
 greetings

 i am using squidguard for content filtering.

 How can i block this type of script?

 http://www.softworldpro.com/demos/proxy/

 it's easy to block the url. but when the script is executed there is
 nothing in the url that will let me key in on.


What do you mean by 'let me key in on'?


 here is the regex I am using:

 #Block Cgiproxy, Poxy, PHProxy and other Web-based proxies
 (cecid.php|nph-webpr|nph-pro|/dmirror|cgiproxy|phpwebproxy|nph-
 proxy.cgi|__new_url)



Using Squid's own resources, in squid.conf you would do:

acl clients src 200.1.1.0/27

acl bla urlpath_regex cecid\.php
acl bla ...

http_access deny clients bla







Re: [squid-users] acl [NO] bug (when peers configured)

2007-08-31 Thread Michel Santos

Henrik Nordstrom said in the last message:
 On Fri, 2007-08-31 at 09:24 -0300, Michel Santos wrote:

  192.168.1.0/24 is the same as 192.168.1.0-192.168.1.255
 

 really ;)

 a range indicator is allowed?

 Yes.

I was asking about the dash '-'


 The full specification is

 IPA-IPB/MASK


Well, no need to teach a dog to bark ;)

 where IPB defaults to IPA if not specified, and /MASK defaults to /32 if
 not specified (at least unless you use a old now obsolete Squid version
 where it guesses the mask size based on the format of the IP...)


Well, I guess in 2.6 something is wrong at this particular point, unless
some secret work fixed it (I have not checked >14S). If you remember, this
was not working with any 2.6 when coming from a local address, but with
2.5 it was.

shortcut:

#on 127.0.0.2
acl peer src 127.0.0.1

gets 'access denied' for all requests from 127.0.0.1

#on 127.0.0.2
acl peer src 127.0.0.1/32

and 127.0.0.1 goes through ...


michel





Re: [squid-users] acl [NO] bug (when peers configured)

2007-08-31 Thread Michel Santos

Henrik Nordstrom said in the last message:
 On Fri, 2007-08-31 at 19:16 -0300, Michel Santos wrote:

 well, I guess in 2.6 something is wrong at this particular point, unless
 some secret work fixed it (I have not checked >14S). If you remember,
 this was not working with any 2.6 when coming from a local address, but
 with 2.5 it was.

 shortcut:

 #on 127.0.0.2
 acl peer src 127.0.0.1

 gets 'access denied' for all requests from 127.0.0.1

 #on 127.0.0.2
 acl peer src 127.0.0.1/32

 and 127.0.0.1 goes through ...

 Then I guess you must have changed something else as well. 127.0.0.1,
 127.0.0.1/32 and 127.0.0.1/255.255.255.255 are all equivalent and match
 the exact ip 127.0.0.1, and always have been..


Hmm, I haven't changed anything other than the Squid version.

 The magic autodetection of the mask size in earlier releases only kick
 in if the ip ends in .0, but was inconsistent and therefore removed...


This is what scares me to death: 'magic' ...

my observation:
magic starts where maths ends ... ;)

 There has not been any changes in this part of the code since 31 July
 2006 when the mask size detection was removed..


Well, I was trying .. asking, begging 'endlessly' (well, almost) for six
months, with logs, until I finally did that scary magic touch of /32 and
bingo .. everything works.


michel




Re: [squid-users] acl bug (when peers configured)

2007-08-30 Thread Michel Santos

Henrik Nordstrom said in the last message:
 On Thu, 2007-08-30 at 06:02 -0300, Michel Santos wrote:
 There is appearently an acl bug

 acls do not work for peers

 They do work for peers, just the same as any other http client. There is
 nothing special about peers in the access controls.

 acl all src 200.152.80.0/20

 Warning: Don't redefine the all acl unless you are very careful. It's
 used in a number of defaults and meant to match the whole world, and
 results can become a bit confusing if redefined...

 Instead define a mynetwork acl to match your clients..



I just did this, but it does not change the misbehaviour I described.


 acl danger urlpath_regex -i instal\.html
 http_access deny all danger
 #

 so far this works for all, I mean it blocks as wanted


 #
 acl all src 200.152.80.0/20
 acl peer src 200.152.83.40
 acl danger urlpath_regex -i instal\.html
 http_access deny all danger
 http_access deny peer danger

 Nothing obviously wrong, apart from the use of the all acl..

OK, in fact the 'acl all' is not the point and works anyway, despite your
observation; what is NOT working as supposed is 'acl peer ...' and its
following deny clause for the peer.



 does NOT when accessing directly from a browser from 200.152.83.40

 Should it? When going directly Squid is not used...

Well well ... directly from a browser, not as always_direct or something.

I mean here: when accessing the parent as a client. OK, since the frontend
cache is a transparent proxy, it catches/intercepts this connection and
should apply the acl, which it in fact does, as long as the IP is not part
of 'acl peer src'.

When I change the 'acl peer src' *IP*, then the acl works for this machine
as well as for all non-peer clients of the frontend cache.


*THIS* is the thing here: any acl configured on the frontend cache is not
being applied to requests from the peer.


michel





Re: [squid-users] acl bug (when peers configured)

2007-08-30 Thread Michel Santos

Henrik Nordstrom said in the last message:
 On Thu, 2007-08-30 at 08:27 -0300, Michel Santos wrote:

 *THIS* is the thing here: any acl configured on the frontend cache is
 not being applied to requests from the peer.

 Then check your http_access rules. You have something else in there...

 There is absolutely nothing special about peers in access controls. They
 are just HTTP clients just as any other HTTP client.


OK, then I will isolate a pair from the cluster at night and double-check
everything.
Thanks so far.

michel




Re: [squid-users] squid do the inverse of what it should do !!!!!!!!!!! help !!!!!!!!!

2007-08-27 Thread Michel Santos

pinky you said in the last message:
 finallyyy

 I figured it out, with your help of course

 It's not a squid issue; in fact my satellite provider
 NewSky has a defective Cisco interface at its site,
 which duplicates each packet I receive (for every request
 I send I receive a duplicate answer).
 I called the provider and told them about the
 duplicated packets I received from them, and they
 solved it by changing the defective interface.


Just curious: how would an interface do that?

Other than TCP retransmission timeouts being exceeded (which BTW would
resend one packet or another, not all of them), I cannot even imagine a
reason for that, other than a malicious attack (SYN flood), because under
normal circumstances the packet sender *will not* retry retransmission
endlessly but will mark the target unreachable.

 In fact squid was getting double answers, so it did
 not know what to do


I guess Squid would not get such packets at all; they should be discarded
by your router at layer 3, or by your OS at layer 4, where TCP flags and
sequence numbers are checked before anything goes up to the application
layer.
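If you want to watch for such duplicates yourself, something like

tcpdump -n -vv host the.suspect.host

(the hostname is a placeholder) prints the IP header details, so
duplicated packets show up quickly as repeated IP ids / sequence numbers.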


michel





Re: [squid-users] squid do the inverse of what it should do !!!!!!!!!!! help !!!!!!!!!

2007-08-26 Thread Michel Santos

pinky you said in the last message:

 but perhaps you check first who has access to your
 box and change lines
 like acl all 0.0.0.0 or so


 I checked that for sure , I mentioned that in my first
  email .


No, you didn't.

Certainly Squid does not download things by itself, right? So the traffic
comes from somewhere, and if somebody can use your Squid, then you are
allowing access to it.

Anyway, traffic is what you complain about, and traffic you can observe
easily (tcpdump) and find in seconds where it comes from; probably netstat
shows it already.

Is your Cisco config correct? Maybe you are also sending Squid's own
traffic back to Squid ...

michel




Re: [squid-users] squid do the inverse of what it should do !!!!!!!!!!! help !!!!!!!!!

2007-08-26 Thread Michel Santos

pinky you said in the last message:


 no you didn't

 well you can see I said that I put strict acl



'strict acl' says nothing to me, but 'acl all src ip_range' would ..


 anyway, traffic is what you complain about and
 traffic you can observe
 easy (tcpdump) and find in seconds where it comes
 I used tcpdump, but as you can see I have 15Mbps (1000
 live users), so that's not so easy.


:) Nice excuse for guessing ... so what does somebody with a gigabit link do?

tcpdump -n tcp dst port squid_tcp_port and not src net your_cli_ip_network

or something like it will give you a clear picture of what's going on.



michel





Re: [squid-users] squid do the inverse of what it should do !!!!!!!!!!! help !!!!!!!!!

2007-08-25 Thread Michel Santos

pinky you said in the last message:

 rpm that comes with the distro) in transparent mode;
 a cisco 2811 redirects the packets to squid via wccp2.

that is not transparent mode

 everything works great till that day when Squid
 inverse its purpose!!! ( its start to use far more
 bandwidth than my users do ) you can see the mrtg
 picturs below ( I put links for them).

inverse??? hmmm

 I tried everything . ( disabled the cache and make it

really?

 work as proxy only, used delay loop , change the
 distro and change the squid version and even changed
 the wccp options and version in the router and squid )

 but the problem remains .

 Please help me before losing my job !!! :(


Depends on how much they pay me; I don't care, and if it is enough I'll
even pay you a beer :)

But perhaps you first check who has access to your box, and change lines
like 'acl all 0.0.0.0' or so.




michel




Re: [squid-users] Opinions sought on best storage type for FreeBSD- suggestion for developers

2007-08-24 Thread Michel Santos

Adrian Chadd said in the last message:
 On Mon, Aug 20, 2007, Nicole wrote:

 [snip good points]

  It has been found that people are more likely to donate money for
 something
 specific than for a general cause.

 Maybe we're not doing it right; but people seem quite happy to suggest
 functionality but not be willing to donate to see it happen.


I agree much more with what Nicole just said.

This donation thing does not work.

First of all, everybody is scared, myself included, because you developers
are scary guys: you are good at what you do, which means expensive, and
most are kind of harsh with whoever does not know at least 100% of the
technical vocabulary - so in the end you all talk yourselves out of
getting anything other than honors, and often not even that :) but
criticism.

And then, when a fearless one like me comes and asks how much that would
cost, the answer is -z ...

Adrian, you have lots of ideas and told me more than twice that you have
no time (money) to do this and that. So, adding to Nicole's idea, I would
like you to put your ideas on a website, with a short description and an
idea of the project cost; then maybe you will more easily find a sponsor
or several co-sponsors. As in supermarkets, nobody would buy anything if
there were no prices on the cans.

Also, there would then be something everybody can compare against, and
offers for other pieces of work would, I guess, eventually or even surely
come in.

 I'd love to see that change!

sure, absolutely, all of us

michel





Re: [squid-users] Syslog configuration

2007-08-22 Thread Michel Santos

Henrik Nordstrom said in the last message:
 On Wed, 2007-08-22 at 01:37 -0300, Michel Santos wrote:

  access_log syslog:LOG_LOCAL4 squid
 

 hmm, isn't this how it should work?

 access_log syslog:local:4

 No, 2.6.STABLE14 and earlier 2.6 releases uses a bit twisted and
 undocumented syntax for specifying syslog facility and log level.

  syslog:LOG_FACILITY|LOG_LEVEL

 where LOG_FACILITY is LOG_ followed by the facility name in uppercase.
 And similar for LOG_LEVEL.. Borrowed from the C syntax when using the
 syslog(3) function.

 We have now changed this to use the more familiar syslog.conf syntax and
 documented it..


So then I have to take care when upgrading, because my 2.6.STABLE14 boxes
are still working well with 'access_log syslog:local:4'.

michel





Re: [squid-users] Syslog configuration

2007-08-22 Thread Michel Santos

Henrik Nordstrom said in the last message:
 On Wed, 2007-08-22 at 04:55 -0300, Michel Santos wrote:

 so then I have to take care when upgrading because my 2.6.S14 are still
 working well with 'access_log syslog:local:4'

 That syntax is not understood by any version and is silently ignored,
 resulting in the log being sent to daemon.info (same as
 LOG_DAEMON|LOG_INFO)

 This is true for 2.6.STABLE14 at least. Later versions may reject the
 invalid configuration as invalid.

 If you want the log sent to the local4 facility in 2.6.STABLE14 then
 specify syslog:LOG_LOCAL4 nothing else.


Well, I don't know about there, but here I am using it, and it is working
perfectly, as said, with

access_log syslog:local:4

but in your defense :) I must say that your version also works in exactly
the same way, configured as

access_log syslog:LOG_LOCAL4


Both log to the file defined in syslog.conf for the local4.* facility.
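For completeness, the receiving side is plain syslog.conf routing; a
sketch, with the log file path only an example (the pid file path is the
FreeBSD default):

# /etc/syslog.conf
local4.*        /var/log/squid-access.log

# create the file and make syslogd re-read its config
touch /var/log/squid-access.log
kill -HUP `cat /var/run/syslog.pid`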



michel




Re: [squid-users] Syslog configuration

2007-08-22 Thread Michel Santos

c0re dumped said in the last message:
 It just won't work !

 access_log /squid/var/logs/access.log squid
 access_log syslog:LOG_LOCAL4 squid
 (I need to log to both: access.log and syslog)



I am not sure you can log to both; I just tried here, and it does not log
with two access_log lines.

michel





Re: [squid-users] Syslog configuration

2007-08-21 Thread Michel Santos

Henrik Nordstrom said in the last message:
 On Fri, 2007-08-17 at 10:53 -0300, c0re dumped wrote:
 Hello guys,

 Hi would like to log to both: syslog on a remote machine AND
 /var/log/access.log.

 Is that possible ?

 In my squid squid.conf i seted it up:

   access_log /squid/var/logs/access.log squid
   access_log syslog squid

 Looks fine to me, but you probably need to specify the facility if you
 want to use local4, the default is daemon I think.

 access_log syslog:LOG_LOCAL4 squid


Hmm, isn't this how it should work?

access_log syslog:local:4

provided that the file referenced for local4 in syslog.conf exists and
syslogd is already aware of it?

michel






Re: [squid-users] endless growing swap.state after reboot

2007-08-15 Thread Michel Santos

Henrik Nordstrom said in the last message:
 On Tue, 2007-08-14 at 17:22 -0300, Michel Santos wrote:

 well, just got one, what now? Do you want the file?

 No, but I want you to hold on to it so you can test things without
 having to reboot a server and cross your fingers..

 now I did the same again but started squid with -F and all good

 so I guess we found where to look, something wrong while writing to
 swap.state when still rebuilding it

 Next test is to see if the problem is also seen without -F but with no
 traffic on the proxy.

No: with no traffic there is no problem. This statement is based on my
unsuccessful weekend tests and on 3 production servers yesterday; on one
of them I cut off the client side while testing without -F, and all was
good.



 Another test I'd like you to run is to try using aufs instead of diskd.
 And also the same for ufs.


Yes, same issue.

It seems that swap.state.new stays stuck at 72 bytes while swap.state
grows fast.

michel




Re: [squid-users] endless growing swap.state after reboot

2007-08-14 Thread Michel Santos

Henrik Nordstrom said in the last message:
 unfortunately I was sleeping and didn't back up the swap.state file, but
 I can do it again later if you need it.

 Please try. But as you indicate above it's possible the problem is not
 caused by the swap.state, but by concurrent traffic while the cache is
 being rebuilt in which case producing a test case is somewhat more
 complex..



If somebody would like to help catch this problem, here is a sh script
which backs up the swap.state files into /usr/local/squid/swap-bu before
starting Squid.

It should work for Squid on FreeBSD; otherwise, look into it before
running it. You should run it from your Squid startup script: put it on
the first line, without '' at the end of the line. If you do not have a
Squid start script, execute it before squid, or put it into
/usr/local/etc/rc.d with a 000. prefix.

http://suporte.lucenet.com.br/supfiles/swap.state.bu.sh.tar.gz
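In essence the script boils down to something like this - a sketch; the
cache_dir paths are examples and must match your squid.conf:

#!/bin/sh
# back up each cache_dir's swap.state before squid starts
BU=/usr/local/squid/swap-bu
mkdir -p $BU
for d in /c/c1 /c/c2; do
    [ -f $d/swap.state ] && cp -p $d/swap.state $BU/`basename $d`.swap.state
done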

Then, as Henrik said before: if Squid gets confused after startup, we need
the backed-up swap.state.

thank's
michel




Re: [squid-users] Opinions sought on best storage type for FreeBSD

2007-08-14 Thread Michel Santos

Mark Nottingham said in the last message:
 FreeBSD and aufs was discussed a while back, IIRC, and the upshot was
 that for FreeBSD 6, it's useful (threads on 4 is a no-no). The
 lingering doubt in my mind was this bug:
 http://www.freebsd.org/cgi/query-pr.cgi?pr=103127, which appears to have
 been patched in 6.1-RELEASE-p5.

 So, in a nutshell, can it be safely said that aufs is stable and
 reasonably performant on FreeBSD >= 6.2, as long as the described
 thread configuration is performed?


On 6.2 you do not need to do anything else than add aufs in configure:

--enable-storeio=diskd,ufs,aufs,null (or whatever options you like)

and it should work well. I had no problem at all with the aufs model
itself, besides queue-congestion alert msgs while the swap.state rebuild
was in progress, and sometimes under load. Whatever value I set with
--with-aufs-threads=N didn't help.

You should probably add to or create your /etc/libmap.conf as follows:

[/usr/local/squid/sbin/squid]
libpthread.so.2 libthr.so.2
libpthread.so   libthr.so
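With that in place you can verify which threading library the binary
actually picks up at runtime (the FreeBSD rtld applies libmap.conf, so ldd
should list libthr instead of libpthread):

ldd /usr/local/squid/sbin/squid | grep thr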


michel




Re: [squid-users] endless growing swap.state after reboot

2007-08-14 Thread Michel Santos

Henrik Nordstrom said in the last message:

 Please try. But as you indicate above it's possible the problem is not
 caused by the swap.state, but by concurrent traffic while the cache is
 being rebuilt in which case producing a test case is somewhat more
 complex..



Well, I just got one; what now? Do you want the file?

But this confirms what I argued this morning, look:

I copied swap.state and stopped Squid when I saw it growing.

I copied the backed-up swap.state file back in place, started Squid, and
it was growing again.

Now I did the same again but started Squid with -F, and all good.

So I guess we found where to look: something goes wrong while writing to
swap.state when it is still being rebuilt.


michel





Re: [squid-users] Opinions sought on best storage type for FreeBSD

2007-08-13 Thread Michel Santos

Tek Bahadur Limbu said in the last message:

 what size is your link?

 For each proxy, the link is burstable upto to 15 mbps. But they are
 grouped together in different groups. We have 6 groups. Each group has
 bandwidth ranging from 5 mbps to 20 mbps. However since our link comes via
 satellite, the proxies starts building a large number of mbufs especially
 when our uplink gets saturated. Since it's a satellite link, bandwidth is
 never enough no matter how big we are subscribing. We still have some time
 to go (maybe months, or years) before we get it from a fiber link.


 Sure this is not related to your crash and to your link either but
 somaxconn is the queue size of pending connections and not the number of
 connections and you are probably setting this far too high. somaxconn as
 1024 or max 2048 would be more reasonable and nmbcluster I would not set
 higher than 128 or 256k

 if you eat that up you have other troubles and increasing this values
 does
 not solve them I guess

 Well I am using nmbcluster = 256000 on some of my FreeBSD-6.2 machines
 because they don't support setting the nmbcluster to 0. Well let me try
 setting somaxconn to 2048.

I'd like to suggest again starting with a clean system, as said in a
former msg, observing, and then checking value by value instead of mixing
it all up at once.



 - From my observation in recent months, the mbufs value has not crossed
 120K. I will probably use 128K or 256K. I read an article regarding
 setting somaxconn=32768 to help stop SYN flooding.

 http://silverwraith.com/papers/freebsd-ddos.php


Who am I to understand miracles? Without saying anything else, I suggest
you compare what the man page tuning(7) says about somaxconn with what the
author claims it is, and figure out the other statements from there ...


 In your opinion, what's wrong with setting nmbcluster to 0 since, in
 this way, I never run out of mbufs?


Sorry if I gave the wrong impression that I want to lecture or something;
I am not saying it is wrong (how would I know?). I am only exchanging
ideas here, OK, and saying that I would do it differently and what my
opinion is.



michel




Re: [squid-users] Opinions sought on best storage type for FreeBSD

2007-08-13 Thread Michel Santos

Adrian Chadd said in the last message:

 well, that was my knowledge about chances, but here there are not so
 many options: either you are a hell of a forecaster, or you create an
 algorithm, kind of inverting the usage of the current or other cache
 policies, applying them before caching the objects instead of using
 them to control replacement and aging

 No, you run two seperate LRUs, glued to each other. One LRU is for new
 objects that are coming in, another LRU is for objects which have been
 accessed more than once.


Well, I didn't mean to eliminate the cache policies by using that instead;
I meant using them in a similar way for this purpose. Whatever, basically
we are saying the same thing, I guess, or meant it at least :)


 A few reasons:

 * I want to do P2P caching; who wants to pony up the money for open source
   P2P caching, and why haven't any of the universities done it yet?


There did exist some P2P cache projects and software, which died because
of trouble with the authors'/owners' rights over the cached content, which
could be interpreted as redistribution or something; it seems a Dutch
network had a good product.


 * bandwidth is still not free - if Squid can save you 30% of your HTTP
   traffic and your HTTP traffic is (say) 50% of 100mbit, thats 30% of
   30mbit, so 10mbit? That 10mbit might cost you $500 a month in America,


Absolutely, no need to convince me; I work with caching for exactly those
reasons. I brought it up because I believe I can understand why people are
not so into it anymore.


   in developing nations..

Tell me about it ... we pay US$700-900 for each 2Mbit/s ... and now you
know why we are poor: we get milked dry by everyone :)



 Would you like Squid to handle 100mbit+ of HTTP traffic on a desktop PC
 with a couple SATA disks? Would you like Squid to handle 500-800mbit of
 HTTP traffic on a ~$5k server with some SAS disks? This stuff is possible
 on today's hardware. We know how to do it; its just a question of
 writing the right software.


Yep, definitely people with great ideas are the owners of the future, and
it seems you will continue working on cache projects; I hope you make very
much money with all that, so you might have more *time* in the future :)


michel




Re: [squid-users] endless growing swap.state after reboot

2007-08-13 Thread Michel Santos

Henrik Nordstrom said in the last message:
 On Sun, 2007-08-12 at 12:49 -0300, Michel Santos wrote:

 that's from one cache dir, and 'took 5.8 seconds' seems to be really
 wrong; look at the time stamps:

 Time stamps during the rebuild process is not working well when you use
 -F. This because Squid is only rebuilding the cache index, and it's
 notion of time is a bit messed up.

 Things return to norma when the rebuild is finished.



Sooo, the first machine I rebooted without shutting down Squid did it
again: swap.state grows endlessly.

I rebooted two others, but with -F, and all was good.

So it seems that writing to swap.state while the cache is still rebuilding
is where the dog is buried.

Unfortunately I was sleeping and didn't back up the swap.state file, but I
can do it again later if you need it.

michel




[squid-users] acl bug or is it so?

2007-08-13 Thread Michel Santos

Please have a look:


acl all src 200.152.80.0/24

acl danger urlpath_regex -i blabla

http_access deny all danger
miss_access deny all danger

blocks and works, ok so far

##

acl all src 200.152.80.0/24
acl peer src 200.152.80.21

acl danger urlpath_regex -i blabla

http_access deny all danger
miss_access deny all danger

http_access deny peer danger
miss_access deny peer danger

This blocks for 'acl all' but _NOT_ for the peer IP, and also not if the
peer IP is accessing as a normal client with a browser and not as a peer.

Am I doing something wrong, or is it a bug?


same result here when using dstdomain or url_regex in place of urlpath_regex

michel




Re: [squid-users] endless growing swap.state after reboot

2007-08-12 Thread Michel Santos

Henrik Nordstrom said in the last message:
 On Fri, 2007-08-10 at 13:55 -0300, Michel Santos wrote:

 just to get it straight

 I start squid with this former swap.state but empty cache_dirs

 yes.

 Is it that exactly?

 yes.


 But before you do that we perhaps should do the same, but without
 erasing the cache directories.

 swap.state should shrink at this stage, eliminating its references when
 not finding the files, right?

 only if the rebuild is successful, in which case this test failed..



I am in the visiting-the-doctor-and-the-pain-is-gone stage ...

I still have not been able to get my test machine to damage the swap.state
file.

I am still loading the cache_dir, and so far I have 2 GB in there and the
rebuild takes only some seconds. No reset or kill did it; I tried every
couple of hours.

That brought me to check my startup scripts, which I haven't touched for a
long time, and I am not using the -F option. Since my production caches
are of considerable size, and the rebuild takes up to 2 minutes, with some
big caches needing 4-5 minutes, I am starting to think that the swap.state
mess has something to do with my not starting with the -F option.

What do you think? Is it possible that the problem hides here?

If I am not able to make it happen on my test machine by Monday morning, I
will sacrifice two production caches and restart one with -F and the other
without, under incoming request load. Then we'll see.

michel





Re: [squid-users] Opinions sought on best storage type for FreeBSD

2007-08-12 Thread Michel Santos

Tek Bahadur Limbu said in the last message:

 how much mem the server has installed?

 Most of them have 1 GB memory


Well, I believe that is really too low for such a busy machine; you should
think of 4-8 GB (or more?) for such a server.


 what is your kern.maxdsiz value?

 It's the default value of 512 MB. I guess I may have to increase it to say
 768 MB.

 I can put the following value in /boot/loader.conf:

 kern.maxdsiz=754974720

You can start there, but that is still too low; I set this to 4 or 6 GB,
but I have much more RAM than you in my servers.





 How much memory is squid using just before it crashes? Is it using swap?
 What does ipcs tell you then, or under load?

 Squid could be using somewhere between 500 to 700 MB of memory before it
 crashes.

What do you mean - 'could', nothing certain? What is your cache_mem setting?


 It was not using swap.

Sure it wasn't: if you have 1GB of RAM and there are 512MB left, then
Squid will crash as soon as the 512MB you allow are used up, so there is
no chance of getting to swap either.

Set your maxdsiz to 1 or 2 GB and watch the magic happen.
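On FreeBSD, maxdsiz is a boot-time tunable; a sketch, with the value only
an example:

# /boot/loader.conf
kern.maxdsiz=2147483648   # 2 GB data segment limit, takes effect after reboot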



 Currently, ipcs tells me:


No good; 'ipcs -a' at least.


 Most of them are Dell SC-420 machines:
 CPU 2.80GHz (2793.09-MHz K8-class CPU)
 Hyperthreading: 2 logical CPUs
 OS: FreeBSD-6.0-6.1 (amd64).


6.2 is way better, and RELENG_6 is really stable; you could upgrade, which
should be possible with no downtime besides one reboot.



  By the way, do you have some optimal settings which can be applied to
  diskd? Below are some values I use:
 
  options SHMSEG=128
  options SHMMNI=256
  options SHMMAX=50331648 # max shared memory segment size
 (bytes)
  options SHMALL=16384# max amount of shared memory (pages)
  options MSGMNB=16384# max # of bytes in a queue
  options MSGMNI=48   # number of message queue identifiers
  options MSGSEG=768  # number of message segments
  options MSGSSZ=64   # size of a message segment
  options MSGTQL=4096 # max messages in system
 
  Correct me where necessary.
 


 that does not say so much, better you send what comes from sysctl
 kern.ipc

 #sysctl kern.ipc


You see? Your kernel options are not exactly what you get at runtime, right ;)




 You mean set SHMMAXPGS using sysctl or compile it? Also what the best
 value for SHMMAXPGS?

Yes, sysctl; they are runtime tunables.

You must check with ipcs and set your system to what works well, without
using values that are too high.

Other values I saw are possibly not such good choices; e.g. somaxconn
seems way too high, and nmbclusters is 0?


Maybe you trust the FreeBSD auto-tuning: compile your kernel with
'maxusers 0', restart without the sysctl values but with maxdsiz at 1 GB
or so, and see what happens.




Michel





Re: [squid-users] endless growing swap.state after reboot

2007-08-12 Thread Michel Santos

Henrik Nordstrom said in the last message:
 On Sun, 2007-08-12 at 11:59 -0300, Michel Santos wrote:

 That brought me to check my startup scripts, which I haven't touched for
 a long time, and I am not using the -F option. Since my production
 caches are of considerable size, and the rebuild takes up to 2 minutes,
 with some big caches needing 4-5 minutes, I am starting to think that
 the swap.state mess has something to do with my not starting with the
 -F option.

 What do you think? Is it possible that the problem hides here?

 Quite possible. -F is not actively tested and does change the rebuild
 procedure a bit.


Couldn't wait; I just did it on a server now. I am pretty sure that
normally the mess would have begun, but with -F it built the swap.state
and started working normally, with no problem.

Aug 12 12:39:49 wco-mir squid[991]: Done reading /c/c2 swaplog (2659207
entries)
Aug 12 12:39:49 wco-mir squid[991]: Finished rebuilding storage from disk.
Aug 12 12:39:49 wco-mir squid[991]:   2289447 Entries scanned
Aug 12 12:39:49 wco-mir squid[991]: 0 Invalid entries.
Aug 12 12:39:49 wco-mir squid[991]: 0 With invalid flags.
Aug 12 12:39:49 wco-mir squid[991]:   2289447 Objects loaded.
Aug 12 12:39:49 wco-mir squid[991]: 0 Objects expired.
Aug 12 12:39:49 wco-mir squid[991]:362061 Objects cancelled.
Aug 12 12:39:49 wco-mir squid[991]: 0 Duplicate URLs purged.
Aug 12 12:39:49 wco-mir squid[991]: 0 Swapfile clashes avoided.
Aug 12 12:39:49 wco-mir squid[991]:   Took 5.8 seconds (396434.2
objects/sec).

That's from one cache dir, and 'took 5.8 seconds' seems to be really
wrong; look at the time stamps:

Aug 12 12:28:21 wco-mir squid[991]: Starting Squid Cache version
2.6.STABLE14-20070731 for amd64-unknown-freebsd6.2...


That's ten minutes; that probably has to do with the wrong percentage
calculation when things go wrong, too?



michel




Re: [squid-users] Opinions sought on best storage type for FreeBSD

2007-08-12 Thread Michel Santos

Tek Bahadur Limbu said in the last message:


 Ok let me upgrade my memory before setting it to 2 GB or more.
 I will set it to 768 MB for now since I have only 1 GB of memory at the
 moment.


I believe that with the stock maxdsiz your Squid process cannot use more
than the 512MB limit ... so I do not know where you get 600 from.

maxdsiz is not only RAM related; it defines the upper limit of memory a
process can use. So I believe your machine does not swap: even if there is
not enough RAM for the process (generally), there is enough to get to the
limit (maxdsiz), and that might be the reason your Squid process crashes
when it tries to use more than the 512MB limit.



 other values I saw are eventually not so good choices, as somaxconn
 seems
 way to high and nbmclusters are 0 ?

 Well I will reduce somaxconn to 8192. The reason why I set nbmclusters
 to 0 is because of satellite link delays and high number of tcp
 connections, I run out of mbufs. They easily reach between 64000 -
 128000 and sometimes even more. Every now and then, I would lose tcp
 connections due to the high number of mbufs in use. So I found this
 little hack which keeps the number of mbufs utilization at bay.


What size is your link?

Sure, this is not related to your crash, nor to your link either, but
somaxconn is the queue size for pending connections, not the number of
connections, and you are probably setting it far too high. somaxconn at
1024 or at most 2048 would be more reasonable, and nmbclusters I would not
set higher than 128k or 256k.

If you eat that up, you have other troubles, and increasing these values
does not solve them, I guess.
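For reference, how I would check and set these on FreeBSD (a sketch; the
numbers are only examples - somaxconn is a runtime sysctl, while
nmbclusters on 6.x is normally set at boot):

sysctl kern.ipc.somaxconn         # listen queue depth
sysctl kern.ipc.somaxconn=1024
sysctl kern.ipc.nmbclusters       # to change it, put e.g.
                                  # kern.ipc.nmbclusters=131072 in /boot/loader.conf
netstat -m                        # watch the actual mbuf/cluster usage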




michel




Re: [squid-users] Opinions sought on best storage type for FreeBSD

2007-08-11 Thread Michel Santos

Tek Bahadur Limbu said in the last message:

 diskd indeed seems to fail under load especially when approaching
 200/300 requests per second.

Are you sure these numbers are correct? Where do you get them from?


 It causes Squid to crash and restart automatically. Though, the side
 effects are not noticed to the causal user, it prevents the cache from
 stabilizing in the first place.



In the first place, diskd does not cause the automatic restart ;) it is
RunCache that does that, and I also do not believe that diskd causes Squid
to crash.


If the crash really happens, then there is something wrong on your machine.

If the problem is load, and your computer cannot handle the load, then it
first gets slow, or you run out of memory, and then Squid may crash; but
you had better look at what is really wrong there before blaming the fs
type you use.


 If I opt to use aufs, will the following compilations work?

 '--enable-async-io' '--with-pthreads'


--with-pthreads is not necessary.

But certainly this switch is kind of strange for FreeBSD, since you need
to remap the process threads to kernel threads in order to get it right
(faster); both thread implementations should then work well with kqueue,
which is also correctly detected by configure when available.


Michel




Re: [squid-users] Opinions sought on best storage type for FreeBSD

2007-08-11 Thread Michel Santos

Henrik Nordstrom said in the last message:
 On Sat, 2007-08-11 at 15:10 +0545, Tek Bahadur Limbu wrote:

 As far as I know and seen with my limited experience, diskd seems good
 for BSD boxes. But I guess I have to try other alternatives too.

 If I opt to use aufs, will the following compilations work?

 '--enable-async-io' '--with-pthreads'

 --enable-storeio=aufs

 pthreads is automatically enabled, so no need to specify that. Won't
 hurt if you do however.

 If you are on FreeBSD then remember to configure FreeBSD to use kernel
 threads for Squid or it won't work well. See another user response in
 this thread.


Hi,
not sure; both thread implementations work well, but kernel threads claim
to be faster. In order to notice it you need some real load on the
machine, and I am not sure there is any difference at all on a UP machine.

Michel




Re: [squid-users] Opinions sought on best storage type for FreeBSD

2007-08-11 Thread Michel Santos

Adrian Chadd said in the last message:
 On Sat, Aug 11, 2007, Tek Bahadur Limbu wrote:

 Or simply, what is the best compilation parameters to use on a
 Linux/Unix machine if I want to use aufs?


 coss at first seemed a good choice but it's long rebuilding process is
 not suitable for production use.

 We know how to fix COSS. Time (ie, funding) is the only issue here.
 We'd love to work with any groups who would be willing to help fund
 an effort to mature Squid's storage code (post Squid-3.0, which is
 almost ready from what I hear) into something 21st-century compliant.


Nice words, but IMO the fs choice is a kind of fine-tuning, because the
difference between the actual competitors, aufs and coss, is not sooo big.

But what this year and the next (and not this century) are about is SMP.
Soon, and I guess very soon, you might not be able to buy single cores
anymore. When I said a year ago that everybody would run quad-cores in
07/08 I got laughed at, but look at the market: that is what it is. So I
guess making Squid SMP friendly is way more important - but who knows,
maybe we will soon have globally unlimited bandwidth and won't need caches
anymore :) which will surely happen first if you keep thinking in
centuries in the computer business :)

Please don't hate me, nothing personal, OK? You just picked the wrong word
to compare with, and I couldn't hold it back :)


michel




Re: [squid-users] Opinions sought on best storage type for FreeBSD

2007-08-11 Thread Michel Santos

Adrian Chadd said in the last message:
 On Sat, Aug 11, 2007, Michel Santos wrote:

 nice words but IMO the fs usage is kind of fine-tuning because the
 difference between the actual competitors aufs and coss is not sooo big

 Yeah, but the difference between AUFS/COSS and whats actually possible and
 done in the commercial world - and documented in plenty of thesis papers
 out there - is a -lot-. I'm talking double, triple the small object
 (<256k) size.


I must admit I can't speak to that, because I never could really test it;
but I am not easily convinced just by reading papers.




 And I'd love to continue work on the test SMP proxy code I've been working
 on
 on the side here, but to continue that I need money. Its easy to code this
 stuff when you're working for someone who is happy to pay you to do open
 source
 stuff that benefits them, but I'm doing this for fun. Maybe I shouldn't,
 I ain't getting paid (much.) There's only so many 45 minute bus trips
 to/from university a day atm.

 There's plenty of examples of multi-threaded network servers out there.
 Whats stopping Squid from taking advantage of that is about 6 months of
 concentrated work from some people who have the clue and time. None of
 us with the clue have any time, and noone else has stepped up to the
 plate and offered assistance (time, money, etc.) We'd love to work on it
 but the question is so how do we eat.


I agree, completely understandable.

But look: 'easy to code' and '6 months of concentrated work' are not
really the same thing ... ;)




Michel




Re: [squid-users] Opinions sought on best storage type for FreeBSD

2007-08-11 Thread Michel Santos

Adrian Chadd said in the last message:
 On Sat, Aug 11, 2007, Michel Santos wrote:

 I must admit I can't talk in there because I never could test it really
 but I do not convinve myself easy by reading papers.

 Good! Thats why you take the papers and try to duplicate/build from them
 to convince yourself. Google for UCFS web cache, should bring out one
 of the papers in question. 2 to 3 times the small object performance is
 what people are seeing in COSS under certain circumstances as it
 eliminates
 the multiple seeks required in the worst cases for normal UNIX
 filesystems.
 It also reduces write overhead and fragmentation issues by writing in
 larger chunks. Issuing a 512 byte write vs a 16k write to the same sector
 of disk is pretty much an equivalent operation in terms of time taken.

 The stuff to do, basically, involves:

 * planning out better object memory cache management;
 * sorting out a smarter method of writing stuff to disk - ie, exploit
   locality;

 * don't write everything cachable to disk! only write stuff that has
   a good chance of being read again;

There is a good chance of being hit by a car when sleeping in the middle
of a highway, and there is also a chance of not being hit at all :)

Well, that was my knowledge about chances, but here there are not so many
options: either you are a hell of a forecaster, or you create an
algorithm, kind of inverting the usage of the current or other cache
policies, applying them before caching the objects instead of using them
to control replacement and aging.



 * do your IO ops in larger chunks than 512 bytes - I think the sweet
   spot from my own semi-scientific tests is ~64k but what I needed to
   do is try to detect the physical geometry of the disk and make sure
   my write sizes match physical sector sizes (ie, so my X kbyte writes
   aren't kicking off a seek to an adjacent sector, and another rotation
   to reposition the head where it needs to be.)
 * handle larger objects / partial object replies better


Well, the theory behind COSS is quite clear.


 I think I've said most/all of that before. We've identified what needs
 doing - what we lack is people to do it and/or to fund it. In fact,
 I'd be happy to do all the work as long as I had money available once
 it was done (so I'm not paid for the hope that the work is done.)
 Trouble is, we're coders, not sales/marketing people, and sometimes
 I think thats sorely what the Squid project needs to get itself back
 into the full swing of things.


Not sure. Squid has been on top for a long time now, and probably there is
no other interesting project because caching is not so hot anymore;
bandwidth is cheap in comparison to 10 years ago, and the big thing today
is P2P. So, I mean, it is probably hard to find a sponsor with good money.
The most wanted features are proxying and ACLs, not caching, so I guess
even if there will always be geeks like us who simply like the challenge
of getting a bit more out of it, most people do not know what this is
about and do not feel or see the difference between ufs and coss or
whatever. To be realistic, I understand that nobody cares about diskd,
just as nobody really cares about coss, because it would only be for you,
or for me, and a few more; and so Henrik works on aufs because he likes
it, but in the end it is also only for him and some others. And this sum
of 'some' does not have the money to put into coss/aufs/diskd. And
probably it is not worth it: when the principal users have 8Mb/s ADSL for
40 bucks, why should they spend money on Squid's fs development?



Michel




Re: [squid-users] Opinions sought on best storage type for FreeBSD

2007-08-11 Thread Michel Santos

Tek Bahadur Limbu said in the last message:
 Michel Santos wrote:
 Tek Bahadur Limbu said in the last message:
 diskd indeed seems to fail under load especially when approaching
 200/300 requests per second.

 are you sure this numbers are correct? where do you get them from?

 Hi Michel,

 I am getting these numbers from one of my busy proxy server. At peak
 times, I get anywhere from 150-200 requests per second. However to cross
 the 300 mark, it only happens when 1 or 2 of my other proxy servers go
 down and then our load balancer redirects web requests to whichever
 proxy server is up and functioning.


So you get 12000/min, right? But when I asked where you get them from, I
wanted to know how you count them: SNMP? cachemgr?

How much memory does the server have installed?



 I guess that I may have to really commit my time and resources to find
 out if other factors could be causing this to happen.

 Haven't you faced any automatic restart of your Squid process. Does that
 mean that your Squid process uptime is months?


It never dies on its own; my problems are power problems and loose human
endpoints (fingers) :)

What is your kern.maxdsiz value?

How much memory is Squid using just before it crashes? Is it using swap?
What does ipcs tell you then, or under load?


 They have been in production for years and each of their average uptime
 is about 120 days. As far as the load is concerned, my CPU usage never
 goes above 30-40% but sometimes my memory usage crosses 80% of it's
 capacity though.


What hardware is it? Which FreeBSD version do you run? And what is your
layout: standalone proxy server, gateways, or a cache hierarchy?


 By the way, do you have some optimal settings which can be applied to
 diskd? Below are some values I use:

 options SHMSEG=128
 options SHMMNI=256
 options SHMMAX=50331648 # max shared memory segment size (bytes)
 options SHMALL=16384    # max amount of shared memory (pages)
 options MSGMNB=16384    # max # of bytes in a queue
 options MSGMNI=48       # number of message queue identifiers
 options MSGSEG=768      # number of message segments
 options MSGSSZ=64       # size of a message segment
 options MSGTQL=4096     # max messages in system

 Correct me where necessary.



That does not say much; better send what comes out of sysctl kern.ipc.

Anyway, you probably should not limit SHMMAX but set SHMMAXPGS, so that
SHMMAX is correctly calculated. And there is no need to compile these in;
they are sysctl tunables.
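For illustration only (values copied from the kernel options above, not a
recommendation; whether each one is settable at runtime or only at boot
depends on the FreeBSD version), the same limits can be set without
recompiling, e.g. in /boot/loader.conf or /etc/sysctl.conf:

kern.ipc.shmmni=256
kern.ipc.shmseg=128
kern.ipc.shmall=16384
kern.ipc.msgmnb=16384
kern.ipc.msgmni=48
kern.ipc.msgseg=768
kern.ipc.msgssz=64
kern.ipc.msgtql=4096

and then checked afterwards with: sysctl kern.ipc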

I believe a wrong value would not make your server crash. Worst case, your
message queues get stuck, which would put squid's disk r/w access on hold,
but not crash it. Well, let's say I have never seen a server crash from IPC
congestion; the client simply stops communicating.
Michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] Opinions sought on best storage type for FreeBSD

2007-08-10 Thread Michel Santos

Henrik Nordstrom disse na ultima mensagem:
 On tor, 2007-08-09 at 10:18 -0700, Nicole wrote:
 As some have pointed out, it's a shame diskd is horked, since it seemed
 to be nice and fast.

 Well, it's been broken for several years now, and no one has been willing
 to commit any resources to get it fixed.


Please be a little bit more specific about committing resources; what
exactly do you mean?


What is it that you agree is broken, beyond the shutdown issue?


 However, since I have not heard of any progress on fixing
 the bug, I am curious what others have been using or prefer as their
 alternative to diskd and why?

 aufs is seen as the best alternative currently, with FreeBSD also
 supporting kernel threads.

 Note: running aufs without kernel threads is a dead end and won't
 perform well, you might just as well run with the ufs cache_dir type
 then.


OK, you mean thr (kernel threads) instead of pthreads, right?


Michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] endless growing swap.state after reboot

2007-08-10 Thread Michel Santos

Henrik Nordstrom disse na ultima mensagem:

 then I start squid with one of the above versions and squid starts
 rebuilding swap.state

 when it starts failing we get what you want?

 That you try the same again, by shutting down Squid, then clear the
 cache and restore the backed up swap.state files and start Squid again.
 Hopefully the problem will manifest itself again; if so, then there is a
 frozen state which produces the problem, and which can be debugged
 further to isolate what goes wrong.


Just to get it straight:

when it fails, I shut squid down again;

I wipe out the cache_dirs and recreate them?

I copy the former original (first) backup swap.state back into place;

I start squid with this former swap.state but empty cache_dirs.

Is that exactly it?

swap.state should shrink at this stage, eliminating each reference when the
file is not found, right?



Michel


...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] endless growing swap.state after reboot

2007-08-10 Thread Michel Santos

Henrik Nordstrom disse na ultima mensagem:
 On tor, 2007-08-09 at 14:25 -0300, Michel Santos wrote:

 OK, the first one is easy. By the latter you mean what, that you want the file?

 Unfortunately the file is a bit platform dependent, but I want you to
 hold on to the file and check if the problem can be reproduced by simply
 placing it back in the cache dir.


So let's set up the scenario:

I shut squid down, letting rc.shutdown kill the squid process before it has
time to close the cache_dirs correctly;

then I back up swap.state

or do I back up before shutting down?

then I start squid with one of the above versions, and squid starts
rebuilding swap.state;

when it starts failing, we get what you want?


Michel

...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] Opinions sought on best storage type for FreeBSD

2007-08-10 Thread Michel Santos

Alexandre Correa disse na ultima mensagem:
 after reading this email, i switched from aufs to diskd to see their
 performance under high load ..

 with aufs, squid never used more than 10% of cpu and response time is
 very low (5ms to 150ms).. with diskd cpu usage goes to 50% +- and
 median response time goes up to 900ms !!

 I'm running CentOS 5.0 with kernel 2.6.22, quad opteron 64 bits with
 4gb ram, and the disks are SAS 15,000 rpm



I don't know anything about CentOS, but when a quad Opteron does not handle
the load, you obviously have something wrong in your configuration, either
squid or OS settings.


Michel



...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] Opinions sought on best storage type for FreeBSD

2007-08-10 Thread Michel Santos

Henrik Nordstrom disse na ultima mensagem:
 On fre, 2007-08-10 at 06:50 -0300, Michel Santos wrote:

 what is what you agree to be broken beyond the shutdown issue?

 Bug #761 unstable under high load when using diskd cache_dir

 diskd falls over under load due to internal design problems in how it
 maintains callback queues. Duane fixed most of it quite recently so it's
 no longer near as bad as it has been, but there is still stuff to do.
 The problem was first reported 5 years ago.


Indeed, the CPU load went down dramatically after these changes; on many
machines I gained more than 30-40%, or better said, 70-80% CPU load fell to
30-40% overall. That was very good.

But I could get around it before, and still do, by using at least 2, or
better 4 or more, diskd processes.

 OK, you mean thr (kernel threads) instead of pthreads, right?

 I don't know the FreeBSD thread packages very well to call them by name.
 I only know there are two POSIX threads implementations. One userspace
 which is what has been around for a long time and can not support aufs
 with any reasonable performance, and a new one in more current releases
 using kernel threads which is quite capable of supporting aufs.

It is pthread versus thr (kernel threads), and for whoever is interested,
it's easy to do on 6.2 by creating /etc/libmap.conf (or adding to it if it
exists); no recompilation is necessary:

[/usr/local/squid/sbin/squid]
libpthread.so.2 libthr.so.2
libpthread.so   libthr.so
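(If in doubt whether the mapping took effect, something like
"ldd /usr/local/squid/sbin/squid" should list libthr instead of libpthread,
since ldd resolves through the runtime linker, which honors libmap.conf.)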



Michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] Opinions sought on best storage type for FreeBSD

2007-08-10 Thread Michel Santos
Adrian Chadd disse na ultima mensagem:
 On Thu, Aug 09, 2007, Michel Santos wrote:

  the bug, I am curious what others have been using or prefer as their
  alternative to diskd and why?

 diskd for sure is the fastest, especially on SMP machines, but there are
 not so many people sharing my opinion ...

 Just supply real-world numbers showing which is faster.


Oook, let's agree first on what fast means here, since fast can be relative
depending on who senses the speed and what he is used to, right ...

When I say speed, I mean especially response time, which often depends on
local network and WAN connection latency and server quality (hardware), so
it's kind of hard to measure all of that together. As you know well, squid
is often blamed for performance problems when in the end it was something
else.

But then perhaps a req/hit relationship satisfies your curiosity? Have a
look at the attached image, which shows an average server of mine.

 Remember - the overlap between the people doing the development and the
 people saving/making money using Squid is almost 0..


Hum, maybe, maybe not. The problem here is that most people have one or two
servers (if that) and eventually do not have enough real-life data to
reflect the hundreds of different situations we find in the wild. Also, a
corporate or home frontend proxy running NAT and controlling internet
access is probably not a performance-relevant comparison, since such a
machine never comes to its limits, nor has much to do in terms of cache
functions.

"People saving/making money" I guess are, for you, those who sell their
consulting services; for me they would be those who use squid to spend less
on, or get more out of, their internet connection - or, shorter, those
interested in its cache functionality only.

So you see a bunch of different purposes and baselines which are not easy
to compare in the kind of general statements you are used to.

Technically speaking, we have 4 fs types to choose from, and not to forget,
this thread is dedicated to FreeBSD; I have no idea about Linux and less
about Windows.

So first we set ufs aside as good, stable and standard, and we discard coss
because of its kind of excessive startup time of 1-3 hours ... ;)

That leaves aufs and diskd for the performance geeks.

aufs is good, but not good enough: it starts choking the same way as ufs
under load, and this happens on the exact same hardware as the diskd setup
I describe next. IMO this is happening because of missing real SMP support;
maybe this is wrong and other things are making the difference here, but
don't forget that on FreeBSD our choice is UFS2, and eventually this does
not behave exactly like extN on Linux.

diskd is probably not used very much since it needs SHM/IPC tuning, and
that is not as easy as it seems, so my guess is most people do not even try
it (no offense). diskd by itself runs several processes, one per cache_dir,
which makes it naturally more SMP friendly than any other fs squid offers.
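A hedged sketch of what that looks like in squid.conf (paths, sizes and
directory counts are made up for the example; Q1/Q2 are the diskd queue
tunables, shown at their usual defaults). Each diskd cache_dir line starts
its own diskd helper process, so two lines give two helpers:

cache_dir diskd /cache1 15000 64 64 Q1=64 Q2=72
cache_dir diskd /cache2 15000 64 64 Q1=64 Q2=72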

diskd is also lightning fast when configured well, especially under load,
and I like to remember that terabytes of databases have used the same
technology with success for years, so it can not be that bad ...

So, resuming: for me, diskd is the choice on loaded servers and choked
links, because it is faster for my application as a transparent frontend
cache on the only network router in an ISP environment. I have been using
diskd since it came out, and sure, I have tried the other options, but none
came close.




Michel

...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.
attachment: squid0-hit-week.png

Re: [squid-users] endless growing swap.state after reboot

2007-08-09 Thread Michel Santos

Henrik Nordstrom disse na ultima mensagem:
 On ons, 2007-08-08 at 07:12 -0300, Michel Santos wrote:
 I am coming back with this issue again since it is still persistent

 This problem is real and easy to repeat and destroys the complete
 cache_dir content. The squid vesion is 2.6-Stable14 and certainly it is
 with all 2.6 versions I tested so far. This problem is not as easy to
 launch with 2.5 where it happens in a different way after an unclean
 shutdown.

 And my problem is that I have not been able to reproduce the problem,
 and nothing apparent sticks out when reading the source.


Hmm, what can I say other than ask you for suggestions. Just thinking: you
say you do not have it on your Linux box, but I and others are having it on
FreeBSD, so where do we go hunting for it?

In your other reply you say it is unlikely to be a fs problem. What else
can it be?


Michel

...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] endless growing swap.state after reboot

2007-08-09 Thread Michel Santos

Henrik Nordstrom disse na ultima mensagem:
 On tor, 2007-08-09 at 10:39 -0300, Michel Santos wrote:

 hmm, what can I say else then asking you for suggestions. Just thinking,
 you say you d not have it on your linux box but me and others are having
 it on freebsd so where we go hounting it?

 Start by trying to find as simple a test case as possible, not requiring
 a live populated cache..

 Quite likely the swap.state from an unclean shutdown triggering the
 problem is sufficient.

OK, the first one is easy. By the latter you mean what, that you want the file?



 May also be dependent on the number of cache_dir you have, or other
 configuration details (esp cache_swap_state directive), but not sure.


Good. Normally I use 64 64 (up to 15G); if the cache_dirs are bigger I use
64 128 (up to 40G), or even 128 128 for larger ones.
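(Assuming those pairs are the L1/L2 directory-count arguments of cache_dir,
that corresponds to lines like:

cache_dir diskd /c1 15000 64 64
cache_dir diskd /c2 40000 64 128

with the paths made up for the example.)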


 In your other reply you say unlikely a fs problem. What else can it be?

 It does smell like there may be a Squid bug lurking here. But without
 being able to reproduce it or it sticking out when reading the source
 hunting it down is a bit problematic..



OK, whatever you need, I will try to help.


thanks,
Michel

...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] Opinions sought on best storage type for FreeBSD

2007-08-09 Thread Michel Santos

Nicole disse na ultima mensagem:

  Hello
  I run a large number of FreeBSD based servers as cache accelerators for
 large
 scale image serving. (amd64 and most with dual core)

  Each server has (3) 147G disks and a 36G boot disk.
  Although I have some older servers that have a 36G boot disk and (3) 72G disks.

  The older (smaller) servers seem mostly fine with FreeBSD 6.1-STABLE
 using diskD on Version 2.6.STABLE12. However, the larger servers on
 6.2-STABLE (Version 2.6.STABLE12 and up) seem to be falling over
 themselves every so often.


Hi
could you explain better what happens?


 I assume due to the diskD bug with FreeBSD.

which bug is it that you found?


 (enormous
 disk usage and swapfiles as compared to AUFS for instance)


if your server uses swap, you are short on memory (RAM)


  I have been testing both AUFS and COSS as an alternative and both with
 mixed

AFAIK aufs works well, but it is slower in comparison to diskd

 the bug, I am curious what others have been using or prefer as their
 alternative to diskd and why?

diskd for sure is the fastest, especially on SMP machines, but there are
not so many people sharing my opinion ...

Michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




[squid-users] endless growing swap.state after reboot

2007-08-08 Thread Michel Santos

I am coming back with this issue again since it is still persistent.

This problem is real and easy to repeat, and it destroys the complete
cache_dir content. The squid version is 2.6-STABLE14, and it is certainly
present in all 2.6 versions I have tested so far. The problem is not as
easy to trigger with 2.5, where it happens in a different way after an
unclean shutdown.

How to repeat it is easy: on any 2.6 version, shut down the machine with an
rc.shutdown time shorter than squid needs to close the cache_dirs, which
then kills the still-open squid process[es] - no hard reset or power
failure is necessary.
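For reference, a sketch of the knob involved (FreeBSD; the value is
illustrative): rc.shutdown kills whatever is still running after
rcshutdown_timeout seconds, so any value shorter than the time squid needs
to close its cache_dirs (squid's own grace period is shutdown_lifetime in
squid.conf) reproduces this:

# /etc/rc.conf
rcshutdown_timeout="30"  # seconds rc.shutdown waits before killing leftover processes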

After reboot, squid goes crazy with swap.state on the affected cache_dirs,
as you can see in the messages and cache_dir graphs I put together from two
different machines in the following file.

Important here: the partitions ARE clean from the OS's view, fsck is not
being invoked, and running fsck manually before mounting them does NOT
change anything.

You can also see, on the machine with 4 cache_dirs, that only two dirs are
being destroyed, probably because of their size, which took longer to
close.

http://suporte.lucenet.com.br/supfiles/cache-prob.tar.gz

This happens as a 100% sure hit with AUFS and DISKD, while UFS still does
what squid-2.5 did:


- squid-2.6 creates a never-ending, growing swap.state until the disk is
full and the squid process dies because of the full disk

- squid-2.5 leaves swap.state as is and empties the cache_dirs partially
or completely


Even though I can see that this can be understood as an unclean shutdown, I
must insist that the growing swap.state, the negative cache_dir Store
rebuild values, and the 2000%-and-whatever values in messages are kind of
strange and probably wrong.

What I do not understand here is the following.

So far I have always been told that the problem is a corrupted swap.state
file.

But to my understanding, a cached file is referenced in swap.state as soon
as it is cached.

This obviously should have happened BEFORE squid shuts down or dies, so why
does squid still need to write to swap.state at this stage?

And if for any reason it did not happen, then the swap.state rebuild
process detects and destroys the invalid objects in each cache_dir on
startup.

If squid only needs to read swap.state in order to close the cache_dirs,
then it would be enough to have swap.state open for reading? Then it
certainly does not get corrupted, or does it?


Since you tell me that *nobody* has this problem, which I certainly can not
believe ;), and it seems you guys are using Linux or Windows: might this be
related to FreeBSD's softupdates on the file system, which squid can not
handle? Should I disable it and check?


michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] endless growing swap.state after reboot

2007-08-08 Thread Michel Santos

Michel Santos disse na ultima mensagem:


 Since you tell me that *nobody* has this problem what I certainly can not
 believe ;) but seems you guys are using linux or windows then might this
 be related to freebsd's softupdate on the file system and squid can not
 handle this? Should I disable it and check it out?



I'd better add this here from the man page, which made me think so, because
I do not know how Linux or Windows handle this:

Softupdates drastically improves meta-data performance, mainly file
creation and deletion.

First, softupdates guarantees file system consistency in the case of a
crash but could very easily be several seconds (even a minute!) behind on
pending write to the physical disk.  If you crash you may lose more work
than otherwise.  




Michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] endless growing swap.state after reboot

2007-08-08 Thread Michel Santos

m0f0x disse na ultima mensagem:
 I think you should set up -CURRENT FreeBSD boxes to test gjournal[1].
 Maybe gjournal can help you out, but you'll only know if you test it on
 your own.

 gjournal will probably be in the next FreeBSD engineering release,
 7.0-RELEASE[2].


Yep, I know. ZFS could eventually be interesting too, but that does not
change the problem.

Certainly, using -CURRENT on production servers is not very wise at all.

Still, I do not know whether the fs is the problem or not, because since
squid runs and compiles well on FreeBSD, it should handle the default UFS2
without any problem.

Michel



 Cheers,
 m0f0x

 [1] http://wiki.freebsd.org//gjournal (historic)
 ... http://docs.freebsd.org/cgi/mid.cgi?20060619131101.GD1130
 [2] http://www.freebsd.org/releases/7.0R/schedule.html

 On Wed, 8 Aug 2007 07:12:37 -0300 (BRT)
 Michel Santos [EMAIL PROTECTED] wrote:


 I am coming back with this issue again since it is still persistent.

 This problem is real and easy to repeat, and it destroys the complete
 cache_dir content. The squid version is 2.6-STABLE14, and it is
 certainly present in all 2.6 versions I have tested so far. The problem
 is not as easy to trigger with 2.5, where it happens in a different way
 after an unclean shutdown.

 How to repeat it is easy: on any 2.6 version, shut down the machine
 with an rc.shutdown time shorter than squid needs to close the
 cache_dirs, which then kills the still-open squid process[es] - no hard
 reset or power failure is necessary.

 After reboot, squid goes crazy with swap.state on the affected
 cache_dirs, as you can see in the messages and cache_dir graphs I put
 together from two different machines in the following file.

 Important here: the partitions ARE clean from the OS's view, fsck is
 not being invoked, and running fsck manually before mounting them
 does NOT change anything.

 You can also see, on the machine with 4 cache_dirs, that only two dirs
 are being destroyed, probably because of their size, which took
 longer to close.

 http://suporte.lucenet.com.br/supfiles/cache-prob.tar.gz

 This happens as a 100% sure hit with AUFS and DISKD, while UFS still
 does what squid-2.5 did:


 - squid-2.6 creates a never-ending, growing swap.state until the disk
 is full and the squid process dies because of the full disk

 - squid-2.5 leaves swap.state as is and empties the cache_dirs
 partially or completely


 Even though I can see that this can be understood as an unclean
 shutdown, I must insist that the growing swap.state, the negative
 cache_dir Store rebuild values, and the 2000%-and-whatever values in
 messages are kind of strange and probably wrong.

 What I do not understand here is the following.

 So far I have always been told that the problem is a corrupted
 swap.state file.

 But to my understanding, a cached file is referenced in
 swap.state as soon as it is cached.

 This obviously should have happened BEFORE squid shuts
 down or dies, so why does squid still need to write to swap.state at
 this stage?

 And if for any reason it did not happen, then the swap.state rebuild
 process detects and destroys the invalid objects in each cache_dir on
 startup.

 If squid only needs to read swap.state in order to close the
 cache_dirs, then it would be enough to have swap.state open for
 reading? Then it certainly does not get corrupted, or does it?


 Since you tell me that *nobody* has this problem, which I certainly
 can not believe ;), and it seems you guys are using Linux or Windows:
 might this be related to FreeBSD's softupdates on the file system,
 which squid can not handle? Should I disable it and check?


 michel
 ...












...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] What is the most data anyone has cached with squid?

2007-08-03 Thread Michel Santos

Mark Vickers disse na ultima mensagem:
 I was thinking of building several boxes with between 10TB and 20TB of
 SATA drives, for some squid caches.

 Has anyone used squid to cache that much data?

 Any idea what the upper limit is?  The practical limit?


Hi,
not sure you will get something reasonable out of SATA drives ...

My experience is that a small ISP/POP *might* work reasonably with SATA,
but more than ~100GB of cache_dir size seems to be the max. As soon as
squid passes 2000-2500 req/min or so, a SATA drive seems to reach its limit
and the hit rate drops. Whatever the configuration, with SATA under load
you can often feel the queue when a page does not load as usual.

With SCSI I have used 3x300GB disks for caches, but that seems too big; a
reasonable size appears to be up to 1TB, though it obviously depends on
object size and the number of connected clients. The limit seems to be set
by squid, not by the hardware.

Michel







Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] 4 squid with multiple cache_dir and cache_peer

2007-08-03 Thread Michel Santos

Shekhar Gupta disse na ultima mensagem:
 All,

 I have the following scenario in my office, where I have decided to
 install 4 squid servers with the following hardware config:

 Model HP DL380 G5, with 2 GB RAM, a 256 RAID card and 300 GB with RAID 0.

 Now I have 4 servers of similar config, and I want to have all the
 servers query each other for cache before they go out and bring the
 content. All of the 4 servers are in the same DMZ segment. Also, if you
 look at the partitioning, I have made a partition /squid of 110 GB and
 would like to host multiple cache_dirs so as to utilize the space of the
 server. So if anyone of you can show me some config example to resolve
 these 2 issues, it will be of great help.


You probably don't want to make multiple cache_dirs out of 100 gigs.

Then do this on the first:

cache_peer srv2 sibling tcp_port icp_port proxy-only
cache_peer srv3 sibling tcp_port icp_port proxy-only
cache_peer srv4 sibling tcp_port icp_port proxy-only

and this on the second
cache_peer srv1 sibling tcp_port icp_port proxy-only
cache_peer srv3 sibling tcp_port icp_port proxy-only
cache_peer srv4 sibling tcp_port icp_port proxy-only

and you can guess what goes on the others.

You can try and test adding round-robin at the end of each line.
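For example (ports illustrative: 3128 and 3130 are squid's usual defaults
for HTTP and ICP), srv1's lines filled in would look like:

cache_peer srv2 sibling 3128 3130 proxy-only round-robin
cache_peer srv3 sibling 3128 3130 proxy-only round-robin
cache_peer srv4 sibling 3128 3130 proxy-only round-robin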

But why do you need so many servers? Are there zillions of users?

Michel

...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] Transparent proxy ACCESS DENIED

2007-07-23 Thread Michel Santos

Funieru Bogdan disse na ultima mensagem:
 fixed it ... i think there is a bug in version 2.5
 or something ... replaced it with 2.6 and it works just
 fine, same config, nothing changed
 on my previous tests i even put

 acl all src 0.0.0.0/0
 http_access allow all


I had a similar problem between squid 2.5 and 2.6, and what I did to solve
it was to use

0.0.0.0/0.0.0.0

or an exact mask identifier such as

127.0.0.1/32
200.153.80.0/20

squid 2.6 seems to have problems without a mask, or with /0.
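A sketch of what that looks like in squid.conf (the ACL names are made up;
the addresses are the ones above):

acl all src 0.0.0.0/0.0.0.0
acl mynet src 200.153.80.0/20
http_access allow mynet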


Michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] Transparent proxy ACCESS DENIED

2007-07-22 Thread Michel Santos

Funieru Bogdan disse na ultima mensagem:

 Now I'm no genius, but ... here is the problem that I'm
 confronted with:
 When I try to access the internet via the secondary
 server everything is ok, everything works, BUT if I
 try to go through the 3rd server I get an access
 denied. Now the nice part is that it is not the 3rd server
 that denies the access, but the primary server.

 Any clues ??? i use
 SQUID 2.6 STABLE9-20070208 on primary server
 SQUID 2.5 STABLE9-20050409 on third server
 SQUID 2.6 STABLE12-20070404 on second server



would you mind posting your

acl all ...
acl peers ...

settings from each server?

Michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] Squid and level 4 switch

2007-07-20 Thread Michel Santos

Ming-Ching Tiew disse na ultima mensagem:

 From: Michel Santos [EMAIL PROTECTED]

 aren't you mixing things here? *layer* 4 and *level* 4 are different
 things and policy routing eventually is still another


 I know you are the expert but your answers are not helping at all.

 I don't need to be told that you are the expert, but I will be glad
 to be told how they are different and in what way.

Thanks for the glory, but it is not an expert qualification; it is
necessary basic knowledge for anyone who works with TCP/IP and routing in
order to understand what he is doing ...

Anyway, a level 3 switch/bridge understands up to OSI layer 4, and a level
4 switch/bridge understands up to OSI layer 7, as I already said before.

So you can google for the OSI layer definitions and see what they are;
those are the different network layers, from the hardware layer up to the
application layer.




 for policy routing you do not need a level 4 bridge neither a level 4
 switch because any OS with any kind of forwarding capable firewall
 package
 can do that and in order to do routing (any) you do not need a bridge
 setup at all


 I was just trying to slip in a box which does things transparently.
 I intend to get a little further than this; I want to even add gre to it
 so that it will hopefully behave like a Cisco doing WCCP2 with an
 external squid box with wccp2 configured.

 Purpose is modest: use it to check if the squid is set up correctly
 without disturbing the existing network.

 Maybe you could be a little more specific about how you would go about
 doing it, if you were to do it. More specifically, when the
 squid is 'tproxy transparent', i.e. when the forward path is spoofed,
 how do you handle the routing of the return path?


Oook, but so far you did not tell us what you wanted to do; you asked about
level and layer things ...


I believe you do not need WCCP2 if you do not use a Cisco router, and I
myself am not sure whether this is a solution or just a kind of workaround,
but that is only my opinion.

In order to get to a remote cache, you need to configure only and solely
packet forwarding. This is supposed to happen on your gateway, where you
intercept tcp:80 traffic destined for the external world and forward it to
the TCP port where your squid is listening on the remote server.

That is all you need; the requirements are that your Linux gateway runs as
a gateway and has any kind of firewall package which can do the forward.

You can find zillions of examples for any kind of firewall on the net or
in the man page of your firewall package; it is easier than you might
think.
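A minimal sketch (interface, addresses and ports made up for the example)
of the kind of forwarding rule meant here, using iptables on a Linux
gateway:

# enable packet forwarding
sysctl -w net.ipv4.ip_forward=1
# redirect outbound web traffic from the LAN to the squid box
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
    -j DNAT --to-destination 192.168.1.50:3128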




Michel

...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] Squid and level 4 switch

2007-07-19 Thread Michel Santos

Ming-Ching Tiew disse na ultima mensagem:
 From: Henrik Nordstrom [EMAIL PROTECTED]

 Can I simulate a level 4 switch behaviour using Linux ? If yes,
 any insight to the necessary ebtables/iptables rules ?

Linux policy routing is an example of layer 4.

 I am wondering if this setup would be a reasonable representation of a
 so-called
 level 4 bridge. This configuration works under both 'tproxy transparent'
 as well as 'transparent' mode for squid 2.6 stable 13.

Seeing clearly the high risk of being shot to death ... but:

aren't you mixing things up here? *layer* 4 and *level* 4 are different
things, and policy routing eventually is still another.


For policy routing you need neither a level 4 bridge nor a level 4 switch,
because any OS with any kind of forwarding-capable firewall package can do
that, and in order to do routing (of any kind) you do not need a bridge
setup at all.


Michel




 Assuming :-

 NETMASK=255.255.192.0
 SQUID_IP=192.168.128.50
 L4_SWITCH_IP=192.168.128.51
 INTERNET_GW=192.168.128.1

 1. On the L4 switch create bridge br0 consisting of 3 ethernet interfaces.

 eth1 is connected to internet
 eth0 is connected to inside network
 eth2 is connected to squid

 # ifconfig eth0 0.0.0.0 promisc up
 # ifconfig eth1 0.0.0.0 promisc up
 # ifconfig eth2 0.0.0.0 promisc up
 # brctl addbr br0
 # brctl addif br0 eth0
 # brctl addif br0 eth1
 # brctl addif br0 eth2
 # ifconfig br0 $L4_SWITCH_IP netmask $NETMASK up

 2. Set up the bridge to mark the packets so that policy routing works :-

from the inside network to the internet with destination port 80, mark 1.
from the internet coming back with source port 80, mark 1 as well.

# ebtables -t broute -A BROUTING -i eth0 -p IPv4 --ip-protocol 6 \
    --ip-destination-port 80 -j redirect --redirect-target DROP
# iptables -t mangle -A PREROUTING -i eth0 -p tcp --dport 80 \
    -j MARK --set-mark 1

# ebtables -t broute -A BROUTING -i eth1 -p IPv4 --ip-protocol 6 \
    --ip-source-port 80 -j redirect --redirect-target DROP
# iptables -t mangle -A PREROUTING -i eth1 -p tcp --sport 80 \
    -j MARK --set-mark 1

 3. Set up additional routing table and ip rule :-

 # echo '100 one' >> /etc/iproute2/rt_tables
 # ip rule add fwmark 1 lookup one
 # ip route add default via $SQUID_IP table one

 ( routing table 'one' needs to have only one line, i.e. the default route;
 local interface routes will interfere with tproxy )

 # ip route add default via $INTERNET_GW table main

 Regards.





...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] Squid and level 4 switch

2007-07-18 Thread Michel Santos

Ming-Ching Tiew disse na ultima mensagem:

 Anyone has experience with level 4 switch  ? What is the working
 principle of a level 4 in respect to redirecting web traffic to a cache
 engine ? Does it do dst IP address rewrite ( iptables DNAT ) or
 does it do dst MAC address rewrite ( ebtables dnat ) when redirecting
 traffic to the cache engine ?


a level 4 switch should understand up to OSI layer 7, which means it can
understand NAT, and you can use it for load balancing the requests between
several servers

 Can I simulate a level 4 switch behaviour using Linux ? If yes,
 any insight to the necessary ebtables/iptables rules ?


It should, and you would probably also like to have a look at the Linux LVS
project (www.linuxvirtualserver.org).




Michel

...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] Recommended Cache Settings for cache_mem

2007-07-05 Thread Michel Santos

Alexandre Correa disse na ultima mensagem:
 i have 1 dedicated server for squid serving about 700 simultaneous users
 and 80 req/s !!

 the server is a dual opteron dual core with 4gb of ram..

 is this cache_mem fine?

 cache_mem 256 MB

 or is it best to decrease or increase this?

How much you need depends partly on maximum_object_size_in_memory as well.

You might like to monitor squid's memory usage and increase cache_mem to a
value which just does not cause swap usage; I would say you could set it
initially to 350-450MB or so.
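One hedged way to watch it (assuming the cache manager interface is enabled
and squidclient is installed; the port is illustrative):

squidclient -p 3128 mgr:info | egrep -i 'memory|swap'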


Michel


...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] Recommended Cache Settings for cache_mem

2007-07-04 Thread Michel Santos

Snow Wolf disse na ultima mensagem:
From my experience, when squid has filled a cache_dir close to
 20-30G, the OS becomes very high-load due to the disk cache
 swap. So I think even if you have a 160G disk, you may not want to set
 the cache_dir bigger than 20G.


Ahem ... so I would need 8 x 160G disks in order to get 160G of cache_dir,
or would 4 x 250 also do it? And the rest of each disk is used by this
disk cache swap thing?

Michel



 2007/7/4, Adam Parsons [EMAIL PROTECTED]:
 Hi, I would like advice on what the best settings would be for
 cache_mem on a SquidNT (Note: SquidNT) box that we will be putting out
 into a number of schools.  The specs of the workstation are Pentium Duo
 Core 1.86, 1 GB Ram, 160GB Hard drive.  We have set aside 60GB of space
 for the cache (though we could go to 80GB?, as we have the space).  The
 box is mostly going to be used for a local squid caching box, but it is
 also going to be used as a McAfee repository (which will only take up
 150MB of space).  What would be the best setting for cache_mem and would
 setting the cache_dir to 80GB be beneficial, seeing as most of these
 sites would use less than 10GB of internet a month?

 Thanks in advance - Adam














...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] Recommended Cache Settings for cache_mem

2007-07-04 Thread Michel Santos

Joel Jaeggli disse na ultima mensagem:
 Jeff Pang wrote:
 2007/7/4, Michel Santos [EMAIL PROTECTED]:

 Snow Wolf disse na ultima mensagem:
 From my experience, when squid has filled a cache_dir close to
  20-30G, the OS becomes very high-load due to the disk cache
  swap. So I think even if you have a 160G disk, you may not want to set
  the cache_dir bigger than 20G.
 

 The real measure here is the number of I/Os per second per spindle...
 you'll get something on the order of 50-100 from 7200 rpm disks (read
 service time is around 20ms), given that not all activities (writes)
 require an immediate seek. In the context of squid, fast disks or lots
 of disks are more important than large disks, because scaling comes from
 the number of requests that can be served.


I'd better hold my peace here ...


 ahem ... so I would need 8 x 160G disks in order to get 160G of
 cache_dir,
 or would 4 x 250 also do it? And the rest of each disk is used by this
 disk cache swap thing?


 the rest of the disk is empty.



As long as they are new, these empty disks are really cool, but I (maybe I
am kind of weird) buy them to fill them up ;)



Michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] Recommended Cache Settings for cache_mem

2007-07-04 Thread Michel Santos

Jeff Pang disse na ultima mensagem:
 2007/7/4, Michel Santos [EMAIL PROTECTED]:

 Snow Wolf disse na ultima mensagem:
 From my experience, when squid has filled a cache_dir close to
  20-30G, the OS becomes very high-load due to the disk cache
  swap. So I think even if you have a 160G disk, you may not want to set
  the cache_dir bigger than 20G.
 

 ahem ... so I would need 8 x 160G disks in order to get 160G of
 cache_dir,
 or would 4 x 250 also do it? And the rest of each disk is used by this
 disk cache swap thing?


 Hmm, you can try it. I mean no more than 20G in total for use.


I thought cache_dir is for caching, so in my case I have some servers with
more than 1TB of overall cache space, and I am pretty happy with it. 20G
or so might be enough for a non-dedicated server or a small business or
home proxy. Anyway, for the above-mentioned reasons, using 20G of a 160G
disk seems kind of awkward to me.

Michel


...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] Optimizing squid? Does anyone have any docs on optimizing squid?

2007-07-02 Thread Michel Santos

Henrik Nordstrom disse na ultima mensagem:
 On Mon, 2007-07-02 at 16:54 -0400, Mark Vickers wrote:
 I have it set up as reverse proxy.

 In test I hit it with eight load clients but can't get the CPU
 on the squid or carp boxes to go above 30%, and I can only pull down
 about 500 10k files per second.

 On what kind of server?

 Squid can only use a single CPU core, so if this is a SMP or multi-core
 server then you won't be able to fully utilize the CPU with a single
 Squid instance..



I don't know if that is so; I easily get 80-90% CPU on one core at peak time:


  PID USERNAMETHR PRI NICE   SIZERES STATE  C   TIME   WCPU COMMAND
 1092 squid 1  87  -19   497M   492M select 0  524.14 59.81% squid0
 1094 squid 1  78  -19   958M   955M select 3 326:34 23.69% squid2
 1093 squid 1  78  -19   960M   957M select 1 325:08 21.86% squid1
 1103 squid 1  -4  -19  5280K  1304K msgwai 2   0:37  0.00% diskd
 1102 squid 1  -4  -19  5280K  1304K msgwai 2   0:19  0.00% diskd
 1099 squid 1  -8  -19  2444K   720K piperd 2   0:00  0.00% unlinkd
 1100 squid 1  -8  -19  2444K   720K piperd 3   0:00  0.00% unlinkd
 1101 squid 1  -8  -19  2444K   664K piperd 2   0:00  0.00% unlinkd


michel

...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




[squid-users] aufs is broken too in 2.6 (and badly)

2007-06-24 Thread Michel Santos
Henrik Nordstrom disse na ultima mensagem:

 And yes, I don't care much for diskd. Never have. My main focus is on
 aufs and ufs. But how swap.state is maintained should be the same in all
 three ufs based cache_dir types. And with current FreeBSDs also fully
 capable of using aufs...

I followed your advice and changed to aufs, but aufs has serious problems too.

First of all, after an unclean reboot swap.state grows until the disk is
full and squid dies, so it is the same problem as diskd; only what appears
in messages until it dies is different:

Jun 24 08:02:20 gw squid[1026]: Store rebuilding is 100.0% complete
Jun 24 08:03:50 gw squid[1026]: Store rebuilding is 100.0% complete
Jun 24 08:04:05 gw squid[1026]: Store rebuilding is 100.0% complete
Jun 24 08:04:20 gw squid[1026]: Store rebuilding is 100.0% complete
Jun 24 08:05:05 gw squid[1026]: Store rebuilding is 100.0% complete
Jun 24 08:06:05 gw squid[1026]: Store rebuilding is 100.0% complete
Jun 24 08:06:20 gw squid[1026]: Store rebuilding is 100.0% complete
Jun 24 08:06:35 gw squid[1026]: Store rebuilding is 100.0% complete


swap.state grows and grows until disk is full and squid dies


But it gets worse, because without any visible reason aufs starts wiping
out cache subdirs until nothing is left on the disk. This case has nothing
to do with the unclean shutdown problem. The following takes place on a
running machine without any visible trigger: no reboot, no error from
either the OS or squid, just this:

storeSwapOutFileClosed: dirno 0, swapfile 0009FC83, errflag=-1 (2) No such
file or directory
storeDirClean: WARNING: Creating /c/c2/63/52
storeDirClean: /c/c2/63/52: (2) No such file or directory
storeDirClean: WARNING: Creating /c/c1/0A/02
storeDirClean: /c/c1/0A/02: (2) No such file or directory
storeSwapOutFileClosed: dirno 0, swapfile 0009FC83, errflag=-1 (2) No such
file or directory
storeSwapOutFileClosed: dirno 0, swapfile 0009FC85, errflag=-1 (2) No such
file or directory


And to answer your upcoming questions: the hardware is 100% OK; these are
SMP machines running FreeBSD RELENG_6 amd64.

The wipe-out starts on all cache_dirs (look at the attached images).

aufs was running for a week (after your advice) until it happened; some
weeks before that it was diskd, and months before, diskd with
2.5-S14-20060721.

Same problem on 3 servers.

I went back to diskd and 2.5 now.

But what kind of evil advice did you give me here :)



Michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.
attachment: ds_cache2-week.pngattachment: ds_cache1-week.png

Re: [squid-users] aufs versus 2.6-S13 + diskd is freaky bugged

2007-06-24 Thread Michel Santos

Henrik Nordstrom disse na ultima mensagem:
 tor 2007-06-21 klockan 06:40 -0300 skrev Michel Santos:

 Store rebuilding is 100% complete
 ... repeating ...

 and swap.state grows until disk is full

 You sure it's swap.state that grows? During the rebuild there are two
 files..

 swap.state      the old index log
 swap.state.new  the new index log while it's being rebuilt

 and it's only swap.state.new that grows..

 when the rebuild has completed, swap.state is replaced by swap.state.new,
 by removing swap.state and renaming swap.state.new.

Yes.

swap.state.new stops growing at a certain point, and then swap.state grows
beyond all limits until the disk is full and squid dies.

The moment seems to be when "100% complete" starts repeating endlessly in
messages.


Michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] Re: aufs is broken too in 2.6 (and badly)

2007-06-24 Thread Michel Santos

Henrik Nordstrom disse na ultima mensagem:
 sön 2007-06-24 klockan 09:37 -0300 skrev Michel Santos:

 but it get worse because without any visible reason aufs starts wiping
 out
 cache subdirs until it is nothing left on the disk.

 That's something Squid is incapable of doing. It can delete files, but
 not directories..


Humm ...

How can it happen, then? I have no user access to the machines, and the
cache_dirs are chowned and chmoded to be accessible only by squid. I also
checked the history for whether I did something stupid, but didn't find
anything other than the change to aufs.


I compile with --enable-truncate



Michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] aufs versus 2.6-S13 + diskd is freaky bugged

2007-06-24 Thread Michel Santos

Henrik Nordstrom disse na ultima mensagem:
 sön 2007-06-24 klockan 19:24 -0300 skrev Michel Santos:

 swap.state.new stops growing at a certain point and then swap.state
 grows
 out of limits until disk is full and squid dies

 At that point, do you still have a swap.state.new file, or only
 swap.state?


swap.state.new stays there, but without increasing in size anymore.


Michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] aufs versus 2.6-S13 + diskd is freaky bugged

2007-06-24 Thread Michel Santos

Henrik Nordstrom disse na ultima mensagem:
 sön 2007-06-24 klockan 19:36 -0300 skrev Michel Santos:

 swap.state.new stays there but without increasing size anymore

 Very very odd.

 Hmm.. one idea. Is it possible there was a squid -k rotate or squid
 -k reconfigure call while the rebuild was in progress?


I do not use logging.
I have logfile_rotate 1 in squid.conf, but I guess that doesn't matter as
long as the rotate command isn't issued.

A reconfigure I do not remember on the test machine where I watched the
ongoing process, but on the other machines where it happened, certainly not.


Michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




[squid-users] aufs versus 2.6-S13 + diskd is freaky bugged

2007-06-21 Thread Michel Santos

Just as a reminder, I get something like the following after an unclean diskd shutdown:

 Store rebuilding is -0.3% complete
 Store rebuilding is -0.4% complete
 Store rebuilding is -0.4% complete
 Store rebuilding is -0.3% complete
 Store rebuilding is -0.4% complete
 Store rebuilding is -0.3% complete
 Store rebuilding is -0.4% complete
 
 until suddenly ...
 
 Store rebuilding is 1291.7% complete
 Store rebuilding is 743.5% complete
 Store rebuilding is 1240.4% complete
 Store rebuilding is 725.0% complete
 Store rebuilding is 1194.1% complete
 Store rebuilding is 1150.4% complete
 Store rebuilding is 707.9% complete


with squid-2.5 I get cache_dir emptying (it believes the disk is full)
instead of a growing swap.state


So then I re-tested aufs, following advice from the list, and first of all
I soon get

squidaio_queue_request: WARNING - Queue congestion
squidaio_queue_request: WARNING - Queue congestion

with default configure options. But I do not care about this now; what
really matters is that I forced resets, and aufs *also gets it wrong*,
only it acts differently:

Store rebuilding is 100% complete
Store rebuilding is 100% complete
Store rebuilding is 100% complete
Store rebuilding is 100% complete
Store rebuilding is 100% complete
... repeating ...

and swap.state grows until disk is full

The difference between diskd and aufs is that with diskd I get a hit after
almost any reset; aufs needs to be reset twice or thrice in order to get
it done.


Then, about swap.state corruption, which is said to be the culprit for
cache_dir emptying after unclean shutdowns:

I ran a script which copies/backs up swap.state every second, and reset
the server.
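A sketch of the kind of loop meant here (paths made up; one timestamped
copy per second so nothing gets overwritten):

#!/bin/sh
# back up swap.state once per second until interrupted
while :; do
    cp /c/c1/swap.state /backup/swap.state.$(date +%s)
    sleep 1
done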

After fsck finishes, the swap.state is identical to its copy.

As soon as I start squid/diskd, things get crazy.

So I guess this swap.state story might be the cause in some cases, but I
did not find a single such case after 20 forced cache_dir problems, so I
guess there is something else doing weird things.

Any idea?

Might it be possible that there is a ctime/mtime problem, or some other
time-comparison confusion, in the code that rebuilds the cache_dirs?


Michel

...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] 2.6-S13 + diskd is freaky bugged

2007-06-17 Thread Michel Santos

Tek Bahadur Limbu disse na ultima mensagem:

 After an unclean reboot squid builds a monster swap.state which fills up
 the disk in seconds (graph attached)

 the funny thing is that until the disk is full it logs

 Store rebuilding is -0.3% complete
 Store rebuilding is -0.4% complete
 Store rebuilding is -0.4% complete
 Store rebuilding is -0.3% complete
 Store rebuilding is -0.4% complete
 Store rebuilding is -0.3% complete
 Store rebuilding is -0.4% complete
 
 until suddenly ...
 
 Store rebuilding is 1291.7% complete
 Store rebuilding is 743.5% complete
 Store rebuilding is 1240.4% complete
 Store rebuilding is 725.0% complete
 Store rebuilding is 1194.1% complete
 Store rebuilding is 1150.4% complete
 Store rebuilding is 707.9% complete


 what shall I do with this?


 Hi Michel,

 Have you tried stopping Squid and deleting your swap.state file and
 restarting Squid again?


No, because that does not solve anything. I wiped the partitions (newfs)
and ran squid -z to recreate the cache_dirs, which is faster.

The problem is not unique to this particular situation; it is easily
repeatable by shutting down the machine without waiting for squid to close
the files, and bang ...

diskd already had a problem after unclean shutdowns before, but it only
slowly unlinked the cache_dir content, which btw was already annoying, but
did not kill the service.

Michel

...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




[squid-users] COSS unusable on FreeBSD?

2007-06-17 Thread Michel Santos

squid needs about two hours to build an 8GB coss_dir on a clean partition,
and while it is building the cache_dir the service is practically unusably
slow.

Even a 1GB one needs more than 30 minutes.

The same time is spent each time squid starts.

And this is on a dual-CPU machine with U320 disks, FreeBSD RELENG_6 amd64,
UFS2. While it is building there is no high CPU usage or disk usage; it
seems to be simply slow by itself.

Once the cache_dir is rebuilt, it works OK with good performance, but
getting there is a burden.

Is it meant to be that slow?


Michel

...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] COSS unusable on FreeBSD?

2007-06-17 Thread Michel Santos

Adrian Chadd disse na ultima mensagem:
 On Sun, Jun 17, 2007, Michel Santos wrote:

 squid needs about two hours to build a 8GB coss_dir on a clean partition
 while it is building the cache_dir the service is practically unusable
 slow

 Nope, not meant to be that bad.
 What's squid -v say, and what's your /etc/libthr.conf say?



Squid Cache: Version 2.6.STABLE13-20070603
configure options: '--enable-default-err-language=Portuguese'
'--enable-storeio=diskd,ufs,aufs,coss,null'
'--enable-removal-policies=heap,lru' '--enable-underscores'
'--disable-ident-lookups' '--disable-hostname-checks'
'--enable-large-files' '--disable-http-violations'
'--enable-truncate' '--disable-wccp' '--disable-wccpv2'
'--enable-follow-x-forwarded-for' '--disable-linux-tproxy'
'--disable-linux-netfilter' '--disable-epoll'

I tried without

'--enable-large-files'
'--enable-truncate'

but it makes no difference.

I don't use libthr.conf but libmap.conf:

[/usr/local/squid/sbin/squid]
libpthread.so.2 libthr.so.2
libpthread.so   libthr.so


Michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.








Re: [squid-users] 2.6-S13 + diskd is freaky bugged

2007-06-17 Thread Michel Santos

Henrik Nordstrom disse na ultima mensagem:
 sön 2007-06-17 klockan 07:22 -0300 skrev Michel Santos:

 the problem is not unique to this particular situation, it is repeatable
 easily by shutting down the machine without waiting for squid closing
 the
 files and bang ...

 Shutting down hard by pulling the power, or by the shutdown command?


Actually both; reducing the rc_shutdown time so that it kills the running
processes does the same harm to squid's cache_dirs.

 Squid will be very unhappy if swap.state contains garbage, which might
 happen if you suddenly pull the power and your OS is using a filesystem
 which don't guarantee file integrity in such conditions..


Of course, but the fs should be recovered by the system's fsck, which
definitely happens after a power or hardware failure. So I mean the
cache_dirs and their content, as well as swap.state, are in perfect
condition (no file corruption, I mean) when squid starts.


So let's say that because of a power outage swap.state is not written out
perfectly. In my opinion, when squid rebuilds the cache_dir, it should be
rebuilt up to the latest correctly written transaction. You must have some
kind of checkpoint in it, or not? So let's say the cache_dir state up to a
minute before the power-off or so. Then squid discards the overhead in the
cache_dir - but squid actually deletes the complete cache content within a
day or so; that can not be right.

In my opinion this is a problem of the system time and how diskd handles
it, because sometimes the above process kicks in when going into summer
time automatically at a bad moment.

That was the case on 2.5.

2.6 now calculates the % wrong and crashes and never comes back, because
the disk gets full.

It certainly seems wrong to me that squid builds a 20GB swap.state on a
1GB cache_dir (under whatever conditions).



Michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] 2.6-S13 + diskd is freaky bugged

2007-06-17 Thread Michel Santos

Henrik Nordstrom disse na ultima mensagem:


 Not much if anything has changed in this area since 2.5. At least not
 after the 2GB changes in 2.5.STABLE10.



You say so, but diff counts 111+/90- on store_dir_diskd.c and 50+/22- on
store_io_diskd.c between 2.5.S14-20060721 and 2.6.S13, so "not much" is
pretty relative.


 2.6 now calculates the % wrong and crashes and never comes back because
 the disk gets full

 Should not happen, and have not heard of it happening from anyone else.

The first part I agree with, but hum :)
I am not sure if this means "not true until someone else reports it" or
"not true coming from me" ... either way, a strange answer. Does it mean
you don't care and I'm on my own with this?


Michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] 2.6-S13 + diskd is freaky bugged

2007-06-17 Thread Michel Santos

Henrik Nordstrom disse na ultima mensagem:

 It means it's the first time we have heard of this problem.

 And yes, I don't care much for diskd. Never have. My main focus is on
 aufs and ufs. But how swap.state is maintained should be the same in all
 three ufs based cache_dir types. And with current FreeBSDs also fully
 capable of using aufs...



That is too sad to hear ...

Capable is one thing, but fast is what diskd is, especially on SMP
machines.


Anyway, I wonder why the swap.state problem does not appear when using
ufs. On FreeBSD you can tear off the power cable twice and thrice, and
squid with a ufs cache_dir comes up fine after fsck has corrected the
errors - but - diskd goes wild.

It is clearly a diskd-isolated problem, since the cache_dir and swap.state
are the same for both; so I guess it has nothing to do with the swap.state
file itself, it is only rebuilt wrongly when running diskd.


Michel

...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




[squid-users] 2.6-S13 + diskd is freaky bugged

2007-06-15 Thread Michel Santos

After an unclean reboot squid builds a monster swap.state which fills up
the disk in seconds (graph attached)

the funny thing is that until the disk is full it logs

Store rebuilding is -0.3% complete
Store rebuilding is -0.4% complete
Store rebuilding is -0.4% complete
Store rebuilding is -0.3% complete
Store rebuilding is -0.4% complete
Store rebuilding is -0.3% complete
Store rebuilding is -0.4% complete

until suddenly ...

Store rebuilding is 1291.7% complete
Store rebuilding is 743.5% complete
Store rebuilding is 1240.4% complete
Store rebuilding is 725.0% complete
Store rebuilding is 1194.1% complete
Store rebuilding is 1150.4% complete
Store rebuilding is 707.9% complete


what shall I do with this?

Michel
...





attachment: ds_cache2-day.png

Re: [squid-users] 2-gigabit throughput, 2 squids

2007-06-13 Thread Michel Santos

Dave Dykstra disse na ultima mensagem:
 Hi,

 I wanted more throughput for my application than I was able to get with
 one gigabit connection, so we have put in place a bonded interface with
 two one-gigabit connections aggregated into one two-gigabit connection.
 Unfortunately, with one squid, re-using objects that are small enough to
 fit into the Linux filesystem cache but large enough to be efficient (a
 few megabytes each), it maxes out a CPU core at around 140MB/s.  This is
 a dual dual-core AMD Opteron 270 (2Ghz) machine, so it is natural to
 want to take advantage of another CPU.  (This is a 64-bit 2.6.9 Linux
 kernel and I think I have squeezed about all I am going to out of the
 software).  At first I tried running two squids separately on the two
 different interfaces (without bonding, 2 separate IP addresses) but that
 confused the Cisco Service Load Balancer (SLB) we're using to share the
 load &amp; availability with another machine, so I had to drop that idea.
 For much the same reason, I don't want to use two different ports.
 So then the problem is how to distribute the load coming in on the one
 IP address &amp; port to two different squids.  Two different processes
 can't open the same address &amp; port on Linux, but one process can open a
 socket and pass it to two forked children.  So, I have modified
 squid2.6STABLE13 to accept a command line option with a file descriptor
 of an open socket to use instead of opening its own socket.  I then
 wrote a small perl script to open the socket and fork/exec the two
 squids.  This is working and I am now getting around 230MB/s throughput
 according to the squid SNMP statistics.


I use another approach: I run three squids, two of them on 127.0.0.2 and
.3, which serve as parents. So the IP address which contacts the remote
sites is the IP address of the server. I get very high performance and
the setup is easy, without helper programs. What was important to me is
that I can sibling the two parents so objects do not get cached twice
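For anyone who wants to reproduce it, a minimal sketch of the frontend's
squid.conf - the ports, the cache_dir path and the null store are my
illustrative assumptions here, not an exact copy of my config:

# frontend instance: talks to the clients, caches nothing itself
http_port 3128
# the null store type discards objects instead of storing them
cache_dir null /tmp
# the two loopback instances do the caching (assumed HTTP/ICP ports)
cache_peer 127.0.0.2 parent 3128 3130 round-robin
cache_peer 127.0.0.3 parent 3128 3130 round-robin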

I am curious whether you can get higher throughput using sockets or TCP
over loopback.

michel
...









Re: [squid-users] 2-gigabit throughput, 2 squids

2007-06-13 Thread Michel Santos

Dave Dykstra disse na ultima mensagem:

 I use another approach. I run three squids. using two on 127.0.0.2 and
 .3
  which serve as parents. So the IP address which contacts the remote sites
 is the IP address of the server. I get very high performance and the
 setup
 is easy without helper programs.

 So everything is filtered through one squid?  I would think that would
 be a bottleneck.  Does it not actually cache anything itself, just pass
 objects through with proxy-only?


well, the first is not caching but talks only to the clients, and uses
the two other squids as parents in round-robin fashion. The two parents
then query each other as proxy-only siblings
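A sketch of one parent's side under that description - ports and paths
are assumed; the second parent mirrors it with the addresses swapped:

# parent instance on 127.0.0.2 (assumed ports)
http_port 127.0.0.2:3128
cache_dir diskd /cache1 50000 16 256
# query the other parent as a sibling; proxy-only means a hit is fetched
# from the neighbour but not stored a second time
cache_peer 127.0.0.3 sibling 3128 3130 proxy-only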



 We have a pair of machines for availability &amp; scaling purposes and I
 wanted them to be siblings so they wouldn't both have to contact the
 origin server for the same object.  The problem with cache_peer siblings
 is that once an item is in a cache but expired, squid will no longer
 contact its siblings.  In my situation objects currently expire
 frequently so having siblings was pretty much useless (as I like to

my setup is working just fine for me, but I do not use extra external
servers anymore



 I am curious if you can get higher throuput using sockets or tcp over
 loopbacks.

 What kind of throughput do you get with your arrangement?


what I meant here is the inter-squid throughput, not the network
performance

sincerely, I never did number-testing, since this configuration gives
considerably better response time and bandwidth reduction in comparison
to single-squid servers. The largest POP where I have this running is an
ATM connection with 18-20 Mb/s of http going in, where the conventional
squid setup could not handle it properly.

Michel

...









[squid-users] calcru went backwards panics with 2.6

2007-06-08 Thread Michel Santos
Hi
since I upgraded some servers from the latest 2.5 to 2.6-13 I get squid
panics freezing the machine with "calcru went backwards"

How can I debug this further, or what should I do at all?

I am running FreeBSD RELENG_6 amd64

Michel
...









Re: [squid-users] peer problem with 2.6

2007-06-02 Thread Michel Santos

Hi
probably you do not remember, so a short summary

I got an "access forbidden" error on the client from the cache peer, with
the request coming from a transparent proxy between them; all squid
versions are 2.6

The same setup is/was working fine with 2.5, and I am sure the new
transparent options for squid 2.6 were correct

So after letting it go for a long time I tried again and the error still
persisted, and as a last option, after some source reading, I tried
changing

acl peer src 127.0.0.2
into
acl peer src 127.0.0.2/32

oops - and it works


So it seems that 2.5 accepts a src IP without a mask specifier and 2.6
does not

it would be nice to have this described in the docs

anyway, my problem is solved now

...
Michel









[squid-users] coss bs=value

2007-06-02 Thread Michel Santos

Hi

when I do the initial coss configuration, should the coss bs=size value
match the filesystem's block size, or does it not matter?
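For context, the kind of line I am asking about - a sketch assuming squid
2.6's coss syntax, where the block size option is spelled block-size= and
all numbers are placeholders, not recommendations:

cache_dir coss /cache/coss/stripe 1024 block-size=512 max-size=131072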

anyway, what bs=size do you suggest for the filesystem?

thanks

...

Michel









Re: [squid-users] optimizing squid and FreeBSD

2007-03-20 Thread Michel Santos

Chris Robertson disse na ultima mensagem:

 all these suggestions are kind of high; you hardly get over 2000 open
 files unless you have a heavily loaded server - that starts somewhere
 over 6-10 Mb/s sustained http throughput, when you may need more open
 files


 High bandwidth, high latency connections (satellite links) also eat file
 descriptors quite quickly.



Yes, I really hadn't considered this in my statement, nor slow disk
systems or slow disks themselves




 Suggested settings are always welcome, but the most general advice is
 available from http://wiki.squid-cache.org/BestOsForSquid.  Note there
 are not much in the way of OS tuning tips.  Unless you are really
 pushing the boundaries of what Squid is capable of, they just won't buy
 you much.



hum, maybe on low-traffic machines, but there are certain priorities I
guess: first of all, good hardware comes first - and not only disks but
also network cards and memory. Bad cheap NICs can cause really terrible
performance degradation as well as steal important CPU time. After
getting the hardware straight you can get really great improvements by
tweaking values for a cache server. Since we talk FreeBSD here, you might
easily get 20% or more overall performance benefit in comparison to a
stock OS, especially on SMP machines.


Michel




...









Re: [squid-users] optimizing squid and FreeBSD

2007-03-20 Thread Michel Santos

Henrik Nordstrom disse na ultima mensagem:
 tis 2007-03-20 klockan 20:13 +0545 skrev Tek Bahadur Limbu:

 I admit there is no such rule but I am using it as a base for
 measurement and comparison. Obviously, req/sec is an easier and better
 unit than, say, open_files/sec.

 What you want to monitor is the number of seeks/s, or failing that, the
 amount of time something is waiting for I/O.

good point


 the first isn't very easily collected in most OSes, but the latter is
 usually available via sar, iostat etc.

I believe neither one is an effective method for measuring cache
performance. The point of caching is less bandwidth consumption; speeding
up network access is not so much the point anymore, since we all have
plenty of bandwidth everywhere. So what does it matter getting 1000
requests satisfied if each of them is 1k?

I compare the incoming http traffic to the outgoing: the higher the
difference, the better my cache performance, right?



 the big difference between ufs and aufs (and also diskd) is that with
 aufs Squid does not wait while there is disk i/o, continuing network
 operations as the disk i/o takes place.

 With ufs each millisecond spent in iowait means network activity was
 paused..



that is certainly an interesting point. Being I/O bound can, I guess, be
fought with faster, CPU-independent disks and subsystems (SCSI), and then
by using polling on, for example, em (Intel Pro) NICs, which seem to
produce fewer interrupts.

Also, setting vfs.write_behind and vfs.vmiodirenable may give an
important improvement on some hardware, together with vfs.read_max.
I do not know why net.isr.direct is not on by default, but at least on
SMP machines it is what you want.
This probably still does not work well on versions older than 6.2
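To illustrate, the kind of /etc/sysctl.conf entries I mean - the values
are examples only and depend entirely on your hardware, so compare
against your current defaults with sysctl -a before changing anything:

# dispatch inbound packets directly in the interrupt path (helps on SMP)
net.isr.direct=1
# larger read-ahead clusters
vfs.read_max=16
# cluster writes behind sequential writers (often already the default)
vfs.write_behind=1
# let directory data use the VM buffer cache (often already the default)
vfs.vmiodirenable=1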

All this does not remove ufs's bottleneck, but it helps a lot. So, sure,
diskd is the preferred cache_dir type on FreeBSD - but again, not on
low-traffic machines, where I cannot find any difference. IMO, as long as
your machine does not handle more than 2 Mb/s it does not matter what you
do; FreeBSD does it well either way - provided you have good hardware.

Michel

...









Re: [squid-users] optimizing squid and FreeBSD

2007-03-19 Thread Michel Santos


 You can add kern.maxfilesperproc=8192 in /etc/sysctl.conf to increase your
 squid file descriptors to 8192.
 You may also have to change your kern.maxfiles parameter to, say, 8192
 or 16384.


all these suggestions are kind of high; you hardly get over 2000 open
files unless you have a heavily loaded server - that starts somewhere
over 6-10 Mb/s sustained http throughput, when you may need more open
files

when you use coss you do not get even close to half of that

on FreeBSD you should always query your system, e.g. with sysctl
kern.openfiles, to see what is going on; then, when you are *really*
coming to the limit, you might like to raise it a little - and otherwise
not
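As a concrete check - these sysctl names exist on FreeBSD; whether and
how far to raise the limit is up to your measured load:

# how many files are open system-wide right now
sysctl kern.openfiles
# the current ceilings to compare against
sysctl kern.maxfiles kern.maxfilesperproc
# only if you really approach the ceiling, raise it, e.g.:
sysctl kern.maxfilesperproc=8192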


 Well if your proxy serves less than 30 requests per second, then ufs
 storage is fine. However if your demands are above 30 requests per second,
 then either diskd or aufs will be good. However you may need to tweak
 your kernel to implement diskd for FreeBSD.


you make it sound easier than it is: first of all, what your machine
supports and needs is relative to the machine's processing power. There
is no such 30 req/sec limit or switch-over rule ...

but I agree, on FreeBSD you might consider diskd; the difference is
small, though, and depends on the machine, the through-going http
traffic, and whether your HD can really take the load (or better: answer
the requests in time)

so my opinion here is: ufs is good and stable and fits high load for
whoever is not a specialist in system fine-tuning. If you know the nasty
kernel stuff *and* have really nasty hardware and want to get the most
out of it, then you should go diskd - but better have a perfect UPS and a
server which never crashes, since you may lose your cache content;
anyway, it is a long way to get those 5-10% more (in comparison to ufs)

aufs? hands off


 Try using these in your kernel config file:

 options MSGMNB=8192 # max # of bytes in a queue
 options MSGMNI=40   # number of message queue identifiers
 options MSGSEG=512  # number of message segments per queue
 options MSGSSZ=64   # size of a message segment
 options MSGTQL=2048 # max messages in system

 options SHMSEG=16
 options SHMMNI=32
 options SHMMAX=2097152
 options SHMALL=4096


these values might be kind of unreasonable, but they probably do not
influence anything, depending on your load - so you may not see whether
they do unless you monitor SHM and MSG usage on your system. So I believe
that if you can live with SHMSEG=16 you do not need to set anything at
all; it is lower than FreeBSD's default

btw, setting SHMMAX is old stuff; you should set SHMMAXPGS, which adjusts
SHMMAX automatically, taking the other tweaked SHM values into account -
if you do it your way you may see undesired behaviour

anyway, the ipc.* values are tunables, so you do *not* need to compile
them into your kernel
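As a sketch of what I mean, assuming a FreeBSD 6.x box - to my knowledge
the message-queue values are boot-time tunables while the shm values can
be changed at runtime; every number below is a placeholder, not a
recommendation:

# /boot/loader.conf - read at boot, no kernel rebuild needed
kern.ipc.msgmnb=8192
kern.ipc.msgtql=2048

# /etc/sysctl.conf - adjustable at runtime
kern.ipc.shmmni=32
kern.ipc.shmseg=16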

if you want to tune diskd, first read a lot of the PostgreSQL tuning
material - they are the only lonely guys who seem to have ever worked
seriously (except me, of course ;) ) with this IPC stuff on FreeBSD. What
you find on squid's website regarding FreeBSD makes diskd work on old
versions, but not tuned.




michel

...









Re: [squid-users] maximum netowrk interfaces

2007-03-12 Thread Michel Santos

[EMAIL PROTECTED] disse na ultima mensagem:

 While this is not strictly a squid question,

   does anyone know the maximum number of virtual interfaces that can
 be
 created on a Linux box? I've got a proposal being shoved at me to create a
 virtual interface per section here at work and have individual squids
 listening to those interfaces (don't get me started on how bad that idea
 is).



as many as the computer supports

squid will listen on all of them unless you configure it explicitly in
squid.conf

multi-instance squid is a good idea, but I am not sure it is useful to
have an instance bound to each interface
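For example, in squid.conf (the address below is only a placeholder):

# listen on all interfaces
http_port 3128
# listen only on one address
http_port 192.168.1.10:3128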


João

...









Re: [squid-users] Timeouts/browser hang with autodetect proxy

2007-03-12 Thread Michel Santos

Brian Riffle disse na ultima mensagem:
 I am having an issue with timeouts using squid with both IE and
 Firefox when using auto-detect proxy.  When I am autodetecting the
 proxy server, if I type in an invalid domain name (like google.comm,
 or googlec.om, etc) it will take upwards of 20 seconds to timeout, and
 give me the squid error page that the domain does not exist.  During
 this time, the browser completely locks up, and is unusable.  However,
 during my troubleshooting, I have noticed that if I manually set the
 proxy settings in the browsers (with the same rules and exceptions as
 the proxy.pac and wpad.dat files), the timeout does not happen.  The


are you sure your DNS or MIME settings are correct, so that the
auto-detection will not fail?

João
...









Re: [squid-users] Squid and larger environments

2007-02-12 Thread Michel Santos

Denys disse na ultima mensagem:
 2xXeon 2.4 Ghz / 2 GB RAM / 2x U320 36.4 Gb 10K RPM HDD
 client_http.requests = 264.370612/sec
 client_http.hits = 95.849013/sec
 client_http.errors = 0.00/sec
 client_http.kbytes_in = 184.948096/sec
 client_http.kbytes_out = 2548.093772/sec

 server.all.requests = 175.838190/sec
 server.all.errors = 0.00/sec
 server.all.kbytes_in = 1938.163383/sec
 server.all.kbytes_out = 146.185162/sec
 server.http.requests = 175.824857/sec
 server.http.errors = 0.00/sec
 server.http.kbytes_in = 1938.110050/sec
 server.http.kbytes_out = 146.158496/sec


if this is traffic from 25000 users then one of us is dreaming

btw, this machine will never stand 25000 users ...
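For what it is worth, reading the counters above the way I measure things
- comparing what goes out to the clients with what comes in from the
origin servers:

saved   = client_http.kbytes_out - server.all.kbytes_in
        = 2548 - 1938 = 610 kB/s
savings = 610 / 2548 = roughly 24% of the client-side bytes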

Michel



 On Mon, 12 Feb 2007 19:45:24 -0200 (BRST), Michel Santos wrote
 Denys disse na ultima mensagem:
  I have more than 25000 users, but i recommend to not rely on single
  server.

  but not on a UP machine ... what do you use here?


...









Re: [squid-users] Squid Under High Load

2007-02-02 Thread Michel Santos

Adrian Chadd disse na ultima mensagem:
...

 So as long as you're able to store small objects separately from large
 objects and make sure one doesn't starve IO from the other then you'll
 be able to both enjoy your cake and eat it too. :P


that is really *the* issue

I guess coss certainly is the first step in this direction

when you separate by object size you can even tune the filesystem and OS
exactly for that file size, which can give you an extreme performance
boost

since you can manage cache_dirs very well by minimum_object_size and
maximum_object_size (unfortunately there is no
minimum_object_size_in_memory ...) this is an easy approach
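A sketch of that split as two instances' squid.conf fragments - the 300k
boundary matches what I run, but the paths, sizes and the diskd choice
are illustrative assumptions:

# small-object instance
maximum_object_size 300 KB
cache_dir diskd /cache/small 20000 16 256

# large-object instance
minimum_object_size 300 KB
maximum_object_size 700 MB
cache_dir diskd /cache/large 200000 16 256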

often it seems difficult to do this on one machine, and on a small
network there may not be a budget for 2 or 3 cache servers, or the
benefit does not justify it

Now, an interesting point is that I can tune the partition for larger
files, since a COSS stripe is one large file - and so I could use diskd
together with it

Today I use one squid instance with a null fs cache_dir, storing only
small objects up to 300k in memory, and two more instances storing
mid-size objects on disk from 300k up; I have another server with very
large disks only to store objects >50MB, as a proxy-only parent.

most people warn me about memory and such, but I do not care; hardware is
easy and cheap, even when expensive, because at the end it pays off: I
get a constant 30-40% tcp:80 benefit, and in peaks very much more,
200-500% to be exact. I measure the incoming tcp:80 on the outside of my
transparent proxy router and the outgoing tcp:80 on the inside. Meaning:
supposing 2Mb of tcp:80 coming in, I deliver 2.6 - 2.8Mb - that is money.
Maybe for some countries this is not important, but here we pay about
US$1,200-1,500 per 2Mb, so each byte I get out is important, no matter how


Michel
...









Re: [squid-users] How to exempt ftp from squid?

2007-02-02 Thread Michel Santos

John Oliver disse na ultima mensagem:
 I banged up an autoconf.pac script (which isn't easy, considering the
 only slivers of documentation I can find are a good ten years old!).
 It looks like my browser just assumes that ftp should go through squid,
 and that doesn't seem to want to work.  Since I see no real value in
 proxying FTP, how do I exempt FTP in the autoconf.pac script?

probably a browser configuration issue - using the same proxy for all
protocols

something like if (shExpMatch(url, "ftp:*")) return "DIRECT"; in the PAC
file might work

Michel
...









Re: [squid-users] File Descriptors

2007-02-02 Thread Michel Santos

Henrik Nordstrom disse na ultima mensagem:
 fre 2007-02-02 klockan 10:54 +0800 skrev Adrian Chadd:

 If your system or process FD limits are lower than what Squid believes
 them to be, then yup. It'll get unhappy.

 Only temporarily. It automatically adjusts fd usage to what the system
 can sustain when hitting the limit (see fdAdjustReserved)

 But this also causes problems if there is a temporary system-wide
 shortage of filedescriptors due to other processes opening too many
 files. Once Squid has detected a filedescriptor limitation it won't go
 above the number of filedescriptors it used at that time, and you need to
 restart Squid to recover after fixing the cause of the system-wide
 filedescriptor shortage.



In a former message you said:

When Squid sees it's short of filedescriptors it stops accepting
 new requests, focusing on finishing what it has already accepted.

isn't this conflicting with what you said before?

does squid recover, or does it need to be restarted?

Michel
...









Re: [squid-users] squid.conf bug?

2007-02-01 Thread Michel Santos

Adrian Chadd disse na ultima mensagem:
 On Wed, Jan 31, 2007, Michel Santos wrote:

 but it should have an additional check, because any other char !ALL should
 out here as well or not? bitch is certainly unacceptable :) the
 debug_option are ints aren't they?

 You can do that check in the argument parsing routine and do the # check
 in _db_init().



I will try to find some time at the weekend and will send you the diffs
afterwards to check out.

the src versions seem to be old, so will any 2.6 do, or do you have some
advice?


Michel
...









Re: [squid-users] Squid Under High Load

2007-02-01 Thread Michel Santos

Manoj Rajkarnikar disse na ultima mensagem:
 On Wed, 31 Jan 2007, Michel Santos wrote:

 16MB. we analyzed the access logs for size distribution and the hitrate
 and number of request distribution shows only very few requests are made
 for objects of size greater than 20MB and every big object requested will
 take up a large cache space where there could have been many smaller
 objects.

 I use 150-250Gb for cache_dirs and still feel it too small, but I permit
 700MB max so a complete iso image can get cached


 objects of that size are rarely downloaded here and are not worth caching
 at all. may not be true in your situation.


depends how you look at it:
disk space is cheap, and serving one 650MB object is a fat win even if it
happens only twice a month



 use scalar squid log analyzer and analyze your access logs daily and
 you can find out what max object size is better suited for you. I rotate
 the access log daily and have written a simple shell script that'll analyze
 the access log daily and generate a webpage. can send it if you like. it
 uses scalar.awk (downloadable).



nice, and thanks for your kind offer, but I turn logging off completely.
Even if the performance boost is not very impressive, I do not need to
take care of log files, and I am really not so sure that logging does not
hurt users' privacy - but the main reason is that I am lazy and like
performance ;)

...









Re: [squid-users] Squid on a Desktop

2007-02-01 Thread Michel Santos

Nicolás Ruiz disse na ultima mensagem:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 Henrik Nordstrom wrote:
 ons 2007-01-31 klockan 22:40 + skrev RW:
 Is there any advantage in running Squid on a single user desktop PC? In
 other words can Squid cache more effectively than a browser?

 My gut feeling is that the benefits will be very slim.. the browser
 cache is quite effective, and getting beyond that is a bit hard with a
 single user.


I recently disabled my browser cache and use squid with coss, which gave
me a real boost here. Normally I use Konqueror and sometimes Firefox, and
on both I got a real performance improvement.


Michel
...









Re: [squid-users] Squid Under High Load

2007-01-31 Thread Michel Santos

Manoj Rajkarnikar disse na ultima mensagem:
 On Wed, 31 Jan 2007, Matt wrote:

 To expand on that.  Say after 4 days it's filled a little over 50
 percent of that 48G, does that mean it's sized about right?  Or are you
 referring to total http traffic transferred in a week?

 48G sounds good to me. we use 32G cachedir for a 6Mbits link and we are
 able to achieve ~42-43% byte hit and 45-48% request hit.
 --

average 40% byte hit rate, or peak?

what maximum_object_size do you permit?

I use 150-250GB for cache_dirs and still feel that is too small, but I
permit 700MB max so a complete ISO image can get cached
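That is, in squid.conf (700 MB being my choice, not a recommendation):

maximum_object_size 700 MB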


Michel
...









Re: [squid-users] High CPU usage problem on Squid 2.6 STABLE9

2007-01-30 Thread Michel Santos

Adrian Chadd disse na ultima mensagem:
 I've started to flesh out a how to profile Squid section in the Squid
 FAQ:

 http://wiki.squid-cache.org/SquidFaq/SquidProfiling

 See if there's anything there that helps you out. Check for disk and CPU
 usage,
 consider running oprofile and identify where all the CPU use is going.



Hi
nice doc, but I believe it is very neutral; most users perhaps do not
know how to find out which resources or loads are used by squid and which
are not. Maybe something more specific would be helpful, like for example
top -U squid

may I ask what "modern loads" are - or better, how much is that?

Michel
...









[squid-users] squid.conf bug?

2007-01-30 Thread Michel Santos

is this probably a bug?


the following is one line, and all options after the comment are still used

debug_options ALL,1 #6,3 20,3 28,3 32,3 47,3 51,3 79,3 81,3



so I need to break the line in order to disable the other options:

debug_options ALL,1
#6,3 20,3 28,3 32,3 47,3 51,3 79,3 81,3

Michel
...









Re: [squid-users] squid.conf bug?

2007-01-30 Thread Michel Santos

Emilio Casbas disse na ultima mensagem:
 I don't think is a bug,
 from the squid.conf.default

 -
 Lines beginning with an hash (#) character are comments.
 -

 You can have a more clear way to achieve this,

 debug_options ALL,1
 #debug_options ALL,1 6,3 20,3 28,3 32,3 47,3 51,3 79,3 81,3



we all know that, don't we

but I will NOT comment out the whole debug_options line, only some of the
options

normally you can tab and # and anything after is a comment;
this pattern does not work in debug_options

Michel
...









Re: [squid-users] High CPU usage problem on Squid 2.6 STABLE9

2007-01-30 Thread Michel Santos

Andrew Miehs disse na ultima mensagem:

 Top on Linux (at least on Debian) only shows one cpu (an average
 over all 4 in this case).


I'd better be quiet because I really do not know much about Linux, but I
believe Red Hat's top also shows a CPU column like BSD's, where you see
the CPU ID of each process (when SMP is enabled)

Michel
...









Re: [squid-users] High CPU usage problem on Squid 2.6 STABLE9

2007-01-30 Thread Michel Santos

Robert disse na ultima mensagem:

 My connection speed is 45 Mbit max, but real traffic is about 30-35 Mbit
 including P2P.


hey, now it gets interesting: so you may have 10-15 Mb/s as http traffic
here, maybe some more sometimes

do you do NAT on this box? Or any forwarding rules? Or is it a standalone
proxy server?

I can tell you from my experience that for 10-15 Mb/s of http traffic you
need heavier stuff than your machine. On a dual dual-core Opteron I have
8 Mbit/s of http and my overall CPU usage is at 60% - but I have to say
that the same machine is a router with through-going traffic, and runs
bandwidth control and a firewall and squid as a transparent proxy. Squid
itself never runs at more than 15-20%, but I use only SCSI disks. Also my
cache_dirs are min 250GB

I have another with 14 Mbit/s of http and almost the same hardware, but I
needed to take the bandwidth control out because the CPU was always at
90%. I believe that for this kind of traffic your SATAs are not good
enough, or your OS (with SMP) does not handle the polling well, or there
is some other kind of deficiency.

Also, you probably need at least 6-8GB of RAM or more. You may like to
track your disk I/O via SNMP to see that better. And with this kind of
traffic you should pay extreme attention to your NICs and their drivers:
often a cheap NIC and a bad driver eat all your interrupt time and the
machine ends up being the blocker. For sure the standard sysctl values,
especially for networking, will not satisfy your needs.

And, no offense please, but serious traffic may need a more serious OS -
I mean a proven Linux or BSD which is known to be capable of this kind of
traffic load. Maybe you should ask whether somebody can suggest a Linux
distro which is good for that, so you do not need to fiddle around here.

But if you don't switch to SCSI I believe you will not get lucky with
any. Don't forget that most of the disk I/O for IDE/SATA is done by the
CPU, so maybe your machine gets stuck here: the CPU tries and runs itself
to death since the disk cannot handle it.

but these are only thoughts, since I have no experience with your OS, so
please do not get angry with me.

Michel
...








