[squid-users] Helppppp

2016-08-12 Thread Michel
Hello everyone. I have the following problem. The Squid server's system
time is set correctly, but the messages Squid generates are 5 hours ahead.
For example, when access to a restricted website is denied through Squid,
the error page says Generated Thu, 22 Jan 2015 18:45:05 GMT while the
local time is 1:45 pm.
Does anyone know the solution?


Regards,
Michel


[squid-users] NTLM Auth with Squid 4

2016-07-29 Thread Michel Peterson
Hi friends,

I've compiled Squid 4.0.12 on Debian Jessie and everything is working
fine. Now I want to configure NTLM authentication with single sign-on for
every user on my network. I joined my server to the Windows domain with
realmd, and that was successful.

What is the recommended method for integrating Squid 4 with Active
Directory?
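One commonly suggested approach for single sign-on against Active
Directory is Negotiate/Kerberos rather than NTLM. A rough squid.conf
sketch, assuming a keytab has already been created for the proxy (the
principal, helper path and child counts below are placeholders, not a
verified configuration):

auth_param negotiate program /usr/lib/squid/negotiate_kerberos_auth -s HTTP/proxy.example.com@EXAMPLE.COM
auth_param negotiate children 20 startup=0 idle=1
acl authenticated proxy_auth REQUIRED
http_access allow authenticated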

Regards,

Michel Peterson


[squid-users] squid stops working

2016-07-21 Thread Michel Peterson
Hi friends,


The Squid (4.0.12) proxy that I have running on Debian Jessie stops
accepting new requests after being online for a while. Before it stops, it
records this message in cache.log:

2016/07/19 09:45:20 kid1| assertion failed: client_side_reply.cc:2163:
"reqofs <= HTTP_REQBUF_SZ || flags.headersSent"

I've compiled from source with options:

configure options:  '--prefix=/usr' '--infodir=/share/info'
'--enable-auth-ntlm=fake,SMB_LM'
'--enable-auth-basic=fake,getpwnam,LDAP,PAM,SMB'
'--enable-external-acl-helpers=file_userip,kerberos_ldap_group,LDAP_group,session,unix_group,wbinfo_group'
'--enable-auth-negotiate=kerberos,wrapper' '--localstatedir=/var'
'--datadir=/usr/share/squid4' '--with-swapdir=/var/spool/squid4'
'--with-default-user=proxy' '--enable-url-rewrite-helpers=fake'
'--mandir=/usr/share/man' '--srcdir=.' '--with-logdir=/var/log/squid4'
'--with-pidfile=/var/run/squid4.pid' '--with-filedescriptors=65536'
'--enable-zph-qos' '--enable-translation' '--enable-async-io'
'--enable-useragent-log' '--enable-snmp' '--with-openssl'
'--enable-cache-digests' '--enable-follow-x-forwarded-for'
'--enable-storeio=aufs,rock' '--enable-removal-policies=heap,lru'
'--with-maxfd=16384' '--enable-poll' '--disable-ident-lookups'
'--enable-truncate' '--exec-prefix=/usr' '--bindir=/usr/sbin'
'--libexecdir=/lib/squid4' '--with-large-files'
'--with-coss-membuf-size=2097152' '--enable-linux-netfilter'
'--enable-ssl' '--enable-ssl-crtd' 'CFLAGS=-DNUMTHREADS=60
-march=nocona -O3 -pipe -fomit-frame-pointer -funroll-loops
-ffast-math -fno-exceptions'


I need to solve this. Please.

Regards


[squid-users] Fwd: Squid 3.5.2 Compile Error

2015-03-06 Thread Michel Peterson
Hi friends,

I'm trying to compile Squid 3.5.2 on Debian Wheezy and I get the
following error after running make all:

Making all in compat
make[1]: Entrando no diretório `/root/squid-3.5.2/compat'
depbase=`echo assert.lo | sed 's|[^/]*$|.deps/&|;s|\.lo$||'`;\
/bin/bash ../libtool  --tag=CXX   --mode=compile g++
-DHAVE_CONFIG_H   -I.. -I../include -I../lib -I../src -I../include
-Wall -Wpointer-arith -Wwrite-strings -Wcomments -Wshadow -Werror
-pipe -D_REENTRANT -m32 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64  -g
-O2 -march=native -std=c++11 -MT assert.lo -MD -MP -MF $depbase.Tpo -c
-o assert.lo assert.cc &&\
mv -f $depbase.Tpo $depbase.Plo
libtool: compile:  g++ -DHAVE_CONFIG_H -I.. -I../include -I../lib
-I../src -I../include -Wall -Wpointer-arith -Wwrite-strings -Wcomments
-Wshadow -Werror -pipe -D_REENTRANT -m32 -D_LARGEFILE_SOURCE
-D_FILE_OFFSET_BITS=64 -g -O2 -march=native -std=c++11 -MT assert.lo
-MD -MP -MF .deps/assert.Tpo -c assert.cc  -fPIC -DPIC -o
.libs/assert.o
In file included from ../include/squid.h:43:0,
 from assert.cc:9:
../compat/compat.h:49:57: error: operator '' has no right operand
make[1]: ** [assert.lo] Erro 1
make[1]: Saindo do diretório `/root/squid-3.5.2/compat'
make: ** [all-recursive] Erro 1



Help me plz.


[squid-users] Common log file format conversion??

2011-04-28 Thread michel

Hi List

I recently installed the Mysar utility to generate reports from Squid
logs, but its importer.php script only imports log files in Squid's native
format into the database.


I changed my Squid configuration to write logs in native format; from that
point on there are no problems, and Mysar generates my statistics
perfectly.


So where is the problem?

Well, I still have several old logs in common format that I would like to
convert and import into my database.

Is there a program or script that can convert several common-format log
files to native format, and in turn import the converted logs into MySQL?


Thanks
--
Webmail, servicio de correo electronico
Casa de las Americas - La Habana, Cuba.




[squid-users] Squid - Squidguard ssl pages error code 404

2010-03-01 Thread Michel Bulgado

Hello

This may sound a little off-topic, but I posted a message on the
squidGuard list and have not received a reply; it is likely that the list
is not working or no longer has support.


My problem is this:

I'm trying to implement Squid with squidGuard 1.4 for blacklist
management, but when I log in to a legitimate site such as login.yahoo.com
it returns me a 404 page not found error code. This only happens to me
with SSL pages.


I'm using squid-2.6.STABLE21-3.el5 on CentOS, with access authenticated
against Active Directory, and those users are listed in my squidGuard
configuration.


I made another test: removing the authentication and granting access by
IP address instead, squidGuard works perfectly. I guess Squid is not
passing the user to squidGuard, or perhaps squidGuard cannot handle SSL
requests?


This is my config and my logs:
squid.conf

auth_param basic program /usr/lib/squid/ldap_auth -v 3 -b
"ou=HOME,dc=home,dc=cu" -D "cn=conector,cn=Users,dc=home,dc=cu" -w
 -f
"(&(objectClass=user)(!(objectClass=computer))(sAMAccountName=%s))" -H
ldap://ads.home.cu


auth_param basic children 5
auth_param basic realm Server Proxy
auth_param basic credentialsttl 30 minute

redirect_program /usr/bin/squidGuard -c /etc/squid/squidGuard.conf
redirect_children 8
redirector_bypass on

acl loginmail proxy_auth REQUIRED

http_access allow loginmail


squidGuard.conf
---

dbhome /var/lib/squidguard
logdir /var/log/squid

src webmailusers {
userlist usersmail
}

dest mail {
domainlist db/mail/domains
urllist db/mail/urls
log accessdenied
}

dest webmail {
domainlist db/webmail/domains
urllist db/webmail/urls
log accessdenied
}

dest onlinegames {
domainlist db/onlinegames/domains
urllist db/onlinegames/urls
log accessdenied
}


dest porn {
domainlist db/porn/domains
urllist db/porn/urls
log accessdenied
}

acl {

webmailusers {
pass mail webmail !porn !onlinegames all
}

default {
pass !mail !webmail !porn !onlinegames all
redirect http://www.home.cu/block.html
}


}

10.71.53.27 - - [01/Mar/2010:16:53:04 -0500] CONNECT 
login.yahoo.com:443 HTTP/1.1 404 0 TCP_MISS:DIRECT
10.71.53.27 - - [01/Mar/2010:16:53:05 -0500] CONNECT 
login.yahoo.com:443 HTTP/1.1 404 0 TCP_MISS:DIRECT
10.71.53.27 - - [01/Mar/2010:16:53:09 -0500] CONNECT 
login.yahoo.com:443 HTTP/1.1 404 0 TCP_MISS:DIRECT



Thanks

Michel




Re: [squid-users] Squid - Squidguard ssl pages error code 404

2010-03-01 Thread michel

Henrik Nordström hen...@henriknordstrom.net escribió:


mån 2010-03-01 klockan 17:29 -0500 skrev Michel Bulgado:


 I'm trying to implement Squid with squidGuard 1.4 for blacklist
management, but when I log in to a legitimate site such as login.yahoo.com
it returns me a 404 page not found error code. This only happens to me
with SSL pages.


Blocking/filtering SSL pages with SquidGuard does not work very well. You
need to use Squid acls for that, or wrap up SquidGuard as an external
acl instead of a url rewriter.

The reason is that
a) Most browsers will not accept a browser redirect in response to
CONNECT.

b) You can't rewrite a CONNECT request into an http:// request.

c) Most browsers will be quite upset if you rewrite the CONNECT to a
different host than requested.

meaning that there is not much you actually can do with CONNECT requests
in SquidGuard that won't make browsers upset.
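As a rough illustration of the first option (plain Squid acls instead of
the rewriter), something along these lines can be used to deny CONNECT
requests to listed domains; the file path is only a placeholder:

acl CONNECT method CONNECT
acl blocked_ssl dstdomain "/etc/squid/blocked_ssl_domains"
http_access deny CONNECT blocked_ssl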

Regards
Henrik




Hello Henrik

Thanks for answering so quickly. You tell me not to use squidGuard as a
url rewriter but as an external acl; are there a few examples showing how
to do it?


Otherwise I have little alternative but to look for another
blacklist-based option than squidGuard, one that supports SSL connections.


What most strikes me is that it works if, in squidGuard, I use specific IP
addresses that have access to the webmail pages over HTTPS to log in,
instead of using usernames.

--
Webmail, servicio de correo electronico
Casa de las Americas - La Habana, Cuba.



Re: [squid-users] Squid - Squidguard ssl pages error code 404

2010-03-01 Thread michel

Marcus Kool marcus.k...@urlfilterdb.com escribió:


Michel,

Proxies are the URL filter circumventors, so if you like
to use a URL filter, you should always block proxies.

Henrik stated in a separate response that some browsers have
problems with HTTP 302 redirect responses.  I have no access
to all types of web browsers, and Microsoft Internet Explorer
has indeed problems (displays a vague error) and Firefox 3.0.6
has no problem.

You may also want to look at ufdbGuard, a free alternative
for squidGuard which has more features like
Safesearch enforcement, HTTPS connection verification for
proper SSL certificates and use of FQDNs.

Marcus


Hi Marcus

I just use Firefox for my tests.

This alternative you show me is very interesting and of course it is
totally free, but its database does not convince me; I wondered whether it
is possible to use the same database that I use with squidGuard?


But from what I read, that seems not to work. In that case, could you send
it to me or put it on a web site or FTP?


The site mentions that you have to pay for it, but if I sign up I suppose
I can get a portion of it.


Does someone have the full version, even if it is outdated?

Thanks

Michel


--
Webmail, servicio de correo electronico
Casa de las Americas - La Habana, Cuba.



[squid-users] Blacklist in Squid

2009-11-05 Thread michel

Hello list

A query

Is there some way, program, or script, other than the squidGuard
redirector, that allows the blacklists squidGuard uses to be used directly
from Squid?


The goal is to keep the acls I already have in Squid without having to
create them from scratch in squidGuard.


Thanks

Michel


This message was sent using IMP, the Internet Messaging Program.




[squid-users] External Script for checks

2009-10-02 Thread michel

Hello


I use FreeRADIUS to connect my remote users via dial-in; each time my
users connect, they are assigned a random IP.



My question:

I would like to make a script so that my Squid server checks in MySQL
whether the user is connected, compares against a file to see whether the
user exists in that list, takes the IP address that FreeRADIUS assigned
(stored in MySQL), and then allows Internet access through Squid.


Currently my users must authenticate to get group-based access, because
some have permission to access certain sites while the rest do not.



Is there any page where this is documented?

Suggestions?

Greetings
Michel
--
Webmail, servicio de correo electronico
Casa de las Americas - La Habana, Cuba.




[squid-users] External Script for checks

2009-10-01 Thread michel


Hello


I use FreeRADIUS to connect my remote users via dial-in; each time my
users connect, they are assigned a random IP.



My question:

I would like to make a script so that my Squid server checks in MySQL
whether the user is connected, compares against a file to see whether the
user exists in that list, takes the IP address that FreeRADIUS assigned
(stored in MySQL), and then allows Internet access through Squid.


Currently my users must authenticate to get group-based access, because
some have permission to access certain sites while the rest do not.



Is there any page where this is documented?

Suggestions?

Greetings
Michel
--
Webmail, servicio de correo electronico
Casa de las Americas - La Habana, Cuba.




[squid-users] Squid ldap failover

2009-08-18 Thread michel

Hello

Using squid 2.6.STABLE21 Version.

I authenticate my users against Windows Active Directory. I need to add
another server to cover possible technical failures: if there is no
response from the primary domain controller, then a secondary one should
be consulted.

Is this possible?

Michel

--
Webmail, servicio de correo electronico
Casa de las Americas - La Habana, Cuba.



Re: [squid-users] Script Check

2009-08-10 Thread michel

Henrik Nordstrom hen...@henriknordstrom.net ha escrito:


fre 2009-08-07 klockan 21:34 -0400 skrev mic...@casa.co.cu:


At work I use Squid 2.6. I have a group of users who connect by dial-up
to a NAS, with a FreeRADIUS server to authenticate them. Each time they
log in, my users are assigned a dynamic IP address, making it impossible
to create permissions by IP address without authentication.


Ok.


I want to create a script so that, when Squid gets a request from that
block of IP addresses, it reads the username and IP address from the
FreeRADIUS server (the radwho tool shows connected users plus their IP
addresses, or MySQL, from which the same information can be obtained).


The user= result interface of external acls is intended for exactly this
purpose.

What you need is a small script which reads IP addresses on stdin (one
at a time) and prints the following on stdout:

OK user=radiususername

if the user is authenticated via radius, or

ERR

if the user is not and should fall back on other authentication methods.

You can then plug this into Squid using external_acl_type, and bind an
acl to that using the external acl type. Remember to set ttl=nnn and
negative_ttl=nnn as suitable for your purpose.
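A minimal sketch of such a helper in shell, assuming the FreeRADIUS
accounting data is kept in MySQL in the usual radacct table (credentials,
table and column names may differ in your setup):

#!/bin/sh
# read one IP address per line from Squid and answer OK/ERR
while read ip; do
    user=$(mysql -N -s -u radius -pSECRET radius -e \
        "SELECT username FROM radacct \
         WHERE framedipaddress='$ip' AND acctstoptime IS NULL LIMIT 1")
    if [ -n "$user" ]; then
        echo "OK user=$user"
    else
        echo "ERR"
    fi
done

The helper answers one lookup per line, which is what Squid's default
(non-concurrent) external acl protocol expects.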

Regards
Henrik





Hello

Could this script be written in Perl?


Could I get some example to guide me?

Sorry for the inconvenience

Thanks

--
Webmail, servicio de correo electronico
Casa de las Americas - La Habana, Cuba.



Re: [squid-users] Script Check

2009-08-09 Thread michel


Hello

My FreeRADIUS server currently stores users in a file on disk, but if it
is better for what I want, I can configure it to store them in MySQL.


What I need is for someone to help me, explain and guide me on how I can achieve my goal.

Thanks

Michel


Adrian Chadd adr...@squid-cache.org ha escrito:


don't do that.

As someone who did this 10+ years ago, I suggest you do this:

* do some hackery to find out how your freeradius server stores the
currently logged in users. It may be in a mysql database, it may be
in a disk file, etc, etc
* have your redirector query -that- directly, rather than running
radwho. When I did this 10 years ago, the radius server kept a wtmp
style file with current logins which worked okish for a few dozen
users, then sucked for a few hundred users. I ended up replacing it
with a berkeley DB hash table to make searching for users faster.
* then in the helper, cache the IP results for a short period (say, 5
to 10 seconds) so frequent page accesses wouldn't result in a flurry
of requests to the backend
* keep the number of helpers low - you're doing it wrong if you need
more than 5 or 6 helpers doing this..
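Wired into squid.conf along those lines (short ttl caching, only a handful
of helpers), it could look roughly like this; the helper path and acl name
are placeholders:

external_acl_type radius_ip ttl=10 negative_ttl=10 children=5 %SRC /usr/local/bin/check_radius_ip
acl radius_users external radius_ip
http_access allow radius_users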



Adrian

2009/8/8  mic...@casa.co.cu:

Hello

At work I use Squid 2.6. I have a group of users who connect by dial-up
to a NAS, with a FreeRADIUS server to authenticate them. Each time they
log in, my users are assigned a dynamic IP address, making it impossible
to create permissions by IP address without authentication.

Right now, to assign levels of access to sites, users authenticate against
an Active Directory, but I want to change that.

I want to create a script so that, when Squid gets a request from that
block of IP addresses, it reads the username and IP address from the
FreeRADIUS server (the radwho tool shows connected users plus their IP
addresses, or MySQL, from which the same information can be obtained).

This can then be compared to a text file; if the user is listed, they get
access without authentication of any kind.

Is it possible to do this?

Sorry, my English is very poor.

Thanks

Michel





--
Webmail, servicio de correo electronico
Casa de las Americas - La Habana, Cuba.








--
Webmail, servicio de correo electronico
Casa de las Americas - La Habana, Cuba.



Re: [squid-users] Script Check

2009-08-08 Thread michel

Hi Kinkie

Could you give me a little hint or an idea of how this can be made possible?

Thanks

Michel


Kinkie gkin...@gmail.com ha escrito:


Yes.
It's maybe a bit tricky to do it right if you want reasonable
performance, but it's doable.

Ciao

On 8/8/09, mic...@casa.co.cu mic...@casa.co.cu wrote:

Hello

At work I use Squid 2.6. I have a group of users who connect by dial-up
to a NAS, with a FreeRADIUS server to authenticate them. Each time they
log in, my users are assigned a dynamic IP address, making it impossible
to create permissions by IP address without authentication.

Right now, to assign levels of access to sites, users authenticate against
an Active Directory, but I want to change that.

I want to create a script so that, when Squid gets a request from that
block of IP addresses, it reads the username and IP address from the
FreeRADIUS server (the radwho tool shows connected users plus their IP
addresses, or MySQL, from which the same information can be obtained).

This can then be compared to a text file; if the user is listed, they get
access without authentication of any kind.

Is it possible to do this?

Sorry, my English is very poor.

Thanks

Michel





--
Webmail, servicio de correo electronico
Casa de las Americas - La Habana, Cuba.





--
/kinkie





--
Webmail, servicio de correo electronico
Casa de las Americas - La Habana, Cuba.



[squid-users] Script Check

2009-08-07 Thread michel

Hello

At work I use Squid 2.6. I have a group of users who connect by dial-up
to a NAS, with a FreeRADIUS server to authenticate them. Each time they
log in, my users are assigned a dynamic IP address, making it impossible
to create permissions by IP address without authentication.

Right now, to assign levels of access to sites, users authenticate against
an Active Directory, but I want to change that.

I want to create a script so that, when Squid gets a request from that
block of IP addresses, it reads the username and IP address from the
FreeRADIUS server (the radwho tool shows connected users plus their IP
addresses, or MySQL, from which the same information can be obtained).

This can then be compared to a text file; if the user is listed, they get
access without authentication of any kind.

Is it possible to do this?

Sorry, my English is very poor.

Thanks

Michel





--
Webmail, servicio de correo electronico
Casa de las Americas - La Habana, Cuba.



[squid-users] Squid Access

2008-10-01 Thread Michel Venter
Hi

 

I would like to know how to allow a specific user in Squid (CentOS 5) to
access specific sites only.

 

Can anyone help?

 

Thanks

Mike

 



-
This e-mail is subjected to the disclaimer that can be viewed at:
* http://www.cut.ac.za/www/disclaimer/email_disclaimer
-


[squid-users] Squid3(Bridge)+Tproxy+Mikrotik - HELP ME

2008-09-22 Thread Michel Peterson


Hi Guys,

I am trying to configure a Squid proxy with tproxy support. My Squid is on
a machine in bridge mode. The structure of my network is below:

Clients - Squid Bridge (Tproxy) - Mikrotik Router

I've compiled my kernel (2.6.24) and iptables (1.4) with Tproxy support.
I'm using Squid Version 3.HEAD-20080917.

My routing and iptables rules:

ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
iptables -t mangle -N DIVERT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --tproxy-mark 0x1/0x1 --on-ip 189.89.180.253 --on-port 3128

I see packets hitting the rules, but nothing is displayed in the Squid log
and no object is cached.

Could someone help me with this problem?

Regards,

Michel Peterson





Re: [squid-users] size of cache_dir

2008-09-21 Thread Michel Peterson


Hi Jeff,
 
 According to the book Squid: The Definitive Guide:
  ... Because Squid uses a small amount of memory
for every cached response, there is a relationship between disk space and
memory requirements. As a rule of thumb, you need 32 MB of memory for each
GB of disk space. Thus, the system with 512 MB of RAM can support a 16-GB
disk cache. Your mileage may vary, of course. Memory requirements depend
on factors such as the mean object size, CPU architecture (32 - or
64-bit), the number of concurrent users, and particular features that you
use ... 
 
 
 Regards,
 
 Michel Peterson




[squid-users] Squid3(Bridge)+Tproxy+Mikrotik???

2008-09-20 Thread Michel Peterson


Hi Guys,

I am trying to configure a proxy squid with tproxy
support. My squid is in a machine in bridge. The structure of my network
is below:

Clients - Squid Bridge (Tproxy) - Mikrotik
Router

I've compiled my kernel (2.6.24) and iptables(1.4) with
Tproxy support. I'm using Squid Version 3.HEAD-20080917.

My
routing and iptables rules:

ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
iptables -t mangle -N DIVERT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --tproxy-mark 0x1/0x1 --on-ip 189.89.180.253 --on-port 3128


I see packets hitting the rules, but nothing is displayed in the Squid log
and no object is cached.

Could someone help me with this problem?

Regards,

Michel Peterson






Re: [squid-users] Squid requirements

2008-07-16 Thread Michel

 --- On Wed, 7/16/08, Adrian Chadd [EMAIL PROTECTED] wrote:

 From: Adrian Chadd [EMAIL PROTECTED]
 Subject: Re: [squid-users] Squid requirements
 To: Chris Robertson [EMAIL PROTECTED]
 Cc: Squid Users squid-users@squid-cache.org
 Date: Wednesday, July 16, 2008, 9:28 AM
 What we're really missing is a bunch of hardware
 x, config y, testing
 z, results a, b, c. TMF used to have some stuff up
 for older hardware
 but there's just nothing recent to use as a measuring
 stick..


 The problem is that there's so much disparate technology out there.
 multi-core cpus, all kinds of different memory, all kinds of different disk
 technologies,  different filesystems,  different OS, different kernels, and 
 on and
 on.  It's hard to get useful measuring sticks.


Shoot me, but as ever, faster is more expensive. So if you can't afford a
Lamborghini but like what it does, then buy something else that comes close
and fits your budget. Hammer-speed and cheap does not exist; reasonable
speed at reasonable cost does exist. Hammer-speed at low cost does not
exist, unless you jump off the cliff, which might result in sudden death
... that is free and is fufufast (sudden=now).

 I still think it's a useful pursuit.  But I think that the reasons above make
 people less inclined to do it.


to do what? caching? or proxying? or nothing?
while(my_input=0); (do='nothing');


 spec.org tries to level the field, if someone concocted a level field and 
 made it
 easy for people to do, then we'd see more results.


The problem is most people look for easy=lazy and lazy=cheap, but
unfortunately that equation does not work either.

Also, no valuable hardware comparison exists, since you need to do it
yourself: you need to look at (clients, uplink, machine, bandwidth_for_each,
desired_performance, budget) and finally look at your cache, and in the end
it is what_you_get_is_what_you_get_(for_your_money) ... so my friend, in the
end it does not matter what they say; buy what you _CAN_ buy and get lucky
with it :)


michel





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.




Re: [squid-users] H/W requirement for Squid to run in bigger scene like ISP

2008-07-15 Thread Michel
 Hi,

 One thing to keep in mind is that in my experience, it makes sense to
 not only get fast disks, but put as much RAM in the box as you can
 afford. Now *don't* give this all to squid via the cache_mem config;
 let the OS use the spare memory for caching disk reads. This will speed
 things up.

 Additionally, don't RAID your disks beyond RAID 1, and only do that if
 you have to for reliability requirements. The more individual spindles
 attached to separate cache_dirs, the better. Amos is right that I/O
 trumps CPU here every time.

 When we swapped out older squid boxes that couldn't take more than 2GB
 of RAM, or more than one disk, and put in 64-bit boxen with 32GB and 3
 cache-dirs (6 drives, paired into three RAID1 devices), we saw things
 improve dramatically despite the fact that the CPUs were actually
 slower. We went from topping out at 5K queries per minute to being
 able to handle ~20K/minute without breaking a sweat. Pretty dramatic
 IMHO.
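As a rough squid.conf illustration of that layout (three cache_dirs on
separate RAID1 devices, with a modest cache_mem so the OS keeps the rest of
the RAM for the page cache); the sizes and mount points are only
placeholders:

cache_mem 2048 MB
cache_dir aufs /cache1 300000 64 256
cache_dir aufs /cache2 300000 64 256
cache_dir aufs /cache3 300000 64 256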

 Hope this helps,

 -Chris

 On Jul 14, 2008, at 10:04 AM, Amos Jeffries wrote:

 Anna Jonna Armannsdottir wrote:
 On mán, 2008-07-14 at 13:01 +0200, Angelo Hongens wrote:
 All the servers I usually buy have either one or two quad core
 cpu's,
 so it's more the question: will 8 cores perform better than 4?

 If not, I would be wiser to buy a single Xeon X5460 or so, instead
 of
 2 lower clocked cpu's, right?
 In that case we are fine tuning the CPU power and if there are 8
 cores in a Squid server, I would think that at least half of them
 would
 produce idle heat: An extra load for the cooling system. As You point
 out, the CPU speed is probably important for the part of Squid that
 does
 not use threading or separate process. But the real bottlenecks are
 in the I/O, both RAM and DISK. So if I was buying HW now, I would
 mostly be looking at I/O speed and very little at
 CPU speed. SCSI disks with large buffers are preferable, and if
 SCSI is not a viable choice, use the fastest SATA disks you can
 find - Western
 Digital Raptor used to be the fastest SATA disk, don't know what is
 the
 fastest SATA disk now.

 This whole issue comes up every few weeks.

 The last consensus reached was dual-core on a squid dedicated
 machine. One for squid, one for everything else. With a few GB of
 RAM and fast SATA drives. aufs for Linux. diskd for BSD variants.
 With many spindles preferred over large disk space (2x 100GB instead
 of 1x 200GB).

 The old rule-of-thumb memory usage mentioned earlier (10MB/GB +
 something for 64-bits) still holds true. The more available the
 larger the in-memory cache can be, and that is still where squid
 gets its best cache speeds on general web traffic.

 Exact tunings are budget dependent.


And the whole issue is understated again and again; at least you guys
already admit dual-cores ...

I really do not understand the resistance. There *_IS_NO_* doubt that an
8-core machine is faster than a two-core, and whatever software you put on
it, it *_IS_* faster, unless it is idle, which is perhaps the real problem:
are your machines so idle that you don't see it?

Of course the budget IS a point (8-core is expensive), but an AM2 quad-core
is absolutely cheap today, so there is again NO doubt about what kind of CPU
to buy; Squid especially takes advantage of quads, particularly AMD quads,
which I believe is because of their NUMA architecture.

In order to show it for once, I attach 3 images comparing load averages,
and please, before flaming this up, try to understand what Unix load
average really means. I graph it directly against the available cores, so
when spikes are high and continuously near or above the CPU count the
machine can be seen as busy, and when spikes are low the machine is free to
breathe (less I/O wait state) and, of course, to serve.

So the lower your load average, the better the response time you get, and
as I have always said: you can feel it, and that is what the connected
clients describe as fast.

So you can see an AM2 X2, an AM2 X4 and a dual Opteron dual-core machine.
Both AM2 boxes carry a constant 6-8 MB of through-going traffic, and the
Opteron serves 16 MB. All three have 3 SCSI-U320 250 GB disks on ZFS plus a
16 GB disk for the OS; both AM2 boxes have 4 GB and the Opteron 16 GB of
RAM. Each of them has 5 Ethernet interfaces with 4 internal subnets and one
external, a transparent proxy running three Squid instances with diskd, a
firewall, and some scripts collecting stats and making rrdtool images. The
AM2s serve about 450 and the Opteron about 800 clients at peak hours, with
an average download limit of 250 kbit/s each. And not to forget, this is
FreeBSD 7-STABLE amd64.

So my suggestion is: get yourself as many cores as you can and enjoy.

michel

I will send two more messages with the next images attached; this one is
the 2x dual-core Opteron.





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.
attachment: wip_load

Re: [squid-users] H/W requirement for Squid to run in bigger scene like ISP

2008-07-15 Thread Michel
here the second image, AM2 X2

michel





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.
attachment: wip_load-day-X2.png

Re: [squid-users] H/W requirement for Squid to run in bigger scene like ISP

2008-07-15 Thread Michel
third, AM2 Quad (phenom)

michel





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.
attachment: wip_load-day-X4.png

Re: [squid-users] H/W requirement for Squid to run in bigger scene like ISP

2008-07-15 Thread Michel

 third, AM2 Quad (phenom)


I labeled the two last msgs wrong, X2 = X4 and X4 = X2, the image name is 
correct


 michel




 
 Tecnologia Internet Matik http://info.matik.com.br
 Sistemas Wireless para o Provedor Banda Larga
 Hospedagem e Email personalizado - e claro, no Brasil.
 


michel





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.




Re: [squid-users] wccp and Cisco router identifier

2008-07-14 Thread Michel

 I am in the process of installing a transparent squid cache using wccp
 using a Cisco Router C2600 (IOS Version 12.2(46a))

 Everything is working fine except there is something that I don't know
 how to change.

 The Cisco router identifier is the address that is used for GRE on the
 router. Our router has two FastEthernet interfaces, each configured
 with an IP, and the router chose one of the IPs at random as the Cisco
 router identifier. How can that be changed? (i.e. how can I force
 the Cisco router identifier to be a specific IP)

 I searched in this list and somebody said to use a loopback interface on
 the Cisco, that it would be much more predictable as the wccpv2 router-id
 is then always the loopback address.
 How is this done?



You can use the interface loopback command to create one, or go into
interface configuration and use the loopback subcommand.

Anyway, this might not be the right way; it may be better to configure the
wccp web-cache service in config mode and, on THE interface you want to
use, issue the ip wccp redirect command so the reply comes from that
interface's IP address.

I do not remember the details, since it has been a long time since I did
it, but you will figure it out, right? :)
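From memory, the IOS side would look roughly like the following (addresses
and interface names are placeholders). WCCP prefers a loopback address as
the router identifier, so creating one makes the identifier predictable:

interface Loopback0
 ip address 192.0.2.1 255.255.255.255
!
ip wccp web-cache
!
interface FastEthernet0/0
 ip wccp web-cache redirect in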



michel





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.




Re: [squid-users] Recommend for hardware configurations

2008-07-07 Thread Michel

 Well, I based my argument from the 10 instances of reverse proxies
 I'm running. It has 266,268,230 objects and 3.7 TB of space.  CPU
 usage is always around 0.2 according to ganglia.  So unless you have
 some other statistics to prove CPU is that important, I'm sticking with my
 argument that disk and RAM are way more important than CPU.


OK, a reverse proxy does not do very much, so sure, it depends on what you
do with the machine.


 mike

 At 03:41 AM 7/6/2008, Michel wrote:

  The cpu doesn't do any IO, it's WAITING for the disk most of the
  time. If you want fast squid performance, CPU speed/count is
  irrelevant; get more disks and ram.  When I mean more disk, I mean
  more spindles.  eg: 2x 100GB will is better than a 200GB disk.
 


Well well, get prepared ... take your CPU out and then you'll see who is
waiting forever :)

Even if I/O wait is an issue, it is, or rather WAS, one on old giant-lock
systems, where the CPU was waiting until it got the lock on a busy thread,
because there was only ONE CPU, and even on multi-CPU systems only one core
at a time was bound to the kernel.

To get around this issue, the good old POSIX aio_* calls were used in order
not to wait for a new lock, which I believe is Squid's aufs cache_dir
model. That model is still very good, and even better on modern SMP
machines; even with Squid's not-SMP-optimized code you really can drive
disks to their physical limits - but that is not all.

Modern SMP works around the global giant lock; the kernel is no longer
limited to one core at a time.

SMP systems work with spin locks (Linux) and sleep locks (FreeBSD), where
the Linux way focuses on thread synchronization, which is outperformed by
the sleep-lock mechanism. Spin locks certainly still waste CPU while
spinning, which sleep locks do not; the CPU is free to do other work. This
was a kind of benefit for Linux over the last couple of years while FreeBSD
was deep in developing its new threading model, which is now on top I
think, especially in shared-memory environments.

Basically it is not important whether you use one or ten disks; that you
should consider later as fine tuning. The threading model works the same,
for one or two disks, or for 2 or 32 gigs of memory - so you certainly do
NOT get around your I/O wait with more memory or more disks when the CPU(s)
cannot handle it, waiting for locks as you say ...

So IMO your statement is not so very true anymore, with a modern SMP OS on
modern SMP hardware of course.

michel





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.













michel





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.




Re: [squid-users] Recommend for hardware configurations

2008-07-06 Thread Michel

 The cpu doesn't do any IO, it's WAITING for the disk most of the
 time. If you want fast squid performance, CPU speed/count is
 irrelevant; get more disks and ram.  When I mean more disk, I mean
 more spindles.  eg: 2x 100GB will is better than a 200GB disk.



Well well, get prepared ... take your CPU out and then you'll see who is
waiting forever :)

Even if I/O wait is an issue, it is, or rather WAS, one on old giant-lock
systems, where the CPU was waiting until it got the lock on a busy thread,
because there was only ONE CPU, and even on multi-CPU systems only one core
at a time was bound to the kernel.

To get around this issue, the good old POSIX aio_* calls were used in order
not to wait for a new lock, which I believe is Squid's aufs cache_dir
model. That model is still very good, and even better on modern SMP
machines; even with Squid's not-SMP-optimized code you really can drive
disks to their physical limits - but that is not all.

Modern SMP works around the global giant lock; the kernel is no longer
limited to one core at a time.

SMP systems work with spin locks (Linux) and sleep locks (FreeBSD), where
the Linux way focuses on thread synchronization, which is outperformed by
the sleep-lock mechanism. Spin locks certainly still waste CPU while
spinning, which sleep locks do not; the CPU is free to do other work. This
was a kind of benefit for Linux over the last couple of years while FreeBSD
was deep in developing its new threading model, which is now on top I
think, especially in shared-memory environments.

Basically it is not important whether you use one or ten disks; that you
should consider later as fine tuning. The threading model works the same,
for one or two disks, or for 2 or 32 gigs of memory - so you certainly do
NOT get around your I/O wait with more memory or more disks when the CPU(s)
cannot handle it, waiting for locks as you say ...

So IMO your statement is not so very true anymore, with a modern SMP OS on
modern SMP hardware of course.

michel





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.




Re: [squid-users] Recommend for hardware configurations

2008-07-06 Thread Michel

 On lör, 2008-07-05 at 12:44 -0300, Michel wrote:

 I am not understanding why you keep suggesting single core as preferred cpu

 Did I? Not what I can tell.

 I said Squid uses only one core.


:) good answer ... but often it does not matter what we say but what is
being understood; what I meant is that it comes across as if you are
suggesting single-core computers


 even if squid's core is actually not multi-thread capable, a faster cpu is
 better - there are also other things running on a machine, so an smp
 machine is always a benefit to overall performance

 Both yes and no. For an application like Squid you will find that nearly
 all OSes get bound to a single core running both networking and the
 application, leaving the other cores to run various tiny other stuff..


Nope, not at all. On Linux's spin-lock model it might perhaps be so, but I
do not know; on FreeBSD you can watch the squid process and its threads,
whether aufs or diskd related, and see that they are handled by all CPUs
all the time.

35867 squid4  -19  3921M  3868M kqread 3 200:28  0.00% squid0
 1481 squid4  -19   601M   581M kqread 0  86:03  0.00% squid1
 1482 squid4  -19   598M   579M kqread 0  84:49  0.00% squid2
 1495 squid   -4  -19  8300K  1376K msgrcv 1  20:19  0.00% diskd-daemon
 1496 squid   -4  -19  8300K  1372K msgrcv 3  20:11  0.00% diskd-daemon
 1497 squid   -4  -19  8300K  1324K msgrcv 3   5:42  0.00% diskd-daemon
 1498 squid   -4  -19  8300K  1224K msgrcv 2   5:31  0.00% diskd-daemon

35867 squid4  -19  3921M  3868M kqread 1 200:28  0.00% squid0
 1481 squid4  -19   601M   581M kqread 1  86:03  0.00% squid1
 1482 squid4  -19   598M   579M kqread 1  84:49  0.00% squid2
 1495 squid   -4  -19  8300K  1376K msgrcv 0  20:19  0.00% diskd-daemon
 1496 squid   -4  -19  8300K  1372K msgrcv 0  20:11  0.00% diskd-daemon
 1497 squid   -4  -19  8300K  1324K msgrcv 2   5:42  0.00% diskd-daemon
 1498 squid   -4  -19  8300K  1224K msgrcv 2   5:31  0.00% diskd-daemon

35867 squid4  -19  3921M  3868M kqread 1 200:29  0.00% squid0
 1481 squid4  -19   601M   581M kqread 2  86:03  0.00% squid1
 1482 squid4  -19   598M   579M kqread 3  84:50  0.00% squid2
 1495 squid   -4  -19  8300K  1376K msgrcv 1  20:19  0.00% diskd-daemon
 1496 squid   -4  -19  8300K  1372K msgrcv 1  20:11  0.00% diskd-daemon
 1497 squid   -4  -19  8300K  1324K msgrcv 2   5:42  0.00% diskd-daemon
 1498 squid   -4  -19  8300K  1224K msgrcv 2   5:31  0.00% diskd-daemon

Three top snapshots in 3 different seconds; the 8th column shows which CPU
each process runs on. Observing the threads is even more fun.


 Why I recommend dual core instead of quad core is simply because you get
 a faster core speed in dual core than quad core for the same price (and
 often availability as well..) which will directly benefit Squid in high
 performance.


I understood you recommend single core ... not dual

 Yes, Squid quite easily gets CPU bound, and is then limited to the core
 speed of your CPU, and the faster the core speed is the better in that
 situation. Selecting a slower core speed to fit more cores hurts
 performance for Squid when the server is mainly for Squid.


I am not so sure if the core speed does matter so much as long as there IS CPU 
left
... then there is CPU left for any other work...




 You are welcome to give numbers proving that for Squid a 4 core system
 outperforms a 2 core system with the exact same configuration in all
 other aspects. Don't forget to include price in the matrix..

 The most interesting test configurations is

 - no disk cache
 - single drive for disk cache
 - 4 drives for disk cache

 Until I see any numbers indicating quad core gives a significant
 increase outperforming what the same price configuration using dual core
 I will continue propagating that quad core is not beneficial to Squid.


Two or three years ago I said that the next year there would be no single
cores left to buy and that everyone would be running at least dual-core if
not quad, and I was shot down by almost all the FreeBSD 4.x and DragonFly
lovers, or should I say by people who didn't see where the modern threading
model was going and were hanging on to the global giant lock because at
*THAT* time network and disk performance was still better.

Then, to be honest, I do not believe that you will ever be convinced by any
test *I* post here :), so do it yourself and draw your own conclusions ...
The test is easy: get yourself an AM2 motherboard, an X2 and an X4, throw a
fixed rate of HTTP requests over a certain time at each CPU, monitor CPU
time and disk I/O (on FreeBSD amd64 7-STABLE), and compare. I say: show me
that the X2 is losing and then I will get myself a Linux box and shut my
mouth :)


 Similarly for dual core vs single core, but it's not as clear cut as
 there is not a big per core performance difference between single and
 dual core compared to prices..

As I said, soon there will be no single cores.

Re: [squid-users] Recommend for hardware configurations

2008-07-05 Thread Michel

 On tor, 2008-07-03 at 12:04 +0800, Roy M. wrote:

 We are planning to replace this testing server with two or three
 cheaper 1U servers (sort of redundancy!)

 Intel Dual Core or Quad Core CPU x1 (no SMP)

 Squid uses only one core, so rather Dual core than Quad...


I do not understand why you keep suggesting single core as the preferred CPU.

Even if squid's core is actually not multi-thread capable, a faster CPU is
better - there are also other things running on a machine, so an SMP
machine is always a benefit to overall performance.

A modern OS should also give Squid's aufs threading benefits (I am not
totally sure about your design here), but at least diskd, when running
several diskd processes, benefits from multicore CPUs - and a lot. If you
do not believe it, set up squid/diskd on an 8-core machine with 1|2|4|8 or
more diskds, compare it with your single-core-CPU thing and measure it; in
fact you do not even need to measure it, you can see it and smell it ...

And the bottom line is: more power, more performance. There is no way a
single core runs faster than a dual or quad core on a modern OS; it does
not even get close.



michel





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.




Re: [squid-users] Recommend for hardware configurations

2008-07-05 Thread Michel

 Squid is IO and memory bounded, not cpu bounded. Use the CPU money to
 buy more RAM/disks


guess who is doing the IO ... :)

and get yourself a pricelist, the diff between X2 and phenom is irrelevant and
whatever, hardware is cheap





michel





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.




Re: [squid-users] bypass proxy for local addresses

2008-06-30 Thread Michel

 On 29.06.08 10:07, Michel wrote:
 in order not to bother with client configurations and browser problems, a
 good solution (because it is support-free) is a transparent proxy; you
 then configure your firewall to skip the forwarding rules for the
 addresses of your choice

 However, since interception of connections causes many troubles, it's much
 better to configure WPAD properly.


Well, I do not know about such problems; maybe you should analyse each of
them and configure things properly. In my experience most common
interception problems are caused by wrong network settings or by ping-pong
setups such as a router sending traffic back or a gateway forwarding to an
external proxy.




michel



Re: [squid-users] bypass proxy for local addresses

2008-06-29 Thread Michel

 On 27.06.08 07:40, Shaine wrote:
 For instance, if squid runs on port 8080, when a specific url comes into
 squid via port 8080, before it reaches port 8080 can't we redirect it to
 the web server that the url is looking for?
 From squid itself, can't we find a solution to have a proxy request
 bypassed?

 There is no such thing in the HTTP protocol that would tell the client to
 stop asking the proxy and go directly. Some browsers don't even know they
 are using a proxy (when using interception, often incorrectly called
 transparent proxy).

 You just have to configure the browser when to use the proxy and when not
 to use it: either manually, via interception, or by using the WPAD
 protocol, as others already mentioned (and I forgot in my last mail).

In order not to bother with client configurations and browser problems, a
good solution (because it is support-free) is a transparent proxy; you then
configure your firewall to skip the forwarding rules for the addresses of
your choice.
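On a Linux box using iptables REDIRECT, for example, the skip can be a
plain RETURN rule inserted before the redirect rule; the networks and port
below are only placeholders:

# let traffic to local/excluded destinations bypass the proxy
iptables -t nat -A PREROUTING -s 192.168.1.0/24 -d 192.168.0.0/16 -p tcp --dport 80 -j RETURN
# everything else on port 80 goes to Squid
iptables -t nat -A PREROUTING -s 192.168.1.0/24 -p tcp --dport 80 -j REDIRECT --to-port 3128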

michel

-- 


michel





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.




[squid-users] temp countermeasure against swap.state corruption

2008-06-17 Thread Michel (M)
hi

The swap.state corruption is a real problem. Since I have no time to learn
the Squid sources and find out what causes it, I wrote a workaround which
seems to prevent it from happening.

The swap.state corruption appears after Squid receives the first requests
while rebuilding swap.state. In the latest versions the -F flag does not
help anymore; some weeks ago (2.6-STABLE19) it still was a valid
workaround.

So what my startup script does is inject a firewall rule blocking any
incoming tcp:8080, read the log, and detect when swap.state is ready; then,
on single instances, it removes the initial firewall rule, or, in a
multi-instance scenario, it starts the process which receives the client
requests only when the swap dirs are ready.
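The idea, very roughly sketched (this is not the actual script; it assumes
an ipfw firewall, port 8080, and the usual "Finished rebuilding storage
from disk." line in cache.log; adjust rule numbers and paths locally):

#!/bin/sh
# block client requests while the store index is being rebuilt
ipfw add 100 deny tcp from any to me 8080
squid -f /usr/local/etc/squid/squid.conf
# wait until squid reports that the swap.state rebuild has finished
until grep -q "Finished rebuilding storage from disk" /usr/local/squid/logs/cache.log; do
    sleep 5
done
# now let clients in again
ipfw delete 100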

So if someone is interested, ask me in private, or, if I do not step on
someone's tail here, I can post it to the list.

michel
...





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.




Re: [squid-users] what can I help to make this swap.state corruption go away?

2008-06-16 Thread Michel (M)
Henrik Nordstrom disse na ultima mensagem:
 On lör, 2008-06-14 at 13:38 -0300, Michel (M) wrote:
 friends
 this swap.state corruption is getting worse and worse, and it is a
 pattern: swap.state.new ALWAYS stops at 72 bytes and then one swap.state
 after the other grows until the disk is full

 This problem has only shown up in your installation. In all my tests and
 for all the years I have been supporting Squid users you are the only
 one who have encountered this.

That is not exactly true.
People contact me directly to talk about the same problem they have.
Also, there are similar reports on Bugzilla and the mailing lists; and,
without pointing at anyone, maybe certain problems are misunderstood,
either by others or by me?

The fact is that I can give almost live demonstrations of it, but nobody
seems to care.


 It might be something simple that differs between your setup and
 everyone else's. Question is what..


Yeah, sure, good point.

I am coming to the conclusion that nobody but me uses caching, only
proxying, or else the caches are very small.


michel

...





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.




Re: [squid-users] what can I help to make this swap.state corruption go away?

2008-06-16 Thread Michel (M)

Adrian Chadd disse na ultima mensagem:
 On Mon, Jun 16, 2008, Michel (M) wrote:

 also there are similar reports on bugzilla and the mailing lists and
 without any pointing, may be certain problems are misunderstood or by
 others or by me?

 I believe you; but noone's said anything else on the list as far as I
 can tell.

 fact is that I can give almost life demonstrations of it but nobody
 seems
 to care

 Don't take inaction as noone seems to care.


Well, if it came over wrong then I am sorry. What is better, that nobody
does anything, that nobody asks anything? What I said is nothing personal,
and still less any judgement; it is simply a fact that this problem has
persisted for a long, long time.



 Heh. Even I'm playing with 200 gig caches at home; trying to build a
 24-drive
 600 gig cache; is that big enough?



It is not the cache size that matters; a big cache has a large number of
accesses and that is what matters, the cache size is only a secondary
factor.

Also, what apparently is not seen: I am running transparent gateways and
not standalone proxies, so ALL HTTP requests of the entire network
connected to them are hammered into Squid's listen port, independent of
user settings.

This problem is NOT being triggered by some test requests on a lab machine.


michel
...





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.




Re: [squid-users] what can I help to make this swap.state corruption go away?

2008-06-16 Thread Michel (M)

Adrian Chadd disse na ultima mensagem:
 And the fact that the bulk of current Squid development is being
 done disconnected from people like you who run Squid in production
 should be a telling sign as to -why- things aren't happening
 how you'd like.

I am not sure I understand this. What do you mean by being done
disconnected from people?



 The fact is this - the most active Squid contributors and developers
 are not employed by a company with large Squid deployments, so the
 issues being addressed by the current set of Squid developers aren't
 the same as those seen by companies such as yourself.

 Its an open source community project - the idea is that enough -users-
 (commercial or otherwise) help develop and improve the software as a
 whole.
 This hasn't really been happening with the Squid project for a number
 of years.

 So, patches gladly accepted, or wait until it tickles my or someone elses
 fancy (and I have some time, which won't be until late July), or talk to
 someone about a commercial relationship. Or hire an admin with a coding
 interest and see if they'll fix it and contribute back the fix.




Am I getting this right? I have to pay or nothing will be done, is that
what you are trying to say here?


Whatever ... I think that my problem, my explanations, and my willingness
to have possible solutions tested are perfectly good contributions, just as
the problem report itself already is, and as anything else is, since even
participating here and using Squid IS a contribution. So I would say this
lecture was kind of out of line here ...

On the other hand there is no reason to get sensitive here, because when
you run a public project then it IS public and you necessarily have to hear
what is wrong as well, not only swallow the credit. If you cannot stand
that, then you need to hear what developers usually say to users when
things get hot: echo "you are free to use another project" | sed -e
's/use/develop for/'

But thanks for your time in clearing this up.

michel

...





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.




[squid-users] what can I help to make this swap.state corruption go away?

2008-06-14 Thread Michel (M)
Friends,
this swap.state corruption is getting worse and worse, and it is a pattern:
swap.state.new ALWAYS stops at 72 bytes and then one swap.state after the
other grows until the disk is full.

What do you need to fix this? This is NOT only a diskd problem; it also
happens in exactly the same way with aufs.



[wco-cds.omegasul.com.br]/home/sup# ll /c/c1/sw*
-rw-r-  1 squid  inet  16189927776 14 Jun 10:32 /c/c1/swap.state
-rw-r-  1 squid  inet   72 14 Jun 10:36 /c/c1/swap.state.new
[wco-cds.omegasul.com.br]/home/sup# ll /c/c2/sw*
-rw-r-  1 squid  inet  16931209216 14 Jun 11:26 /c/c2/swap.state
-rw-r-  1 squid  inet   72 14 Jun 10:36 /c/c2/swap.state.new
[wco-cds.omegasul.com.br]/home/sup# ll /c/c3/sw*
-rw-r-  1 squid  inet  7618037904 14 Jun 11:31 /c/c3/swap.state
-rw-r-  1 squid  inet 8751888 14 Jun 11:31 /c/c3/swap.state.new
[wco-cds.omegasul.com.br]/home/sup# ll /c/c4/sw*
-rw-r-  1 squid  inet  59606928 14 Jun 10:41 /c/c4/swap.state
-rw-r-  1 squid  inet72 14 Jun 11:32 /c/c4/swap.state.new

michel

...





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.




Re: [squid-users] Searching squid logs for pornographic sites

2008-06-12 Thread Michel (M)

Ralf Hildebrandt disse na ultima mensagem:
 * Rob Asher [EMAIL PROTECTED]:
 Here's something similar to what you're already doing except comparing
 to a file of badwords to look for in the URL's and then emailing you
 the results.

 #!/bin/sh
 # filter.sh
 #
 cd /path/to/filterscript
 cat /var/log/squid/access.log | grep -if /path/to/filterscript/badwords
  > hits.out

 Useless use of cat:
 grep -if /path/to/filterscript/badwords /var/log/squid/access.log >
 hits.out

 /path/to/filterscript/wordfilter.gawk hits.out

 cat /path/to/filterscript/word-report | /bin/mail -s "URL Filter Report"
 [EMAIL PROTECTED]

 Useless use of cat:
 /bin/mail -s "URL Filter Report" [EMAIL PROTECTED] <
 /path/to/filterscript/word-report


Well, when you are optimizing, do it entirely :) - only one line:

grep arg file | $mail_cmd

Then, if you awk the log and pipe the output into the mail command, you do
not even need to create files and delete them later, so you can have it all
in one line.
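For example, with the paths, word list and address from the script quoted
above, the whole report can be a single pipeline:

grep -if /path/to/filterscript/badwords /var/log/squid/access.log | /bin/mail -s "URL Filter Report" [EMAIL PROTECTED]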


But in the end this entire search might be useless, since there is no
guarantee that www.mynewbabyisborn.org is not porn and that www.butt.com is
porn; and how do you catch www.m-y.d-i.c-k.a.t.microsoft.com ?
I abandoned all this keyword searching a long time ago, because even if it
worked the user could still use some fantasy proxy on port 42779 or a VPN
such as Hamachi, and then what do you do?


michel
...





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.




Re: [squid-users] Searching squid logs for pornographic sites

2008-06-12 Thread Michel (M)

Rob Asher disse na ultima mensagem:


 Michel (M) [EMAIL PROTECTED] 6/12/2008 6:59 AM 


 But in the end this entire search might be useless, since there is no
 guarantee that www.mynewbabyisborn.org is not porn and that www.butt.com
 is porn; and how do you catch www.m-y.d-i.c-k.a.t.microsoft.com ?
 I abandoned all this keyword searching a long time ago, because even if it
 worked the user could still use some fantasy proxy on port 42779 or a VPN
 such as Hamachi, and then what do you do?

 michel
 ...

 I agree too, but until there's a better way, we'll still use the keyword
 searching to find the blatant sites.  In our case, we're blocking egress
 traffic for everything except known services (our own proxies), so
 anonymous proxies and VPNs won't be able to connect UNLESS they can get to
 them through the proxies somehow.  Things like PHProxy and all the
 anonymizing sites make it tougher.  There are ways around anything, I
 know, but we adapt and keep plugging away.


Sure, if you need it, you need it ...
We offer the inverse approach to our customers: we block everything except
the sites the user allows, so the parents decide which sites the kids can
go to and everything else is blocked.

michel
...





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.




Re: [squid-users] How does weighted-round-robin work?

2008-06-12 Thread Michel (M)

Henrik Nordstrom disse na ultima mensagem:
 On tor, 2008-06-12 at 23:54 +0800, Roy M. wrote:
 So since some weight might be affected, without patching, can you
 suggest the weight for my cases which could or the best to avoid the
 problem?

 e.g. I want to assign 1:2:1 to my server A:B:C, what weight I can
 assign?

 No idea really. Better to patch your Squid with the patch which is
 already available..

 Squid-2:
 http://www.squid-cache.org/Versions/v2/HEAD/changesets/12213.patch


soo, it was not just me - the weight thing really wasn't working :)
is this in the daily 2.7 tarball or only in HEAD?

 But I guess it may work out as you indend if you order your squid.conf
 so the servers is listed in 1, 1, 2 order. Or maybe it's 2, 1, 1...


and the other order, 1-1-2, of course ...

in my experience the first peer takes it all as long as it can handle the load,
but that depends on the sibling|peer|no-cache settings - definitely more than
2:1 though

michel


...





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.




Re: [squid-users] Re: RE : [squid-users] performances ... again

2008-06-09 Thread Michel (M)

Ionel GARDAIS disse na ultima mensagem:
 Hi Dean,

 I had these directives :

  dns_testnames apple.com redhat.com internic.net nlanr.net
  append_domain .beicip.fr


 I commented out append_domain as this is not relevant to our
 configuration now.


both are pretty much unimportant
if you think they cause trouble, set this:

dns_testnames localhost

IMO this nasty directive really should disappear from squid.conf; it seems to be
from the 90's, when DNS servers were still dark magic for most people

append_domain does not matter because, when you have internet access, this
should be handled by your DNS, so leave it at the default ( none )


michel

...





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.




Re: [squid-users] Re: RE : [squid-users] performances ... again

2008-06-09 Thread Michel (M)

Amos Jeffries disse na ultima mensagem:
 Michel (M) wrote:
 Ionel GARDAIS disse na ultima mensagem:
 Hi Dean,

 I had these directives :

 dns_testnames apple.com redhat.com internic.net nlanr.net
 append_domain .beicip.fr
 I commented out append_domain as this is not relevant to our
 configuration now.


 both are kind of not important at all
 if you think they cause trouble set this

 dns_testnames localhost

 IMO this nasty var really should disappear from squid.conf, seems from
 the
 90's when dns server still where dark stuff for most

 Question for all users:
Is anyone actually _needing_ this to stay? Or can we indeed drop it?


thanks for your vote here :)



michel
...





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.




Re: [squid-users] 2.7 dns res problem (probably bug)

2008-06-02 Thread Michel (M)

Henrik Nordstrom disse na ultima mensagem:
 On sön, 2008-06-01 at 09:45 -0300, Michel (M) wrote:

 yes I understand the msgs but it is not the case, I run the exact same
 config on the exact same machine (only by stopping 2.7 and starting 2.6
 with the exact same configs) and 2.6 works but 2.7 does not

 And they are built with the same configure options?

 There is no difference between 2.6 and 2.7 how the internal DNS resolver
 accesses the DNS servers. Both uses udp_outgoing_address (or _incoming
 if _outgoing not set) as source address.


hmm, so then it gets awkward now

 Just a wild guess, but maybe your squid-2.6 is built with
 --disable-internal-dns making it fall back on the OS provided dns
 resolver?


yes, same configure options as follows and no dns tweaks

--enable-storeio=diskd,aufs,ufs,null --enable-async-io=90 \
--enable-removal-policies=heap,lru --enable-underscores
--disable-ident-lookups \
--disable-hostname-checks --enable-large-files
--disable-http-violations \
--enable-snmp --enable-truncate --enable-time-hack \
--enable-external-acl-helpers=session \
--disable-wccp --disable-wccpv2 --enable-follow-x-forwarded-for \
--disable-linux-tproxy --disable-linux-netfilter --disable-epoll


michel
...





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.




Re: [squid-users] 2.7 dns res problem (probably bug)

2008-06-01 Thread Michel (M)

Henrik Nordstrom disse na ultima mensagem:
 On lör, 2008-05-31 at 18:35 -0300, Michel (M) wrote:
 hi
 when running multiple squid instances squid cannot not resolv host names
 from remote dns servers. It is necessary running named on localhost to
 get
 clear

 the failing squid instances below run on 127.0.0.2|3 and are parents for
 another instance on 127.0.0.1 which is running as transparent proxy

 this problem does not exist with 2.6STABLE-NN and same settings on same
 machine

 there are no !default dns settings in squid.conf

 michel

 May 31 18:10:06 wco-luc-bu squid[24299]: comm_udp_sendto: FD 6,
 200.152.83.33, port 53: (49) Can't assign requested address
 May 31 18:10:06 wco-luc-bu squid[24299]: idnsSendQuery: FD 6: sendto:
 (49)
 Can't assign requested address

 Your OS prevents Squid from talking UDP to the DNS server address.

 May be due to an invalid udp_outgoing/incoming_address, or something
 else...


yes, I understand the messages, but that is not the case here: I run the exact same
config on the exact same machine (only stopping 2.7 and starting 2.6
with the exact same configs) and 2.6 works but 2.7 does not

have you noticed that both requesting squid processes try to open the same FD?


michel
...





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.




[squid-users] 2.7 dns res problem (probably bug)

2008-05-31 Thread Michel (M)

hi
when running multiple squid instances, squid cannot resolve host names
via remote dns servers. It is necessary to run named on localhost to get
around this

the failing squid instances below run on 127.0.0.2|3 and are parents for
another instance on 127.0.0.1 which is running as transparent proxy

this problem does not exist with 2.6STABLE-NN and same settings on same
machine

there are no !default dns settings in squid.conf

michel

May 31 18:10:06 wco-luc-bu squid[24299]: comm_udp_sendto: FD 6,
200.152.83.33, port 53: (49) Can't assign requested address
May 31 18:10:06 wco-luc-bu squid[24299]: idnsSendQuery: FD 6: sendto: (49)
Can't assign requested address
May 31 18:10:11 wco-luc-bu squid[24299]: comm_udp_sendto: FD 6,
200.152.83.33, port 53: (49) Can't assign requested address
May 31 18:10:11 wco-luc-bu squid[24299]: idnsSendQuery: FD 6: sendto: (49)
Can't assign requested address
May 31 18:10:21 wco-luc-bu squid[24299]: comm_udp_sendto: FD 6,
200.152.83.33, port 53: (49) Can't assign requested address
May 31 18:10:21 wco-luc-bu squid[24299]: idnsSendQuery: FD 6: sendto: (49)
Can't assign requested address
May 31 18:10:41 wco-luc-bu squid[24302]: comm_udp_sendto: FD 6,
200.152.83.33, port 53: (49) Can't assign requested address
May 31 18:10:41 wco-luc-bu squid[24302]: idnsSendQuery: FD 6: sendto: (49)
Can't assign requested address
May 31 18:10:46 wco-luc-bu squid[24302]: comm_udp_sendto: FD 6,
200.152.83.33, port 53: (49) Can't assign requested address
May 31 18:10:46 wco-luc-bu squid[24302]: idnsSendQuery: FD 6: sendto: (49)
Can't assign requested address
May 31 18:10:56 wco-luc-bu squid[24302]: comm_udp_sendto: FD 6,
200.152.83.33, port 53: (49) Can't assign requested address
May 31 18:10:56 wco-luc-bu squid[24302]: idnsSendQuery: FD 6: sendto: (49)
Can't assign requested address



...





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.




[squid-users] 2.7 min-size question

2008-05-31 Thread Michel (M)

does applying min-size to existing cache_dirs delete the existing smaller
objects, or does it apply only to newly cached objects?
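
For context, min-size here means the per-cache_dir option; a hypothetical line
just to show the syntax, not taken from my real config:

cache_dir diskd /c/c1 100000 16 256 min-size=65536

i.e. such a cache_dir would only accept objects of at least 64 KB.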

thanks
michel
...





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.




Re: [squid-users] Re: debug ALL,1 too noisy

2008-05-31 Thread Michel (M)

Henrik Nordstrom disse na ultima mensagem:
 On fre, 2008-05-30 at 08:32 -0300, Michel (M) wrote:
 Hi

 I think that squid's debug level ALL,1 is very noisy and does flood the
 log, especially with the following events

 squid[14086]: ctx: exit level  0 ...
 squid[14086]: ctx: enter level ...
 squid[14086]: httpProcessReplyHeader ...

 which do not have any value because nothing can be done so they are not
 exactly a warning|error

 I would like to see under the lowest log level only really important
 warnings or erros

 lowest level is 0


smart answer, but level 0 is also basically a no-log option, right ...

what I wanted to say is that the above events would be better placed at a higher
level and not at 1, because they really are spam; but if I set the level
to 0 I no longer get some important events (see the example below)
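
For illustration, debug_options accepts per-section levels, so something like
the following would keep ALL,1 but silence one noisy section; section 11 (HTTP)
is only my assumption about where those particular messages come from:

debug_options ALL,1 11,0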

michel
...





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.




[squid-users] debug ALL,1 too noisy

2008-05-30 Thread Michel (M)

Hi

I think that squid's debug level ALL,1 is very noisy and floods the
log, especially with the following events

squid[14086]: ctx: exit level  0 ...
squid[14086]: ctx: enter level ...
squid[14086]: httpProcessReplyHeader ...

which have no real value, because nothing can be done about them, so they are not
exactly warnings or errors

I would like the lowest log level to show only really important
warnings or errors

could that be done?

thanks
michel...






Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.




Re: [squid-users] Squid not running, PID exists

2008-05-23 Thread Michel (M)

Adrian Chadd disse na ultima mensagem:
 On Fri, May 23, 2008, Nick Duda wrote:
 Btw, doing squid -k shutdown still doesn't remove the PID.

 The reason why this is important to me is a script I am writing to do
 failover with WCCP.

 Thats why I generally use cachemgr to determine if Squid is running,
 not the .pid file.




well, unfortunately the pid handling is solved rather badly within squid's startup

if you like, you can work around the problem by adding something like the
following to your startup scripts:

#!/bin/sh
# remove a stale pid file if no squid process is behind it
pid_file=/squid/var/logs/squid.pid
pid=`cat $pid_file`
if ! ps acocomm -p "$pid" | grep -q squid; then rm "$pid_file"; fi

to make it complete you would need another part that checks whether squid is
running without a pid file, and another that checks whether the pid file really
belongs to the running squid, and only then start it safely (a fuller sketch follows below)

... starting blabla comes here
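
A minimal sketch of what the complete version could look like - the binary and
pid file paths are assumptions, and it is an illustration rather than a
drop-in rc script:

#!/bin/sh
squid_bin=/usr/local/squid/sbin/squid
pid_file=/squid/var/logs/squid.pid

if [ -f "$pid_file" ]; then
    pid=`cat "$pid_file"`
    # pid file exists: is that pid really a running squid?
    if ps -p "$pid" -o comm | grep -q squid; then
        echo "squid already running (pid $pid)"
        exit 0
    fi
    # stale pid file left over from a crash or reboot
    rm -f "$pid_file"
fi

# no (valid) pid file - refuse to start if some squid is running anyway
if ps ax -o comm | grep -q '^squid$'; then
    echo "squid running but pid file missing, not starting" >&2
    exit 1
fi

# ... starting blabla comes here
"$squid_bin"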



michel

...





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.




Re: [squid-users] serious squid (cache_dir) problem NOW confirmed with aufs

2008-05-21 Thread Michel (M)

Henrik Nordstrom disse na ultima mensagem:
 On mån, 2008-05-05 at 10:13 -0300, Michel (M) wrote:

 ok I will do it

 swap.state.new is written and stops after some bytes (  100 k), I guess
 then when the first client requests come in it stops writing it and
 swap.state grows out of bounds until disk is full



as you must have seen, I filed it in bugzilla
meanwhile I can confirm the same problem with aufs, and if someone wants more
detailed info I have the logs and swap.state backups here




 seems to happen only when a considerable cache_dir size when the rebuild
 is needing more then 60 seconds

 this as said before happens after a clean shutdown and with diskd

 would that be enough for a bug report?

 Please also include your cache_dir lines, and cache.log up to the point
 where swap.state.new stops growing.

 Regards
 Henrik












...





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.




Re: [squid-users] cache_dir (dirty) question

2008-05-06 Thread Michel (M)

Adrian Chadd disse na ultima mensagem:
 On Sun, May 04, 2008, Michel (M) wrote:


 I never thought so much about this but now it came up. I thought that
 the
 cache_dir dirty came when an unclean shutdown ocurred, or better, caused
 by file corruptions of the underlying FS

 thing is I am running ZFS and so there are no corrupt files even after
 power outage

 why squid still see dirty cache_dirs ?

 Its a function of the state of the cache log, -not- of the cache dir as
 a whole.



one more question here

let's say the swap.state is corrupt for any reason: I can delete it and
squid should rebuild it correctly, right?

so when I am sure that the cache_dir data is consistent (in my case
by using ZFS), squid should never get into problems if I do so, right?

and since squid does this swap.state -> swap.state.new -> swap.state dance
anyway, I could change my startup script to delete any swap.state before
starting squid, to make sure it comes up clean - or am I wrong here (sketch below)?
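
A minimal sketch of what that startup addition might look like, assuming a
single cache_dir under /usr/local/squid/var/cache (paths are placeholders):

#!/bin/sh
# remove the journal so squid rebuilds the index from the cache files
# (only sensible if the cache_dir contents are known to be consistent, e.g. ZFS)
cache_dir=/usr/local/squid/var/cache
rm -f "$cache_dir/swap.state" "$cache_dir/swap.state.new"

# then start squid as usual
/usr/local/squid/sbin/squid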


michel


...





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.




Re: [squid-users] cache_dir (dirty) question

2008-05-06 Thread Michel (M)

Henrik Nordstrom disse na ultima mensagem:
 On tis, 2008-05-06 at 06:45 -0300, Michel (M) wrote:

 lets say the swap.state is corrupt for any reason I can delete it and
 squid should rebuild it correctly right?

 Yes, but some information is lost when doing so, and it will take a very
 very long time if your cache is large.


 so since squid does this swap.state - swap.state.new - swap.state
 thing
 anyway I could change my startup script to delete any swap.state before
 starting squid to make sure it is coming up clean, or am I wrong here?

 Partially wrong.

 There is information in swap.state that can not be rebuilt from the
 cache directory. Mainly freshness updates and access counters.

 The rewrite of swap.state on startup/rotate is to compact the file
 pruning out no longer relevant details. While running swap.state is used
 as a journal for the cache.

 If swap.state is lost Squid will attempt to rebuild the cache index from
 the individual files, but not all information is available in the
 individual files and additionally it's a very I/O intensive task as each
 file has to be opened and read..


thank you very much for this clarification

I guess the counters are updated as soon as a file gets hit again, or not? So the
worst case is that an object is fetched again, or served from cache without being
revalidated, right?

since my hardware is fast and zfs helps a lot here, I lose only 5-10
minutes compared to a normal clean rebuild with the -F flag; that seems better
than discovering hours later that something went wrong. So, given the
swap.state problem I already reported, it looks like a valid workaround
for me at the moment to get out of the service outage.

thank's
michel


...





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.




Re: [squid-users] cache_dir (dirty) question

2008-05-05 Thread Michel (M)

Adrian Chadd disse na ultima mensagem:
 On Sun, May 04, 2008, Michel (M) wrote:


 I never thought so much about this but now it came up. I thought that
 the
 cache_dir dirty came when an unclean shutdown ocurred, or better, caused
 by file corruptions of the underlying FS

 thing is I am running ZFS and so there are no corrupt files even after
 power outage

 why squid still see dirty cache_dirs ?

 Its a function of the state of the cache log, -not- of the cache dir as
 a whole.



thanks for the clarification



 Adrian

 --
 - Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid
 Support -
 - $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -











...





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.




Re: [squid-users] serious squid (cache_dir) problem

2008-05-05 Thread Michel (M)

Adrian Chadd disse na ultima mensagem:
 Interesting! can you throw that into a bugzilla report? That seems like
 enough to start debugging the issue.



ok I will do it

swap.state.new is written and stops after some bytes (< 100 k); I guess
when the first client requests come in it stops writing it and
swap.state grows out of bounds until the disk is full

it seems to happen only with a considerable cache_dir size, when the rebuild
needs more than 60 seconds

as said before, this happens after a clean shutdown and with diskd

would that be enough for a bug report?




 Adrian


 On Sat, May 03, 2008, Michel (M) wrote:

 Hi there

 this problem is around since long time but only when an incorrect
 shutdown
 (powerfailure or kill) was the reason, but now it became a pattern ...

 but there was a workaraound, adding -F to squid start config so it
 didn't
 attend any request so long as the logs were not ready

 but this is not the case anymore, any request before swap_state is ready
 is fucking up the swap_state and it is growing out of bounds beyond
 available disk space and then squid dies because out of disk space when
 RunCache didn't terminated earlier because of number of insuccessfull
 retries



 FreeBSD  7.0-STABLE amd64 and i386 (Latest Sources)
 Squid  2.6STABLE19-20080?* (I  do not know which exact version)

 I believe major problem is I use  diskd for cache_dir here which seems
 to
 be abandoned (unfortunatly) ...

 I do not know about this issue when using aufs and ufs because I am not
 using it

 some comment on this?


 Michel

 ...




 
 Tecnologia Internet Matik http://info.matik.com.br
 Sistemas Wireless para o Provedor Banda Larga
 Hospedagem e Email personalizado - e claro, no Brasil.
 

 --
 - Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid
 Support -
 - $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -











...





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.




[squid-users] cache_dir (dirty) question

2008-05-04 Thread Michel (M)


I never thought much about this, but now it has come up. I thought that the
"dirty" cache_dir state occurred after an unclean shutdown, or rather was caused
by file corruption in the underlying FS

the thing is, I am running ZFS, so there are no corrupt files even after a
power outage

why does squid still see dirty cache_dirs ?

...

michel





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.




[squid-users] serious squid (cache_dir) problem

2008-05-03 Thread Michel (M)

Hi there

this problem has been around for a long time, but only when an incorrect shutdown
(power failure or kill) was the reason; now it has become a pattern ...

there used to be a workaround, adding -F to the squid start options, so it did not
serve any request as long as the logs were not ready

but that no longer helps: any request arriving before swap.state is ready
messes up swap.state, which then grows out of bounds beyond the
available disk space, and squid dies from lack of disk space unless
RunCache has already given up earlier because of the number of unsuccessful
retries



FreeBSD  7.0-STABLE amd64 and i386 (latest sources)
Squid  2.6STABLE19-20080?* (I do not know the exact version)

I believe the main problem is that I use diskd for the cache_dir here, which seems
to be abandoned (unfortunately) ...

I do not know whether this issue also exists with aufs and ufs, because I am not
using them

any comment on this?


Michel

...





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.




Re: [squid-users] Squid on DualxQuad Core 8GB Rams - Optimization - Performance - Large Scale - IP Spoofing

2007-10-16 Thread Michel Santos
Adrian Chadd disse na ultima mensagem:
 On Tue, Oct 16, 2007, Paul Cocker wrote:
 For the ignorant among us can you clarify the meaning of devices?

 Bluecoat. Higher end Cisco ACE appliances/blades. In the accelerator
 space,
 stuff like what became the Juniper DX can SLB and cache about double what
 squid can in memory.


o really? how much would that be? do you have a number or is it just talk?


 Just so you know, the Cisco Cache Engine stuff from about 8 years ago
 still beats Squid for the most part. I remember seeing numbers of
 ~ 2400 req/sec, to/from disk where appropriate, versus Squid's current
 maximum throughput of about 1000. And this was done on Cisco's -then-
 hardware - I think that test was what, dual PIII 800's or something?
 They were certainly pulling about 4x the squid throughput for the same
 CPU in earlier polygraphs.



I am not so sure that this 2400 req/sec wasn't actually per minute, and wasn't
incoming requests only rather than requests served from cache ...

I'll pay you a beer or even two if you show me a PIII-class device that can
satisfy 2400 req/sec from disk



 I keep saying - all this stuff is documented and well-understood.
 How to make fast network applications - well understood. How to have
 network apps scale well under multiple CPUs - well understood, even better
 by the Windows people. Cache filesystems - definitely well understood.



well, it is not only well understood but also well known that a Ferrari runs
faster than the famous john-doe-mobile - but the price difference is just as
well known, and even if it is well documented it makes no sense at all to compare
the two



squid does a pretty good job, not only in getting high hit rates but
especially considering the price

unfortunately squid is not a multi-threaded application, which by the way
does not stop you from running several instances as a workaround (see the
sketch below)

unfortunately again, diskd is kind of orphaned, but it certainly is
_the_choice_ for SMP machines by design, even more so when
running several diskd processes per squid process
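
A minimal sketch of that multi-instance workaround - two independent squid
processes, each with its own config; all paths and ports here are made up for
illustration:

#!/bin/sh
# instance 1: its own config, pid file, logs and cache_dir
/usr/local/squid/sbin/squid -f /usr/local/squid/etc/squid-1.conf

# instance 2: same binary, separate config on another IP/port
/usr/local/squid/sbin/squid -f /usr/local/squid/etc/squid-2.conf

# each squid-N.conf needs at least its own:
#   http_port      (e.g. 127.0.0.2:3128 vs 127.0.0.3:3128)
#   pid_filename
#   cache_dir, access_log, cache_log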


again unfortunately, people are told that squid is not SMP capable and that
there is no advantage in using SMP machines for it, so they tune
their single-die machines with 1 or 2 MB of cache to death and get nothing
out of it. So where does it end??? Easy answer: squid ends up as a
proxy for NATting corporate networks or poor ISPs which do not have
address space - *BUT NOT* as a caching machine anymore

fortunately it is still true that caching performance is first of all a matter of
fast hardware

so that you can see it and not just read the usual bla-bla, I attach a well-known mrtg
graph of the hit rate of a dual-opteron sitting in front of a 4MB/s ISP
POP

and I get considerably more hits than you stated at the beginning on larger
POPs - so I do not know where you get your 1000 req/sec limit for squid from ...
must be from your P-III goody ;)


but then, in the end, the current squid marketing is pretty bad: nobody talks about
caching, only about proxying, authentication and ACLs; even the makers are
not defending caching at all and are apparently no friends of running squid
as a multi-instance application, because the documentation about it is very
poor and sad


probably that is an answer to current demand, so they go with the crowd:
bandwidth is very cheap almost everywhere, so why should people spend their
brains and bucks on caching techniques. Unfortunately my bandwidth is
expensive and I am not interested in proxying or any other feature, so
perhaps my situation and position are different from what they are
elsewhere.

Michel

...





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.
attachment: squid0-hit-day.png

Re: [squid-users] 2.6-16 compile error on freebsd

2007-10-15 Thread Michel Santos

Thomas-Martin Seck disse na ultima mensagem:
 * Michel Santos ([EMAIL PROTECTED]):


 
  I get a compile error with squid-2.6-STABLE-16 as follows
 

 ...

  ./cf_gen cf.data ./cf.data.depend
  *** Signal 10
 
  Stop in /usr/local/squid/squid-2.6.STABLE16/src.
  *** Error code 1
 
 


 is it possibly a compiler problem?

 This is a bug in cf_gen that only manifests itself on FreeBSD 7
 (either because the new malloc implementation handles things
 differently in general or because its internal debugging code was
 active until FreeBSD-7 was officially branched in CVS). Please look
 at http://www.squid-cache.org/Versions/v2/2.6/changesets/ for the
 patch to fix this. [Shameless plug: or just use the port, it contains
 the fix.]





thank you
I don't know why I didn't see it myself, I had looked over that page before

anyway, it works, thanks

Michel



...





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.




Re: [squid-users] 2.6-16 compile error on freebsd

2007-10-14 Thread Michel Santos


 I get a compile error with squid-2.6-STABLE-16 as follows


...

 ./cf_gen cf.data ./cf.data.depend
 *** Signal 10

 Stop in /usr/local/squid/squid-2.6.STABLE16/src.
 *** Error code 1




is it possibly a compiler problem?

gcc 4.2.1 is the only difference on FreeBSD 7 I can find (on the FreeBSD 6
machines with gcc 3.4.6 it compiles fine)

on the other hand, squid compiled with gcc 3.4.6 on FreeBSD 6 runs fine on
FreeBSD 7



Michel
...





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.




[squid-users] 2.6-16 compile error on freebsd

2007-10-12 Thread Michel Santos


I get a compile error with squid-2.6-STABLE-16 as follows

2.6-15 compiles normally


awk -f ./cf_gen_defines ./cf.data.pre cf_gen_defines.h
sed  [EMAIL PROTECTED]@%3128%g; [EMAIL PROTECTED]@%3130%g;
[EMAIL PROTECTED]@%/usr/local/squid/etc/mime.conf%g;
[EMAIL PROTECTED]@%/usr/local/squid/libexec/`echo dnsserver | sed
's,x,x,;s/$//'`%g; [EMAIL PROTECTED]@%/usr/local/squid/libexec/`echo
unlinkd | sed 's,x,x,;s/$//'`%g;
[EMAIL PROTECTED]@%/usr/local/squid/libexec/`echo pinger | sed
's,x,x,;s/$//'`%g; [EMAIL PROTECTED]@%/usr/local/squid/libexec/`echo
diskd-daemon | sed 's,x,x,;s/$//'`%g;
[EMAIL PROTECTED]@%/usr/local/squid/var/logs/cache.log%g;
[EMAIL PROTECTED]@%/usr/local/squid/var/logs/access.log%g;
[EMAIL PROTECTED]@%/usr/local/squid/var/logs/store.log%g;
[EMAIL PROTECTED]@%/usr/local/squid/var/logs/squid.pid%g;
[EMAIL PROTECTED]@%/usr/local/squid/var/cache%g;
[EMAIL PROTECTED]@%/usr/local/squid/share/icons%g;
[EMAIL PROTECTED]@%/usr/local/squid/share/mib.txt%g;
[EMAIL PROTECTED]@%/usr/local/squid/share/errors/Portuguese%g;
[EMAIL PROTECTED]@%/usr/local/squid%g; [EMAIL PROTECTED]@%/etc/hosts%g;
[EMAIL PROTECTED]@%2.6.STABLE16%g;  ./cf.data.pre cf.data
if gcc -DHAVE_CONFIG_H
-DDEFAULT_CONFIG_FILE=\/usr/local/squid/etc/squid.conf\ -I. -I.
-I../include -I. -I. -I../include -I../include -Wall -g -O2 -MT
cf_gen.o -MD -MP -MF .deps/cf_gen.Tpo -c -o cf_gen.o cf_gen.c;  then mv
-f .deps/cf_gen.Tpo .deps/cf_gen.Po; else rm -f .deps/cf_gen.Tpo;
exit 1; fi
if gcc -DHAVE_CONFIG_H
-DDEFAULT_CONFIG_FILE=\/usr/local/squid/etc/squid.conf\ -I. -I.
-I../include -I. -I. -I../include -I../include -Wall -g -O2 -MT
debug.o -MD -MP -MF .deps/debug.Tpo -c -o debug.o debug.c;  then mv -f
.deps/debug.Tpo .deps/debug.Po; else rm -f .deps/debug.Tpo; exit 1;
fi
/usr/bin/perl ./mk-globals-c.pl  ./globals.h  globals.c
if gcc -DHAVE_CONFIG_H
-DDEFAULT_CONFIG_FILE=\/usr/local/squid/etc/squid.conf\ -I. -I.
-I../include -I. -I. -I../include -I../include -Wall -g -O2 -MT
globals.o -MD -MP -MF .deps/globals.Tpo -c -o globals.o globals.c;  then
mv -f .deps/globals.Tpo .deps/globals.Po; else rm -f
.deps/globals.Tpo; exit 1; fi
gcc  -Wall -g -O2  -g -o cf_gen  cf_gen.o debug.o globals.o -L../lib
-lmiscutil -lm
./cf_gen cf.data ./cf.data.depend
*** Signal 10

Stop in /usr/local/squid/squid-2.6.STABLE16/src.
*** Error code 1



here the options I use

./configure --enable-default-err-language=Portuguese \
--enable-storeio=diskd,ufs,null \
--enable-removal-policies=heap,lru --enable-underscores
--disable-ident-lookups \
--disable-hostname-checks --enable-large-files
--disable-http-violations \
--enable-snmp --enable-truncate \
--enable-external-acl-helpers=session \
--disable-wccp --disable-wccpv2 \
--enable-follow-x-forwarded-for \
--disable-linux-tproxy --disable-linux-netfilter --disable-epoll \


Michel
...





Tecnologia Internet Matik http://info.matik.com.br
Sistemas Wireless para o Provedor Banda Larga
Hospedagem e Email personalizado - e claro, no Brasil.




Re: [squid-users] acl [NO] bug (when peers configured)

2007-09-01 Thread Michel Santos

Henrik Nordstrom disse na ultima mensagem:
 On fre, 2007-08-31 at 21:10 -0300, Michel Santos wrote:

 well, I was trying .. asking, begging 'endless' (=_almost) for six
 month
 with logs until i did finally that scary magic touch of /32 and bingo ..
 everything works

 And if you now remove the /32?


just checked

'now' it is working

when did the secret service fix it? I never saw a note


michel

...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] acl [NO] bug (when peers configured)

2007-08-31 Thread Michel Santos

Henrik Nordstrom disse na ultima mensagem:
 On tor, 2007-08-30 at 08:27 -0300, Michel Santos wrote:

 *THIS* is the thing here: that any acl configured on the frontend cache
 is
 not beeing applied to any request from the peer

 Then check your http_access rules. You have something else in there...


hey thank you!

I found it: there was an extra 'http_access allow peer' above the acls in
two older frontend squids

thinking this over, it means that when the IP address of an 'acl peer src $1'
matches the IP range of 'acl all src ip/mask', I do not need to specify
an additional 'http_access deny peer we_acl' if 'http_access deny all
we_acl' is already defined before it, right?


michel


...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] How can i block this type of script

2007-08-31 Thread Michel Santos

jeff donovan disse na ultima mensagem:
 greetings

 i am using squidguard for content filtering.

 How can i block this type of script?

 http://www.softworldpro.com/demos/proxy/

 it's easy to block the url. but when the script is executed there is
 nothing in the url that will let me key in on.


what do you mean with 'let me key in on'?


 here is the regex I am using:

 #Block Cgiproxy, Poxy, PHProxy and other Web-based proxies
 (cecid.php|nph-webpr|nph-pro|/dmirror|cgiproxy|phpwebproxy|nph-
 proxy.cgi|__new_url)



using squid's own facilities, in squid.conf you would do something like

acl clients src 200.1.1.0/27

acl bla urlpath_regex cecid\.php
acl bla ...

http_access deny clients bla



...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] acl [NO] bug (when peers configured)

2007-08-31 Thread Michel Santos

Henrik Nordstrom disse na ultima mensagem:
 On fre, 2007-08-31 at 09:24 -0300, Michel Santos wrote:

  192.168.1.0/24 is the same as 192.168.1.0-192.168.1.255
 

 really ;)

 a range indicator is allowed?

 Yes.

I was asking about the dash '-'


 The full specification is

 IPA-IPB/MASK


well, no need to teach a dog to bark ;)

 where IPB defaults to IPA if not specified, and /MASK defaults to /32 if
 not specified (at least unless you use a old now obsolete Squid version
 where it guesses the mask size based on the format of the IP...)


well, I guess in 2.6 something is wrong at this particular point, unless some
secret work fixed it (I have not checked anything newer than 14S); if you remember,
this is not working with any 2.6 when the request comes from a local address,
but with 2.5 it is

shortcut:

#on 127.0.0.2
acl peer src 127.0.0.1

gets 'access denied' for all requests from 127.0.0.1

#on 127.0.0.2
acl peer src 127.0.0.1/32

and 127.0.0.1 goes through ...


michel

...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] acl [NO] bug (when peers configured)

2007-08-31 Thread Michel Santos

Henrik Nordstrom disse na ultima mensagem:
 On fre, 2007-08-31 at 19:16 -0300, Michel Santos wrote:

 well, I guess in 2.6 is something wrong at this special point, unless
 some
 secret work fixed it (I have not checked  14S), if you remember this is
 not working with any 2.6 when coming from a local address, but with 2.5
 it
 is

 shortcut:

 #on 127.0.0.2
 acl peer src 127.0.0.1

 gets 'access denied' for all requests from 127.0.0.1

 #on 127.0.0.2
 acl peer src 127.0.0.1/32

 and 127.0.0.1 goes through ...

 Then I guess you must have changed something else as well. 127.0.0.1
 127.0.0.1/32 and 127.0.0.1/255.255.255.255 is all equivalent and matches
 the exact ip 127.0.0.1, and has always been..


hmm, I haven't changed anything other than the squid version

 The magic autodetection of the mask size in earlier releases only kick
 in if the ip ends in .0, but was inconsistent and therefore removed...


this is what scares me to death: 'magic' ...

my obs.:
magic starts where maths ends ... ;)

 There has not been any changes in this part of the code since 31 July
 2006 when the mask size detection was removed..


well, I was trying .. asking, begging 'endlessly' (=_almost) for six months,
with logs, until I finally applied that scary magic touch of /32 and bingo ..
everything works


michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] acl bug (when peers configured)

2007-08-30 Thread Michel Santos

Henrik Nordstrom disse na ultima mensagem:
 On tor, 2007-08-30 at 06:02 -0300, Michel Santos wrote:
 There is appearently an acl bug

 acls do not work for peers

 They do work for peers, just the same as any other http client. There is
 nothing special about peers in the access controls.

 acl all src 200.152.80.0/20

 Warning: Don't redefine the all acl unless you are very careful. It's
 used in a number of defaults and meant to match the whole world, and
 results can become a bit confusing if redefined...

 Instead define a mynetwork acl to match your clients..



I just did this but it does not change the misbehaviour I described


 acl danger urlpath_regex -i instal\.html
 http_access deny all danger
 #

 so far this works for all, I mean it blocks as wanted


 #
 acl all src 200.152.80.0/20
 acl peer src 200.152.83.40
 acl danger urlpath_regex -i instal\.html
 http_access deny all danger
 http_access deny peer danger

 Nothing obviously wrong, apart from the use of the all acl..

ok, in fact the acl all ... is not the point and works anyway despite your
observation; what is NOT working as supposed is acl peer ... and its
following deny clause for the peer



 does NOT when accessing directly from a browser from 200.152.83.40

 Should it? When going directly Squid is not used...

well well ... directly from a browser, not as always_direct or something

I mean accessing the parent as a client: since the frontend
cache is a transparent proxy, it catches/intercepts this connection and should
apply the acl, which it in fact does as long as the IP is not part of the acl
peer src

when I change the acl peer src *IP*, then the acl works for this machine
as well as for all non-peer clients of the frontend cache


*THIS* is the thing here: any acl configured on the frontend cache is
not being applied to any request from the peer


michel

...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] acl bug (when peers configured)

2007-08-30 Thread Michel Santos

Henrik Nordstrom disse na ultima mensagem:
 On tor, 2007-08-30 at 08:27 -0300, Michel Santos wrote:

 *THIS* is the thing here: that any acl configured on the frontend cache
 is
 not beeing applied to any request from the peer

 Then check your http_access rules. You have something else in there...

 There is absolutely nothing special about peers in access controls. They
 are just HTTP clients just as any other HTTP client.


ok, then I will isolate a pair from the cluster tonight and double-check
everything
thanks so far

michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] squid do the inverse of what it should do !!!!!!!!!!! help !!!!!!!!!

2007-08-27 Thread Michel Santos

pinky you disse na ultima mensagem:
 finallyyy

 I figured it out, with your help of course

 It's not a squid issue , in fact my satellite provider
 NewSky has a defected Cisco interface in its site,
 which duplicate each packet I received ( Every request
 I send I receive a duplicate answer )
 I called the provider and told them about the
 duplicated packets I received from them , and they
 solve change the defected interface.


just curious, how would an interface do that?

Other than tcp retransmission timeouts being exceeded (which BTW would
resend one packet or another, not all of them), I cannot even imagine a reason for
that, other than a malicious attack (syn flood), because under normal
circumstances the sender *will_not* retry the retransmission endlessly
but will mark the target as unreachable

 In fact squid was getting double answer so , it does
 not know what to do


I guess squid would not even see such packets at all: they should be discarded
by your router at layer 3, or by your OS at layer 4, where the tcp flags and
sequence numbers are checked before anything goes up to the application layer.


michel

...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] squid do the inverse of what it should do !!!!!!!!!!! help !!!!!!!!!

2007-08-26 Thread Michel Santos

pinky you disse na ultima mensagem:

 but perhaps you check first who has access to your
 box and change lines
 like acl all 0.0.0.0 or so


 I checked that for sure , I mentioned that in my first
  email .


no you didn't

certainly squid does not download things by itself, right, so the traffic comes from
somewhere, and if somebody can use your squid then you are allowing access to
it

anyway, traffic is what you complain about, and traffic you can observe
easily (tcpdump) and find in seconds where it comes from; probably netstat
shows it already

is your cisco config correct? maybe you are sending squid's traffic back to squid as well ...

michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] squid do the inverse of what it should do !!!!!!!!!!! help !!!!!!!!!

2007-08-26 Thread Michel Santos

pinky you disse na ultima mensagem:


 no you didn't

 well you can see I said that I put strict acl



'strict acl' tells me nothing, but your actual 'acl all src ip_range' lines would ..


 anyway, traffic is what you complain about and
 traffic you can observe
 easy (tcpdump) and find in seconds where it comes
 I used tcpdump , but as  u can see I have 15Mbps (1000
 live users) , so thats not so easy.


:) nice excuse for guessing ... so someone with a gigabit link does what, then?

tcpdump -n 'tcp dst port squid_tcp_port and not src net your_cli_ip_network'

or something along those lines will give you a clear picture of what is going on



michel

...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] squid do the inverse of what it should do !!!!!!!!!!! help !!!!!!!!!

2007-08-25 Thread Michel Santos

pinky you disse na ultima mensagem:

 rpm that come with the distor)  in transparent mode ,
 cisco 2811 redirect the packet to squid via wccp2.

that is not transparent mode

 everything works great till that day when Squid
 inverse its purpose!!! ( its start to use far more
 bandwidth than my users do ) you can see the mrtg
 picturs below ( I put links for them).

inverse??? hmmm

 I tried everything . ( disabled the cache and make it

really?

 work as proxy only, used delay loop , change the
 distro and change the squid version and even changed
 the wccp options and version in the router and squid )

 but the problem remains .

 Please help me before losing my job !!! :(


depends on how much they pay me - I don't care, and if it is enough I'll even
buy you a beer :)

but perhaps you should first check who has access to your box and change lines
like acl all 0.0.0.0 or so




michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] Opinions sought on best storage type for FreeBSD- suggestion for developers

2007-08-24 Thread Michel Santos

Adrian Chadd disse na ultima mensagem:
 On Mon, Aug 20, 2007, Nicole wrote:

 [snip good points]

  It has been found that people are more likely to donate money for
 something
 specific than for a general cause.

 Maybe we're not doing it right; but people seem quite happy to suggest
 functionality but not be willing to donate to see it happen.


I agree much more with what Nicole just said

this donation thing does not work

first of all, everybody is scared, myself included, because you developers
are scary guys: you're good at what you do, so that means expensive, and most
are kind of harsh with anyone who does not know at least 100% of the technical
vocabulary - so in the end you all talk yourselves out of getting anything
other than honors, and often not even that :) but criticism

and then when a fearless one like me comes along and asks how much it would cost,
the answer is -z ...

Adrian, you have lots of ideas and told me more than twice that you have no time
(money) to do this and that. So, adding to Nicole's idea, I would like you
to put your ideas on a website, with a short description and an idea of the
project cost, so maybe you will find a sponsor or several
co-sponsors more easily. As in supermarkets, nobody would buy anything if there were
no prices on the cans.

Then there is also something everybody can compare against, and then
offers for other pieces of work will surely or at least possibly come in, I guess

 I'd love to see that change!

sure, absolutely, all of us

michel

...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] Syslog configuration

2007-08-22 Thread Michel Santos

Henrik Nordstrom disse na ultima mensagem:
 On ons, 2007-08-22 at 01:37 -0300, Michel Santos wrote:

  access_log syslog:LOG_LOCAL4 squid
 

 hmm, isn't this how it should work?

 access_log syslog:local:4

 No, 2.6.STABLE14 and earlier 2.6 releases uses a bit twisted and
 undocumented syntax for specifying syslog facility and log level.

  syslog:LOG_FACILITY|LOG_LEVEL

 where LOG_FACILITY is LOG_ followed by the facility name in uppercase.
 And similar for LOG_LEVEL.. Borrowed from the C syntax when using the
 syslog(3) function.

 We have now changed this to use the more familiar syslog.conf syntax and
 documented it..


so then I have to be careful when upgrading, because my 2.6.STABLE14 installs are still
working well with 'access_log syslog:local:4'

michel

...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] Syslog configuration

2007-08-22 Thread Michel Santos

Henrik Nordstrom disse na ultima mensagem:
 On ons, 2007-08-22 at 04:55 -0300, Michel Santos wrote:

 so then I have to take care when upgrading because my 2.6.S14 are still
 working well with 'access_log syslog:local:4'

 That syntax is not understood by any version and is silently ignored,
 resulting in the log being sent to daemon.info  (same as LOG_DAMON|
 LOG_INFO)

 This is true for 2.6.STABLE14 at least. Later versions may reject the
 invalid configuration as invalid.

 If you want the log sent to the local4 facility in 2.6.STABLE14 then
 specify syslog:LOG_LOCAL4 nothing else.


well, I don't know about there, but here I am using it and it is working
perfectly, as in

access_log syslog:local:4

but in your defense :) I must say that your version also works in
exactly the same way when configured as

access_log syslog:LOG_LOCAL4


both log to the file defined in syslog.conf for the local4.* facility



michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] Syslog configuration

2007-08-22 Thread Michel Santos

c0re dumped disse na ultima mensagem:
 It just won't work !

 access_log /squid/var/logs/access.log squid
 access_log syslog:LOG_LOCAL4 squid
 (I need to log to both: access.log and syslog)



I am not sure whether you can log to both; I just tried here and it does not
log when two access_log lines are configured

michel

...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] Syslog configuration

2007-08-21 Thread Michel Santos

Henrik Nordstrom disse na ultima mensagem:
 On fre, 2007-08-17 at 10:53 -0300, c0re dumped wrote:
 Hello guys,

 Hi would like to log to both: syslog on a remote machine AND
 /var/log/access.log.

 Is that possible ?

 In my squid squid.conf i seted it up:

   access_log /squid/var/logs/access.log squid
   access_log syslog squid

 Looks fine to me, but you probabl need to specify the facility if you
 want to use local4, the default is daemon I think.

 access_log syslog:LOG_LOCAL4 squid


hmm, isn't this how it should work?

access_log syslog:local:4

provided the file referenced for local4 in syslog.conf exists and syslog is
already aware of it (see the example below)
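
For illustration, the kind of syslog.conf entry meant here; the log file path
is only an example:

# /etc/syslog.conf - everything logged to the local4 facility goes to its own file
local4.*        /var/log/squid/access-syslog.log
# (create the file first and HUP syslogd so it re-reads the config)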

michel


...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] endless growing swap.state after reboot

2007-08-15 Thread Michel Santos

Henrik Nordstrom disse na ultima mensagem:
 On tis, 2007-08-14 at 17:22 -0300, Michel Santos wrote:

 well, just got one, what now? Do you want the file?

 No, but I want you to hold on to it so you can test things without
 having to reboot a server and cross your fingers..

 now I did the same again but started squid with -F and all good

 so I guess we found where to look, something wrong while writing to
 swap.state when still rebuilding it

 Next test is to see if the problem is also seen without -F but with no
 traffic on the proxy.

no, with no traffic there is no problem; this statement is based on my unsuccessful
weekend tests and on 3 production servers yesterday - on one of them
I cut off the client side while testing without -F and all was good



 Another test I'd like you to run is to try using aufs instead of diskd.
 And also the same for ufs.


yes, same issue

it seems that swap.state.new stays stuck at 72 bytes while swap.state grows fast

michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] endless growing swap.state after reboot

2007-08-14 Thread Michel Santos

Henrik Nordstrom disse na ultima mensagem:
 unfortunatly I was sleeping and didn't backed up the swap.file but I can
 do it again later if you need it.

 Please try. But as you indicate above it's possible the problem is not
 caused by the swap.state, but by concurrent traffic while the cache is
 being rebuilt in which case producing a test case is somewhat more
 complex..



if somebody would like to help catch this problem, here is a sh script which
backs up the swap files into /usr/local/squid/swap-bu before starting squid.

It should work for squid on freebsd; otherwise look into it before running it.
You should run it from your squid startup script, putting it on the first
line without '' at the end of the line. If you do not have a squid start
script, execute it before squid or put it into /usr/local/etc/rc.d with a
000. prefix

http://suporte.lucenet.com.br/supfiles/swap.state.bu.sh.tar.gz

then, as henrik said before, if squid gets confused after startup we need
the backed-up swap.state

thanks
michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] Opinions sought on best storage type for FreeBSD

2007-08-14 Thread Michel Santos

Mark Nottingham disse na ultima mensagem:
 FreeBSD and aufs was discussed a while back, IIRC, and the upshot was
 that for FreeBSD 6, it's useful (threads on 4 is a no-no). The
 lingering doubt in my mind was this bug: http://www.freebsd.org/cgi/
 query-pr.cgi?pr=103127, which appears to have been patched in 6.1-
 RELEASE-p5.

 So, in a nutshell, can it be safely said that aufs is stable and
 reasonably performant on FreeBSD = 6.2, as long as the described
 thread configuration is performed?


on 6.2 you do not need to do anything else than add aufs to configure

--enable-storeio=diskd,ufs,aufs,null (or whatever options you like)

and it should work well; I had no problem at all with the aufs model
itself besides queue-congestion alert msgs while the swap.state rebuild was
in progress and sometimes under load. Whatever value I set with
--with-aufs-threads=N didn't help

you probably should add to or create your /etc/libmap.conf as follows

[/usr/local/squid/sbin/squid]
libpthread.so.2 libthr.so.2
libpthread.so   libthr.so


michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] endless growing swap.state after reboot

2007-08-14 Thread Michel Santos

Henrik Nordstrom disse na ultima mensagem:

 Please try. But as you indicate above it's possible the problem is not
 caused by the swap.state, but by concurrent traffic while the cache is
 being rebuilt in which case producing a test case is somewhat more
 complex..



well, I just got one - what now? Do you want the file?

But this confirms what I argued this morning, look:

copied swap.state, stopped squid when I saw it growing

copied the backed-up swap.state file back, started squid - growing again

then I did the same again but started squid with -F, and all was good

so I guess we found where to look: something goes wrong while writing to
swap.state while it is still being rebuilt


michel

...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] Opinions sought on best storage type for FreeBSD

2007-08-13 Thread Michel Santos

Tek Bahadur Limbu disse na ultima mensagem:

 what size is your link?

 For each proxy, the link is burstable upto to 15 mbps. But they are
 grouped together in different groups. We have 6 groups. Each group has
 bandwidth ranging from 5 mbps to 20 mbps. However since our link comes via
 satellite, the proxies starts building a large number of mbufs especially
 when our uplink gets saturated. Since it's a satellite link, bandwidth is
 never enough no matter how big we are subscribing. We still have some time
 to go (maybe months, or years) before we get it from a fiber link.


 Sure this is not related to your crash and to your link either but
 somaxconn is the queue size of pending connections and not the number of
 connections and you are probably setting this far too high. somaxconn as
 1024 or max 2048 would be more reasonable and nmbcluster I would not set
 higher than 128 or 256k

 if you eat that up you have other troubles and increasing this values
 does
 not solve them I guess

 Well I am using nmbcluster = 256000 on some of my FreeBSD-6.2 machines
 because they don't support setting the nmbcluster to 0. Well let me try
 setting somaxconn to 2048.

I suggest again starting with a clean system, as said in a former msg,
observing it, and then checking value by value instead of mixing it all up at
once (a conservative starting point is sketched below)
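
A hypothetical /etc/sysctl.conf starting point along those lines - the numbers
are examples to check one by one, not a recommendation:

# queue of pending (not yet accepted) connections, not the total connection count
kern.ipc.somaxconn=1024
# mbuf clusters; on some releases this must go into /boot/loader.conf instead
kern.ipc.nmbclusters=131072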



 - From my observation in recent months, the mbufs value has not crossed
 120K. I will probably use 128K or 256K. I read an article regarding
 setting somaxconn=32768 to help stop SYN flooding.

 http://silverwraith.com/papers/freebsd-ddos.php


who am I to understand miracles? Without saying anything else, I suggest you
compare what the man page (or tuning(7)) says somaxconn is with what that
author claims it is, and judge the other statements accordingly ...


 In your opinion, what's wrong with setting nmbcluster to 0 since, in
 this way, I never run out of mbufs?


sorry if I gave the wrong impression that I wanted to lecture or something;
I am not saying it is wrong (how would I know?), I am only exchanging ideas
here, ok, and saying that I would do it differently and what my opinion is



michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] Opinions sought on best storage type for FreeBSD

2007-08-13 Thread Michel Santos

Adrian Chadd disse na ultima mensagem:

 well that was my knowledge about chances but here are not so many
 options,
 or you are a hell of forseer or you create an algorithm, kind of
 inverting
 the usage of the actual or other cache policies applying them before
 caching the objects instead of controlling the replacement and aging

 No, you run two seperate LRUs, glued to each other. One LRU is for new
 objects that are coming in, another LRU is for objects which have been
 accessed more than once.


well, I didn't mean to eliminate the cache policies and use this instead; I
meant using them in a similar way for this purpose - anyway, we are basically
saying the same thing I guess, or meaning it at least :)


 A few reasons:

 * I want to do P2P caching; who wants to pony up the money for open source
   P2P caching, and why haven't any of the universities done it yet?


there used to be some p2p cache projects and software, which died because of
trouble with the author/owner rights of the cached content, which could be
interpreted as redistribution or something; it seems a dutch network had a
good product


 * bandwidth is still not free - if Squid can save you 30% of your HTTP
   traffic and your HTTP traffic is (say) 50% of 100mbit, thats 30% of
   30mbit, so 10mbit? That 10mbit might cost you $500 a month in America,


absolutely, no need to convince me - I work with caching for exactly those
reasons; I brought it up because I believe I can understand why
people are not so into it anymore


   in developing nations..

tell me about it ... we pay US$700-900 for each 2Mbit/s ... and now you know
why we are poor - we get milked dry by everyone :)



 Would you like Squid to handle 100mbit+ of HTTP traffic on a desktop PC
 with a couple SATA disks? Would you like Squid to handle 500-800mbit of
 HTTP traffic on a ~$5k server with some SAS disks? This stuff is possible
 on today's hardware. We know how to do it; its just a question of
 writing the right software.


yep, definitely people with great ideas own the future,
and it seems you will continue working on cache projects; I hope
you make a lot of money with all that, so you might have more *time* in
the future :)


michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] endless growing swap.state after reboot

2007-08-13 Thread Michel Santos

Henrik Nordstrom disse na ultima mensagem:
 On sön, 2007-08-12 at 12:49 -0300, Michel Santos wrote:

 that's from one cache dir and took 5.8 seconds seems to be really
 wrong,
 look at the time stamps:

 Time stamps during the rebuild process is not working well when you use
 -F. This because Squid is only rebuilding the cache index, and it's
 notion of time is a bit messed up.

 Things return to norma when the rebuild is finished.



sooo, the first machine I rebooted without shutting down squid did it again:
swap.state grows endlessly

I rebooted two others, but with -F, and all was good

so it seems that writing to swap.state while the cache is still being rebuilt is
where the dog is buried

unfortunately I was sleeping and didn't back up the swap.state file, but I can
do it again later if you need it.

michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




[squid-users] acl bug or is it so?

2007-08-13 Thread Michel Santos

please have a look :


acl all src 200.152.80.0/24

acl danger urlpath_regex -i blabla

http_access deny all danger
miss_access deny all danger

blocks and works, ok so far

##

acl all src 200.152.80.0/24
acl peer src 200.152.80.21

acl danger urlpath_regex -i blabla

http_access deny all danger
miss_access deny all danger

http_access deny peer danger
miss_access deny peer danger

it blocks for acl all but _NOT_ for the peer IP, and also not when the peer
IP is accessing as a normal client with a browser rather than as a peer

am I doing something wrong or is it a bug?


same result here when using dstdomain or url_regex in place of urlpath_regex

michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] endless growing swap.state after reboot

2007-08-12 Thread Michel Santos

Henrik Nordstrom wrote in the last message:
 On fre, 2007-08-10 at 13:55 -0300, Michel Santos wrote:

 just to get it straight

 I start squid with this former swap.state but empty cache_dirs

 yes.

 Is it that exactly?

 yes.


 But before you do that we perhaps should do the same, but without
 erasing the cache directories.

 swap.state should shrink at this stage, eliminating it's reference when
 not finding the file right?

 only if the rebuild is successful, in which case this test failed..



I am in the visiting-the-doctor-and-pain-is-gone stage ...

I still was not able to get my test machine to damage the swap.state file


I am still loading the cache_dir and so far I have 2 GB in there and the
rebuild takes only a few seconds. No reset or kill triggered it, and I tried
every couple of hours.

That brought me to check my startup scripts, which I haven't touched in a
long time, and I am not using the -F option. Since my production caches are
of considerable size and the rebuild takes up to 2 minutes, and some big
caches need 4-5 minutes, I am starting to think that the swap.state mess has
something to do with me not starting with the -F option.

What do you think? is it possible that the problem is hidden here?

If I am not able to make it happen here on my test machine until Monday
morning I will sacrifice two production caches and restart one with -F and
the other without, under incoming request load. Then we'll see.
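
Concretely, the only difference between the two will be the -F switch at
startup; a rough sketch (the install path is just an example, take it from
your own startup script):

  /usr/local/squid/sbin/squid -F    # rebuild the whole store index before serving any requests
  /usr/local/squid/sbin/squid       # start as usual, serving requests while the index rebuilds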

michel

...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] Opinions sought on best storage type for FreeBSD

2007-08-12 Thread Michel Santos

Tek Bahadur Limbu wrote in the last message:

 how much mem the server has installed?

 Most of them have 1 GB memory


well, I believe that is really too low for such a busy machine and you
should think of 4-8 gigs (or more?) for such a server


 what is you kern.maxdsiz value?

 It's the default value of 512 MB. I guess I may have to increase it to say
 768 MB.

 I can put the following value in /boot/loader.conf:

 kern.maxdsiz=754974720

you can start there, but that is still too low; I set this to 4 or 6 GB,
but I have much more RAM than you in my servers





 How much memory squid is using just before it crashs? is it using swap?
 what ipcs tells you then or under load?

 Squid could be using somewhere between 500 to 700 MB of memory before it
 crashes.

what do you mean? "Could", nothing certain? What is your cache_mem setting?


 It was not using swap.

sure not; if you have 1 GB of RAM and there are 512 MB left, then squid will
crash as soon as the 512 MB you allow are used up, so no chance to get to
swap either

set your maxdsiz to 1 or 2 GB and watch the magic happen
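
For example, something like this in /boot/loader.conf (the number is only an
example for 2 GB, size it to the RAM you really have):

  kern.maxdsiz=2147483648    # upper limit on a single process' data segment (2 GB here)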



 Currently, ipcs tells me:


no good, ipcs -a at least


 Most of them are Dell SC-420 machines:
 CPU 2.80GHz (2793.09-MHz K8-class CPU)
 Hyperthreading: 2 logical CPUs
 OS: FreeBSD-6.0-6.1 (amd64).


6.2 is way better and RELENG_6 is really stable; you could upgrade, which
should be possible with no downtime besides one reboot



  By the way, do you have some optimal settings which can be applied to
  diskd? Below are some values I use:
 
  options SHMSEG=128
  options SHMMNI=256
  options SHMMAX=50331648 # max shared memory segment size
 (bytes)
  options SHMALL=16384# max amount of shared memory (pages)
  options MSGMNB=16384# max # of bytes in a queue
  options MSGMNI=48   # number of message queue identifiers
  options MSGSEG=768  # number of message segments
  options MSGSSZ=64   # size of a message segment
  options MSGTQL=4096 # max messages in system
 
  Correct me where necessary.
 


 that does not say so much, better you send what comes from sysctl
 kern.ipc

 #sysctl kern.ipc


you see? your kernel options are not exactly what you get at runtime right ;)




 You mean set SHMMAXPGS using sysctl or compile it? Also what the best
 value for SHMMAXPGS?

yes, sysctl; they are runtime tunable

you must check with ipcs and set your system to what works well
without using values that are too high

other values I saw are possibly not such good choices, as somaxconn seems
way too high and nmbclusters is 0 ?


maybe you trust the FreeBSD auto-tuning and compile your kernel with
maxusers 0, then restart without sysctl values but with maxdsiz at 1 GB or
so and see what happens.




Michel

...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] endless growing swap.state after reboot

2007-08-12 Thread Michel Santos

Henrik Nordstrom wrote in the last message:
 On sön, 2007-08-12 at 11:59 -0300, Michel Santos wrote:

 That brought me to check my startup scripts which I haven't touch since
 long time and I am not using the -F option. Since my production caches
 do
 have considerable size and the rebuild is up to 2 minutes and some big
 caches need 4-5 minutes I start thinking that the swap.state mess has
 something to do with that I am not starting with the -F option.

 What do you think? is it possible that the problem is hidden here?

 Quite possible. -F is not actively tested and do change the rebuild
 procedure a bit.


Couldn't wait, I just did it on a server now. I am pretty sure that
normally the mess would have begun, but with -F it built the swap.state and
started working normally, no problem

Aug 12 12:39:49 wco-mir squid[991]: Done reading /c/c2 swaplog (2659207
entries)
Aug 12 12:39:49 wco-mir squid[991]: Finished rebuilding storage from disk.
Aug 12 12:39:49 wco-mir squid[991]:   2289447 Entries scanned
Aug 12 12:39:49 wco-mir squid[991]: 0 Invalid entries.
Aug 12 12:39:49 wco-mir squid[991]: 0 With invalid flags.
Aug 12 12:39:49 wco-mir squid[991]:   2289447 Objects loaded.
Aug 12 12:39:49 wco-mir squid[991]: 0 Objects expired.
Aug 12 12:39:49 wco-mir squid[991]:362061 Objects cancelled.
Aug 12 12:39:49 wco-mir squid[991]: 0 Duplicate URLs purged.
Aug 12 12:39:49 wco-mir squid[991]: 0 Swapfile clashes avoided.
Aug 12 12:39:49 wco-mir squid[991]:   Took 5.8 seconds (396434.2
objects/sec).

that's from one cache dir, and "took 5.8 seconds" seems to be really wrong;
look at the time stamps:

Aug 12 12:28:21 wco-mir squid[991]: Starting Squid Cache version
2.6.STABLE14-20070731 for amd64-unknown-freebsd6.2...


that's ten minutes; it probably has to do with the wrong percentage
calculation when things go wrong, too?



michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] Opinions sought on best storage type for FreeBSD

2007-08-12 Thread Michel Santos

Tek Bahadur Limbu wrote in the last message:


 Ok let me upgrade my memory before setting it to 2 GB or more.
 I will set it to 768 MB for now since I have only 1 GB of memory at the
 moment.


I believe with the stock maxdsiz your squid process can not use more than
the 512 MB limit ... so I do not know where you get 600 from

maxdsiz is not only RAM related; it defines the upper limit of memory a
process can use, so I believe your machine does not swap even when there is
not enough RAM for the process (generally) but enough to get to the
limit (maxdsiz), and that might be the reason your squid process crashes
when it tries to use more than the 512 MB limit



 other values I saw are eventually not so good choices, as somaxconn
 seems
 way to high and nbmclusters are 0 ?

 Well I will reduce somaxconn to 8192. The reason why I set nbmclusters
 to 0 is because of satellite link delays and high number of tcp
 connections, I run out of mbufs. They easily reach between 64000 -
 128000 and sometimes even more. Every now and then, I would lose tcp
 connections due to the high number of mbufs in use. So I found this
 little hack which keeps the number of mbufs utilization at bay.


what size is your link?

Sure, this is not related to your crash, and not to your link either, but
somaxconn is the queue size of pending connections and not the number of
connections, and you are probably setting this far too high. somaxconn at
1024 or at most 2048 would be more reasonable, and nmbclusters I would not
set higher than 128 or 256k

if you eat that up you have other troubles, and increasing these values
does not solve them I guess
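
In concrete terms, roughly this (the numbers are only examples; on 6.x
nmbclusters is normally a boot-time tunable, so it goes in loader.conf
rather than sysctl.conf):

  # /etc/sysctl.conf
  kern.ipc.somaxconn=1024        # listen queue depth for pending connections

  # /boot/loader.conf
  kern.ipc.nmbclusters=131072    # 128k mbuf clusters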




michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] Opinions sought on best storage type for FreeBSD

2007-08-11 Thread Michel Santos

Tek Bahadur Limbu wrote in the last message:

 diskd indeed seems to fail under load especially when approaching
 200/300 requests per second.

are you sure these numbers are correct? where do you get them from?


 It causes Squid to crash and restart automatically. Though, the side
 effects are not noticed to the causal user, it prevents the cache from
 stabilizing in the first place.



in the first place, diskd does not cause the automatic restart ;) that is
RunCache which does it, and I also do not believe that diskd causes squid
to crash


if the crash really happens then there is something wrong on your machine

if the problem is the load and your computer can not handle the load, then
it first gets slow or you run out of memory and then squid may crash, but
you should look at what is really wrong there before blaming the fs
type you use


 If I opt to use aufs, will the following compilations work?

 '--enable-async-io' '--with-pthreads'


with-pthreads is not necessary

but certainly this switch is kind of strange for FreeBSD, since you need to
remap the process threads to kernel threads in order to get it right
(faster); both thread implementations should then work well with kqueue,
which is also correctly detected by configure when available
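
So for aufs a configure line along these lines should be enough (the prefix
and extra options are only examples, keep whatever else you already build
with):

  ./configure --prefix=/usr/local/squid \
              --enable-storeio=aufs,ufs \
              --enable-removal-policies=heap,lru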


Michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] Opinions sought on best storage type for FreeBSD

2007-08-11 Thread Michel Santos

Henrik Nordstrom wrote in the last message:
 On lör, 2007-08-11 at 15:10 +0545, Tek Bahadur Limbu wrote:

 As far as I know and seen with my limited experience, diskd seems good
 for BSD boxes. But I guess I have to try other alternatives too.

 If I opt to use aufs, will the following compilations work?

 '--enable-async-io' '--with-pthreads'

 --enable-storeio=aufs

 pthreads is automatically enabled, so no need to specify that. Won't
 hurt if you do however.

 If you are on FreeBSD then remember to configure FreeBSD to use kernel
 threads for Squid or it won't work well. See another user response in
 this thread.


Hi
not sure; both thread implementations work well, but kernel threads are
supposed to be faster. In order to notice it you need some real load on the
machine, and I am not sure if there is a difference at all on a UP machine

Michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] Opinions sought on best storage type for FreeBSD

2007-08-11 Thread Michel Santos

Adrian Chadd wrote in the last message:
 On Sat, Aug 11, 2007, Tek Bahadur Limbu wrote:

 Or simply, what is the best compilation parameters to use on a
 Linux/Unix machine if I want to use aufs?


 coss at first seemed a good choice but it's long rebuilding process is
 not suitable for production use.

 We know how to fix COSS. Time (ie, funding) is the only issue here.
 We'd love to work with any groups who would be willing to help fund
 an effort to mature Squid's storage code (post Squid-3.0, which is
 almost ready from what I hear) into something 21st-century compliant.


nice words, but IMO the fs choice is kind of fine-tuning, because the
difference between the current competitors aufs and coss is not sooo big

but what this year and next (and not this century) are about is SMP; soon,
and I guess very soon, you might not be able to buy single cores anymore.
When I said a year ago that everybody would run quad-cores in 07/08 I got
laughed at, but look at the market, that is what it is, so I guess making
squid SMP friendly is way more important. But who knows, maybe we soon have
globally unlimited bandwidth and don't need caches anymore :) which surely
will happen first if you keep thinking in centuries in the computer
business :)

please don't hate me, nothing personal ok, you just picked the wrong word
to compare with and I couldn't hold it back :)


michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] Opinions sought on best storage type for FreeBSD

2007-08-11 Thread Michel Santos

Adrian Chadd wrote in the last message:
 On Sat, Aug 11, 2007, Michel Santos wrote:

 nice words but IMO the fs usage is kind of fine-tuning because the
 difference between the actual competitors aufs and coss is not sooo big

 Yeah, but the difference between AUFS/COSS and whats actually possible and
 done in the commercial world - and documented in plenty of thesis papers
 out there - is a -lot-. I'm talking double, triple the small object
 (256k)
 size.


I must admit I can't speak to that because I never could really test it,
but I do not convince myself easily by reading papers.




 And I'd love to continue work on the test SMP proxy code I've been working
 on
 on the side here, but to continue that I need money. Its easy to code this
 stuff when you're working for someone who is happy to pay you to do open
 source
 stuff that benefits them, but I'm doing this for fun. Maybe I shouldn't,
 I ain't getting paid (much.) There's only so many 45 minute bus trips
 to/from university a day atm.

 There's plenty of examples of multi-threaded network servers out there.
 Whats stopping Squid from taking advantage of that is about 6 months of
 concentrated work from some people who have the clue and time. None of
 us with the clue have any time, and noone else has stepped up to the
 plate and offered assistance (time, money, etc.) We'd love to work on it
 but the question is so how do we eat.


I agree, completely understandable

but look, "easy to code" and 6 months of concentrated work are not really
the same thing ... ;)




Michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] Opinions sought on best storage type for FreeBSD

2007-08-11 Thread Michel Santos

Adrian Chadd wrote in the last message:
 On Sat, Aug 11, 2007, Michel Santos wrote:

 I must admit I can't talk in there because I never could test it really
 but I do not convinve myself easy by reading papers.

 Good! Thats why you take the papers and try to duplicate/build from them
 to convince yourself. Google for UCFS web cache, should bring out one
 of the papers in question. 2 to 3 times the small object performance is
 what people are seeing in COSS under certain circumstances as it
 eliminates
 the multiple seeks required in the worst cases for normal UNIX
 filesystems.
 It also reduces write overhead and fragmentation issues by writing in
 larger chunks. Issuing a 512 byte write vs a 16k write to the same sector
 of disk is pretty much an equivalent operation in terms of time taken.

 The stuff to do, basically, involves:

 * planning out better object memory cache management;
 * sorting out a smarter method of writing stuff to disk - ie, exploit
   locality;

 * don't write everything cachable to disk! only write stuff that has
   a good chance of being read again;

there is a good chance of being hit by a car when sleeping in the middle
of a highway, just as there is a chance of not being hit at all :)

well, that was my knowledge about chances, but there are not so many options
here: either you are a hell of a forecaster or you create an algorithm, kind
of inverting the usage of the current or other cache policies, applying them
before caching the objects instead of controlling the replacement and aging



 * do your IO ops in larger chunks than 512 bytes - I think the sweet
   spot from my own semi-scientific tests is ~64k but what I needed to
   do is try to detect the physical geometry of the disk and make sure
   my write sizes match physical sector sizes (ie, so my X kbyte writes
   aren't kicking off a seek to an adjacent sector, and another rotation
   to reposition the head where it needs to be.)
 * handle larger objects / partial object replies better


well, the theory behind coss is quite clear


 I think I've said most/all of that before. We've identified what needs
 doing - what we lack is people to do it and/or to fund it. In fact,
 I'd be happy to do all the work as long as I had money available once
 it was done (so I'm not paid for the hope that the work is done.)
 Trouble is, we're coders, not sales/marketing people, and sometimes
 I think thats sorely what the Squid project needs to get itself back
 into the full swing of things.


not sure; squid has been on top for a long time now, and probably there is
no other interesting project because caching is not so hot anymore.
Bandwidth is cheap in comparison to 10 years ago and the big thing today is
P2P, so I mean it is probably hard to find a sponsor with good money. The
most wanted features are proxying and acls, not caching, so I guess even if
there are always geeks like us who simply like the challenge of getting a
bit more out of it, most people do not know what this is about and do not
feel nor see the difference between ufs and coss or whatever. To be
realistic, I understand that nobody cares about diskd, just as nobody
really cares about coss, because it would be only for you or for me and a
few more; and so Henrik works on aufs because he likes it, but in the end
it is also only for him and some others. And this handful of people do not
have money to spend on coss/aufs/diskd. And probably it is not worth it:
when the typical users have an 8Mb/s adsl for 40 bucks, why should they
spend money on squid's fs development?



Michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] Opinions sought on best storage type for FreeBSD

2007-08-11 Thread Michel Santos

Tek Bahadur Limbu wrote in the last message:
 Michel Santos wrote:
 Tek Bahadur Limbu disse na ultima mensagem:
 diskd indeed seems to fail under load especially when approaching
 200/300 requests per second.

 are you sure this numbers are correct? where do you get them from?

 Hi Michel,

 I am getting these numbers from one of my busy proxy server. At peak
 times, I get anywhere from 150-200 requests per second. However to cross
 the 300 mark, it only happens when 1 or 2 of my other proxy servers go
 down and then our load balancer redirects web requests to whichever
 proxy server is up and functioning.


so you get 12000/min, right? But when I asked where you get them from, I
wanted to know how you count them: snmp? cachemgr?

how much memory does the server have installed?



 I guess that I may have to really commit my time and resources to find
 out if other factors could be causing this to happen.

 Haven't you faced any automatic restart of your Squid process. Does that
 mean that your Squid process uptime is months?


never dies on its own; my problems are power problems and loose human
endpoints (fingers) :)

what is your kern.maxdsiz value?

How much memory is squid using just before it crashes? is it using swap?
what does ipcs tell you then, or under load?


 They have been in production for years and each of their average uptime
 is about 120 days. As far as the load is concerned, my CPU usage never
 goes above 30-40% but sometimes my memory usage crosses 80% of it's
 capacity though.


what hardware is it? Which FreeBSD version do you run? And how is your
layout: standalone proxy server, gateways, or a cache hierarchy?


 By the way, do you have some optimal settings which can be applied to
 diskd? Below are some values I use:

 options SHMSEG=128
 options SHMMNI=256
 options SHMMAX=50331648 # max shared memory segment size (bytes)
 options SHMALL=16384# max amount of shared memory (pages)
 options MSGMNB=16384# max # of bytes in a queue
 options MSGMNI=48   # number of message queue identifiers
 options MSGSEG=768  # number of message segments
 options MSGSSZ=64   # size of a message segment
 options MSGTQL=4096 # max messages in system

 Correct me where necessary.



that does not say so much; better to send what comes from sysctl kern.ipc

anyway, you probably should not limit SHMMAX directly but set SHMMAXPGS, so
that SHMMAX is calculated correctly, and there is no need to compile them
in, those are sysctl tunables

I believe a wrong value would not make your server crash; worst case your
msg queues get stuck, which would then put squid's disk r/w access on hold,
but not crash it. Well, let's say I never saw a server crashing from ipc
congestion, the client simply stops communicating
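
As a quick check, something like this shows what the running kernel really
allows and uses, as opposed to the kernel config options (the sysctl.conf
value is only a placeholder, take yours from what ipcs shows):

  sysctl kern.ipc     # the limits the running kernel actually has
  ipcs -a             # how much shared memory / how many message queues diskd really uses

  # /etc/sysctl.conf, runtime tunable (placeholder value)
  kern.ipc.shmall=32768    # max shared memory, in pages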

Michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] Opinions sought on best storage type for FreeBSD

2007-08-10 Thread Michel Santos

Henrik Nordstrom wrote in the last message:
 On tor, 2007-08-09 at 10:18 -0700, Nicole wrote:
 As some have pointed out, it's a shame diskd is horked, since it seemed
 to be nice and fast.

 Well, it's been broken for several years now, an no one has been willing
 to commit any resources to get it fixed.


please be a little bit more specific about committing resources, what
exactly do you mean?


what is it that you agree is broken, beyond the shutdown issue?


 However, since I have not heard of any progress on fixing
 the bug, I am curious what others have been using or prefer as their
 alternative to diskd and why?

 aufs is seen as the best alternative currently, with FreeBSD also
 supporting kernel threads.

 Note: running aufs without kernel threads is a dead end and won't
 perform well, you might just as well run with the ufs cache_dir type
 then.


ok, you mean thr (kernel threads) instead of pthreads, right?


Michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] endless growing swap.state after reboot

2007-08-10 Thread Michel Santos

Henrik Nordstrom wrote in the last message:

 then I start squid with one of the above versions and squid starts
 rebuilding swap.state

 when it starts failing we get what you want?

 That you try the same again, by shutting down Squid, then clear the
 cache and restore the backed up swap.state files and start Squid again.
 Hopefully the problem will manifest itself again, if so then there is an
 frozen state which produces the problem, and which can be debugged
 further to isolate what goes wrong.


just to get it straight

when it fails I shut squid down again

I wipe out the cache_dirs and recreate them?

I copy the former original (first) backup swap.state back in place

I start squid with this former swap.state but empty cache_dirs

Is it that exactly?

swap.state should shrink at this stage, eliminating its references when it
does not find the files, right?
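
In shell terms that would be roughly (paths are only examples from my
layout, one cache_dir /c/c1 and squid under /usr/local/squid):

  /usr/local/squid/sbin/squid -k shutdown    # stop squid once the failure shows up
  rm -rf /c/c1/*                             # wipe the cache_dir contents
  /usr/local/squid/sbin/squid -z             # recreate the empty cache_dir structure
  cp /backup/swap.state /c/c1/swap.state     # put the first backed-up swap.state back in place
  /usr/local/squid/sbin/squid                # start squid against the old index and empty dirs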



Michel


...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] endless growing swap.state after reboot

2007-08-10 Thread Michel Santos

Henrik Nordstrom wrote in the last message:
 On tor, 2007-08-09 at 14:25 -0300, Michel Santos wrote:

 ok the first is easy, the latter you mean what, you want the file?

 Unfortunately the file is a bit platform dependent, but I want you to
 hold on to the file and check if the problem can be reproduced by simply
 placing it back in the cache dir.


so let's set up the scenario

I shut down squid, letting rc.shutdown kill the squid process before it
has time to close the cache_dirs correctly

then I back up swap.state

or do I back up before shutting down?

then I start squid with one of the above versions and squid starts
rebuilding swap.state

when it starts failing, we have what you want?


Michel

...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] Opinions sought on best storage type for FreeBSD

2007-08-10 Thread Michel Santos

Alexandre Correa wrote in the last message:
 after reading this email, i switched from aufs to diskd to see
 performance of them under high load ..

 with aufs, squid never used more than 10% of cpu and response time is
 very low (5ms to 150ms).. with diskd cpu usage goes to 50% +- and
 median response time up to 900ms !!

 i´m running CentOS 5.0 with kernel 2.6.22, quad opteron 64 bits with
 4gb ram and hd are SAS 15.000 rpm



don't know anything about CentOS, but when a quad Opteron does not handle
the load you obviously have something wrong in your config, either squid
or OS settings


Michel



...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] Opinions sought on best storage type for FreeBSD

2007-08-10 Thread Michel Santos

Henrik Nordstrom wrote in the last message:
 On fre, 2007-08-10 at 06:50 -0300, Michel Santos wrote:

 what is what you agree to be broken beyond the shutdown issue?

 Bug #761 unstable under high load when using diskd cache_dir

 diskd falls over under load due to internal design problems in how it
 maintains callback queues. Duane fixed most of it quite recently so it's
 no longer near as bad as it has been, but there is still stuff to do.
 The problems was first reported 5 years ago.


indeed the cpu load went down dramatically after these changes; on many
machines I gained more than 30-40%, or better said, 70-80% cpu load fell to
30-40% overall. That was very good

but I could get around it before, and still do, by using at least 2 or
better 4 or more diskd processes
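
i.e. one diskd cache_dir line per helper process in squid.conf, something
like this (paths and sizes are only examples from my layout):

  # each diskd cache_dir starts its own diskd helper process
  cache_dir diskd /c/c1 20000 16 256
  cache_dir diskd /c/c2 20000 16 256
  cache_dir diskd /c/c3 20000 16 256
  cache_dir diskd /c/c4 20000 16 256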

 ok you mean threads instead of pthreads right?

 I don't know the FreeBSD thread packages very well to call them by name.
 I only know there is two posix threads implementations. One userspace
 which is what has been around for a long time and can not support aufs
 with any reasonable performance, and a new one in more current releases
 using kernel threads which is quite capable of supporting aufs.

it is pthread versus thr (kernel threads), and for whoever is interested,
it's easy to do on 6.2 by creating /etc/libmap.conf, or adding to it if it
already exists; no recompiling is necessary

[/usr/local/squid/sbin/squid]
libpthread.so.2 libthr.so.2
libpthread.so   libthr.so



Michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.



