[squid-users] Thanks for 3.0-STABLE14/15

2009-05-21 Thread George Herbert
I'll see if I can get clearance to post a graph or two, but "a major
cellphone company which chooses not to identify itself" would like to
thank the developers for the 3.0-STABLE14 release.

After a year of builds which did unfortunate things to themselves
every hour or so, 3.0-STABLE14 passed testing without a glitch - the
only hiccup was that it actually handled a full week at the largest
test load I could generate and ran the logs directory out of
filesystem space.  We pushed it into production starting Monday.
Server upsets under load and server VM usage went from an hourly
upset cycle per server to not a single one as we upgraded; the MRTG
graphs went from spaghetti to a visible sawtooth after the first set
of upgrades, showed only a couple of exceptions after the second set
(which covered all but a couple of servers), and then flatlined to
real stability for the whole environment after the last stragglers
today.

25k URLs/sec sustained and they're all running happy as can be.

Thanks to everyone!


--
-george william herbert
george.herb...@gmail.com


Re: [squid-users] Thanks for 3.0-STABLE14/15

2009-05-22 Thread George Herbert
On Thu, May 21, 2009 at 2:57 AM, Gavin McCullagh  wrote:
> On Thu, 21 May 2009, Travel Factory S.r.l. wrote:
>
>> it would be nice to know your configuration (cpu/ram/disk/heap/etc etc etc)
>
> In particular, if you could give data for this page...
>
>        http://wiki.squid-cache.org/KnowledgeBase/Benchmarks
>
> Gavin


I'm looking at clearance for releasing the details!



-- 
-george william herbert
george.herb...@gmail.com


Re: [squid-users] Issues Compiling

2009-05-27 Thread George Herbert
On Wed, May 27, 2009 at 11:22 AM, Juan C. Crespo R.
 wrote:
> Guys
>
>   I have this issue when I  try to make it (build)
>
>
> main.cc:1091: warning: comparison between signed and unsigned integer
> expressions

What build are you trying to compile, and on what operating system?


-- 
-george william herbert
george.herb...@gmail.com


Re: [squid-users] Fw: squid crashes after running for a while

2009-05-27 Thread George Herbert
On Wed, May 27, 2009 at 4:08 AM, goody goody  wrote:
>
> In addition to the previous email, I am also receiving the following
> messages in cache.log.
>
> comm_old_accept: FD 14: (53) Software caused
>> connection abort
>
> httpAccept: FD 14: accept failure: (53) Software
>> caused connection abort
>
> My current kernel entries are as follows; please also suggest whether I
> still need to increase them.  I have 2 GB RAM.
>
> kern.ipc.nmbclusters=32768
> kern.ipc.somaxconn=1024
> kern.maxfiles=32768
> kern.maxproc=8192
>
> Thanks,
> --- On Wed, 5/27/09, goody goody  wrote:
>
>> From: goody goody 
>> Subject: squid crashes after running for a while
>> To: squid-users@squid-cache.org
>> Date: Wednesday, May 27, 2009, 1:06 PM
>> Dear members,
>> I have set up a proxy with squid 3.0.STABLE14 on FreeBSD 7.
>>
>> My proxy is behaving abnormally: it runs for a few hours and then the
>> squid process closes unexpectedly (a message is displayed).  When I
>> restart squid it fails again until I restart the machine; after a
>> reboot it works well for a period, then does the same thing.  I am
>> unable to identify the problem.  My cache log gives the following
>> messages.
>>
>> *
>> 2009/05/27 01:08:56| UFSSwapDir::doubleCheck: ENTRY SIZE:
>> 3342, FILE SIZE: 389
>> 2009/05/27 01:08:56| UFSSwapDir::dumpEntry: FILENO
>> 0004
>> 2009/05/27 01:08:56| UFSSwapDir::dumpEntry: PATH
>> /cache1/00/00/0004
>> 2009/05/27 01:08:56| StoreEntry->key:
>> B016EFEF1F5BDD7F96CC09CF4F64B217
>> 2009/05/27 01:08:56| StoreEntry->next: 0
>> 2009/05/27 01:08:56| StoreEntry->mem_obj: 0
>> 2009/05/27 01:08:56| StoreEntry->timestamp: 1243365627
>> 2009/05/27 01:08:56| StoreEntry->lastref: 1243365627
>> 2009/05/27 01:08:56| StoreEntry->expires: -1
>> 2009/05/27 01:08:56| StoreEntry->lastmod: 1221873935
>> 2009/05/27 01:08:56| StoreEntry->swap_file_sz: 3342
>> 2009/05/27 01:08:56| StoreEntry->refcount: 1
>> 2009/05/27 01:08:56| StoreEntry->flags:
>> CACHABLE,DISPATCHED
>> 2009/05/27 01:08:56| StoreEntry->swap_dirn: 0
>> 2009/05/27 01:08:56| StoreEntry->swap_filen: 4
>> 2009/05/27 01:08:56| StoreEntry->lock_count: 0
>> 2009/05/27 01:08:56| StoreEntry->mem_status: 0
>> 2009/05/27 01:08:56| StoreEntry->ping_status: 0
>> 2009/05/27 01:08:56| StoreEntry->store_status: 0
>> 2009/05/27 01:08:56| StoreEntry->swap_status: 2
>> 2009/05/27 01:08:56|   Completed Validation
>> Procedure
>> 2009/05/27 01:08:56|   Validated 97720
>> Entries
>> 2009/05/27 01:08:56|   store_swap_size =
>> 776190
>> 2009/05/27 01:08:56| assertion failed:
>> store_rebuild.cc:120: "store_errors == 0"
>> 2009/05/27 01:08:59| Starting Squid Cache version
>> 3.0.STABLE14 for i386-unknown-freebsd7.0...
>>
>> *
>>
>> df -i results
>>
>> Filesystem    1K-blocks     Used     Avail Capacity  iused    ifree %iused  Mounted on
>> /dev/da0s1a    10154158   246910   9094916     3%     2763  1316147    0%   /
>> devfs                 1        1         0   100%        0        0  100%   /dev
>> /dev/da0s1f    76168552   837956  69237112     1%    56201  9788533    1%   /cache1
>> /dev/da0s1g    76168552        4  70075064     0%        2  9844732    0%   /cache2
>> /dev/da0s1e    40622796  2540572  34832402     7%   312023  4940071    6%   /usr
>> /dev/da0s1d    60931274   225310  55831464     0%      337  7889581    0%   /var
>>
>> I have specified the cache size: cache_dir diskd /cache1 6 16 256 Q1=72 Q2=64
>>
>>
>> I don't know what to do, please help me out.
>> An early response would be appreciated.
>> Regards,
>> .Goody.


Have you turned on the core dump functionality and set a core dump directory?

Using gdb to trace back through the core dump can help diagnose the
specific problem in more detail.
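A rough sketch of enabling that (the directory path is just an example):

  # in squid.conf, point core dumps at a partition with space:
  #   coredump_dir /var/spool/squid-cores
  # and remove the core size limit in whatever environment starts squid,
  # e.g. in the init script before squid is launched:
  ulimit -c unlimited

After the next crash there should be a core file in that directory to feed to gdb.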


-- 
-george william herbert
george.herb...@gmail.com


Re: [squid-users] HTTP/0.0?

2009-06-10 Thread George Herbert
The 400 code makes sense.  The HTTP/0.0 in the log (vs 1.0) doesn't, to me...

-george

On Wed, Jun 10, 2009 at 6:30 PM, Chris Robertson wrote:
> Tech W. wrote:
>>
>> Hello,
>>
>> I telnet to localhost's port 80 (squid-3.0.15 is running on this port)
>> and send the command "GET / HTTP/1.0" followed by two "\n\n":
>>
>> # telnet localhost 80
>> GET / HTTP/1.0
>>
>>
>> Then I watched access.log, found this info:
>>
>> 127.0.0.1 - - [09/Jun/2009:12:36:46 +0800] "GET / HTTP/0.0" 400 1209
>> NONE:NONE
>>
>> (squid has set emulate_httpd_log on)
>>
>> I'm totally confused - why did squid say it was "HTTP/0.0" and return a 400
>> code?
>>
>
> The 400 code is due to the URL being invalid.  Try "GET http://google.com/
> HTTP/1.0" instead.
>
>> Thanks.
>
> Chris
>



-- 
-george william herbert
george.herb...@gmail.com


Re: [squid-users] How to setup squid proxy to run in fail-over mode

2009-06-15 Thread George Herbert
Most of the suggestions so far have missed the mark.

Squid - like an Apache web server etc - is essentially stateless
(transactions in progress don't make permanent changes).  You can run
any number of web servers or Squid servers in parallel with requests
being freely responded to by any of them.  If you set them up as a
cache peering group, the cache hit rate issues with multiple separate
servers are significantly reduced.
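
As a minimal sketch, sibling peering in squid.conf looks roughly like this
(hostnames and ports are placeholders; each box lists the other members):

  cache_peer proxy2.example.com sibling 3128 3130 proxy-only
  cache_peer proxy3.example.com sibling 3128 3130 proxy-only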

High Availability for servers that can run in parallel in this manner
is almost always done by putting some sort of load balancer out in
front, not using clustering software to "fail over" a service between
two nodes.

HA software makes little sense in this case.

There are various open source HTTP load balancer solutions out there,
or you can buy a commercial load balancer if you have higher
bandwidth requirements.  Most of them can cluster, giving you HA at
the load balancer level.

Multiple DNS A records don't necessarily work - many clients will
try the first A record they get and, if they get no response, assume
the server is down.  If you know that all the client software behind
your squids is properly able to try the second or third A record,
then that's safe - but test it first.

One can use Linux-HA or another clustering solution to create a
virtual IP address that can move from server to server, so that you
don't need a load balancer: if server A goes down, the IP moves to
server B.  But it's a poor match for this application.


-george william herbert
george.herb...@gmail.com


On Mon, Jun 15, 2009 at 5:43 AM, abdul sami wrote:
> Thanks to all for replies.
>
> Sorry, I didn't mention the platform I am using to run squid on,
> which is FreeBSD 7.
>
> I have visited the linux-ha site, where it says the software is
> supported for FreeBSD too, but there is no distribution for FreeBSD, so
> can you tell me which distribution I can use for FreeBSD 7?
>
> Thanks & Regards,
> A Sami
>
> On Mon, Jun 15, 2009 at 4:07 PM, Muhammad
> Sharfuddin wrote:
>> just a question
>>
>>>2. Use an HA solution such as Ultramonkey3. Here you could do
>>>Active-Active.
>> Why Ultramonkey3.. why not HA from http://www.linux-ha.org/
>>
>> -Sharfuddin
>>
>> A PC is like a aircondition. If you open Windows it just don't funktion
>> properly anymore
>>
>> On Mon, 2009-06-15 at 12:12 +0200, Luis Daniel Lucio Quiroz wrote:
>>> There are 2 ways as far as I know to do this possible:
>>>
>>> 1. Use the WPAD protocol: let's say PROXY squid1; PROXY squid2 (this is
>>> fail-over)
>>> 2. Use an HA solution such as Ultramonkey3. Here you could do Active-Active.
>>>
>>> Kind regards,
>>>
>>> LD
>>> Le lundi 15 juin 2009 11:09:28, Sagar Navalkar a écrit :
>>> > Hey Remy,
>>> >
>>> > The DNS server does not determine which server is down; however, if it is
>>> > unable to resolve the 1st entry, it will automatically go down to the 2nd
>>> > entry.
>>> >
>>> > Regards,
>>> >
>>> > Sagar Navalkar
>>> > Team Leader
>>> >
>>> >
>>> > -Original Message-
>>> > From: Mario Remy Almeida [mailto:malme...@isaaviation.ae]
>>> > Sent: Monday, June 15, 2009 1:36 PM
>>> > To: Sagar Navalkar
>>> > Cc: squid-users@squid-cache.org; 'abdul sami'
>>> > Subject: RE: [squid-users] How to setup squid proxy to run in fail-over
>>> > mode
>>> >
>>> > Hi Sagar,
>>> >
>>> > Just a Question?
>>> >
>>> > How can a DNS server determine that the primary server is down and it
>>> > should resolve the secondary server IP?
>>> >
>>> > //Remy
>>> >
>>> > On Mon, 2009-06-15 at 11:21 +0530, Sagar Navalkar wrote:
>>> > > Hi Abdul,
>>> > >
>>> > > Please try to enter 2 different IPs in the DNS 
>>> > >
>>> > > 10.xxx.yyy.zz1 (proxyA) as primary (proxyA-Name should be same on both
>>> > > the servers.)
>>> > > 10.xxx.yyy.zz2 (proxyA) as secondary.
>>> > >
>>> > > Start squid services on both the servers (Primary & Secondary)
>>> > >
>>> > > If Primary server fails, the DNS will resolve secondary IP for proxyA & the
>>> > > squid on second server will kick in automatically.
>>> > >
>>> > > Hope am able to explain it properly.
>>> > >
>>> > > Regards,
>>> > >
>>> > > Sagar Navalkar
>>> > >
>>> > >
>>> > > -Original Message-
>>> > > From: abdul sami [mailto:sami.me...@gmail.com]
>>> > > Sent: Monday, June 15, 2009 11:17 AM
>>> > > To: squid-users@squid-cache.org
>>> > > Subject: [squid-users] How to setup squid proxy to run in fail-over mode
>>> > >
>>> > > Dear all,
>>> > >
>>> > > Now that i have setup a proxy server, as a next step i want to run it
>>> > > in fail-over high availability mode, so that if one proxy is down due
>>> > > to any reason, second proxy should automatically be up and start
>>> > > serving requests.
>>> > >
>>> > > any help in shape of articles/steps would be highly appreciated.
>>> > >
>>> > > Thanks and regards,
>>> > >
>>> > > A Sami
>>> >

Re: [squid-users] Is something out there bamboozling Squid?

2009-06-18 Thread George Herbert
On Wed, Jun 17, 2009 at 9:31 PM, Amos Jeffries wrote:
> On Wed, 17 Jun 2009 20:19:49 -0600, Brett Glass
> 
> wrote:
>> Everyone:
>>
>> Just this past week, our Squid cache has become balky, with long
>> page loads from some sites and timeouts or partial page loads from
>> others. (It's gotten to the point where performance is better
>> without the cache.) I thought that it was just us, but another
>> system administrator in town has complained of the same symptom:
>> weird delays through the cache and none without it.
>
> Time to run through the checklist. What version of squid?
> What do network times and loads look like? hardware access time for the
> disks etc?
> Is one of the routers somewhere dropping packets?
>
> And some weird ones that are becoming issues:
>  has your upstream started interception proxy?
>  are they doing carrier NAT on you?
>
>>
>> Is there some popular site out there which has started doing
>> something that ties Squid in knots?
>
> You're the only one who can really answer that. What shift in destination
> sites have you noticed?

I've noticed an increase in disk cache corruption incidents with Squid
3.0-STABLE14 on CentOS in large scale production, this week.

I'm seeing between 8 and 15% of my systems getting screwed up each day.

I have not identified any URLs that behave differently.  Our
destinations out on the internet are statistically consistent and
widely diverse - we're proxying / caching for mobile devices.

I have no clear indication that the problem source is external; it
could be something internal.


-- 
-george william herbert
george.herb...@gmail.com


Re: [squid-users] how to capture https transactions

2009-07-01 Thread George Herbert
On Wed, Jul 1, 2009 at 6:13 PM, Amos Jeffries wrote:
> On Wed, 1 Jul 2009 20:55:06 -0400, Fulko Hew  wrote:
>> I'm new to squid, and I thought I could use it as a proxy to detect
>> transactions
>> that don't succeed and return a page to the browser that would display
>> an error page that re-submitted the original request (again) say 15
> seconds
>> later.  (I want to use this to hide network and server failure from
>> end users at a kiosk.)
>>
>>
>> I've figured out how to do most of this for http transactions, but my
>> real target uses https, and when I look at the squid logs I see a
>> transaction called CONNECT ... DIRECT ...
>>
>> and these don't seem to go through, or at the very least it seems as
>> though the connections are not proxied, and hence DNS resolution and
>> connection failures aren't captured and don't result in squid error
>> pages returned to the browser.
>
> Close. For https:// the browser is making a regular HTTP request, wrapped in
> SSL encryption. Then that itself is wrapped again inside a CONNECT.
>
> Squid just opens a CONNECT tunnel and shovels the bytes through. The SSL
> connection is made inside the tunnel direct for client to server, and the
> HTTPS stuff happens without Squid.
>
> IIRC there was some problem found with browsers displaying any custom
> response to a CONNECT failure. You want to look at the "deny_info
> ERR_CONNECT_FAIL" page replacement or such.
>
>>
>> Is this actually possible, and if so... what directives should I be
>> looking at in the config file?
>
> Squid 3.1 provides a SslBump feature to unwrap the CONNECT and proxy the
> SSL portions. But decrypting the SSL link mid-way is called a man-in-middle
> security attack. The browser security pops up a warning dialog box to the
> users every time this happens. I would not think this will be popular or
> good for a kiosk situation.

I don't know if Squid knows how to do this (haven't checked), but
other load balancers, accelerators, and firewalls can sometimes have
the site SSL / https keys installed to allow them to interact with
https content going back and forth.  There's also an ethereal /
wireshark feature that lets you provide your site key so it can
decrypt that traffic.

That only works if:

a) you own both ends of the link (not clear from the first email),
b) your software supports it, and
c) you trust your proxies with your site keys.
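
If you do want to try the Squid 3.1 SslBump route Amos describes, the
configuration is roughly as follows (certificate path is a placeholder, and
the build needs --enable-ssl); as Amos notes, clients will see certificate
warnings unless they trust that certificate:

  http_port 3128 sslBump cert=/usr/local/squid/etc/proxy.pem
  ssl_bump allow all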


-- 
-george william herbert
george.herb...@gmail.com


Re: [squid-users] how much RAM for squid proxy

2009-07-28 Thread George Herbert
On Tue, Jul 28, 2009 at 10:49 AM, Chris Robertson wrote:
> Angela Williams wrote:
>>
>> Hi!
>> On Tuesday 28 July 2009, qwertyjjj wrote:
>>
>>>
>>> How much RAM would be required to run Squid Proxy for a number of users?
>>> I realise there is no exact answer but a rough guide?
>>> For example, I have a linux proxy server with 100Mbit mainly
>>> retransmitting
>>> and caching running video (I assume about 512kbps).
>>> I'm guessing this could support up to 100 users or so but would 1GB RAM
>>> be
>>> enough?
>>> Server would be something like:
>>> # CPU: Athlon 3800+
>>> # CPU Details: 2 x 2.0 GHz
>>> # RAM: 1 GB RAM
>>> # Hard Disks: 2 x 160 GB (RAID 1 Software
>>>
>>
>> The more RAM the better! I would go for at least 2 GB.
>>
>
> Seconded.
>
> The stats you gave would likely do fine for a group of 100 (I've done more
> with less), but any "extra" memory you can supply will be given to caching
> disk accesses and providing the kernel with buffer space.

Thirded - we run our (currently 3.0.15, going to 3.0.17 real soon now)
servers with 4 GB RAM.  The Squid process as we have it tuned and
compiled never needs the extra 2 GB, but system performance, including
disk caching, is noticeably better with 4 GB than with 2.


-- 
-george william herbert
george.herb...@gmail.com


Re: [squid-users] Re: Antwort: Re: [squid-users] Antwort: [squid-users] Squid 3.0.STABLE17 is available

2009-07-28 Thread George Herbert
Cool.  Is there going to be a STABLE17A or something, or do we have to
hand-patch for now?

Thanks!

On Tue, Jul 28, 2009 at 12:41 AM, Amos Jeffries wrote:
> martin.pichlma...@continental-corporation.com wrote:
>>
>> Thank you Amos,
>>
>> your patch did the trick, it now works smoothly.
>> I didn't have time to test yesterday, therefore sorry for my late
>> response.
>>
>> Martin
>>
>>
>>
>>
>> Amos Jeffries  27.07.2009 17:00
>>
>> To: martin.pichlma...@continental-corporation.com
>> Cc: Squid 
>> Subject: Re: [squid-users] Antwort: [squid-users] Squid 3.0.STABLE17 is available
>>
>>
>>
>>
>>
>>
>> Amos Jeffries wrote:
>>>
>>> martin.pichlma...@continental-corporation.com wrote:

>>>> Hello all,
>>>>
>>>> I just compiled squid-3.0.STABLE17 and it compiled fine.
>>>> Unfortunately I now get many warning messages in cache.log (still
>>>> testing, not yet in productive environment):
>>>> 2009/07/27 15:11:26| HttpMsg.cc(157) first line of HTTP message is invalid
>>>> 2009/07/27 15:11:28| HttpMsg.cc(157) first line of HTTP message is invalid
>>>> 2009/07/27 15:11:37| HttpMsg.cc(157) first line of HTTP message is invalid
>>>> 2009/07/27 15:11:40| HttpMsg.cc(157) first line of HTTP message is invalid
>>>> 2009/07/27 15:11:41| HttpMsg.cc(157) first line of HTTP message is invalid
>>>>
>>>> It seems that nearly every URL I try to access gives that warning
>>>> message, for example www.arin.net, www.ripe.net, www.hp.com,
>>>> www.arin.net, even www.squid-cache.org and so on.
>>>> Are nearly all pages in the internet invalid or is the if-query or
>>>> rather the function incorrect?
>>>> The lines that produce the above warning are new in STABLE17...
>>>>
>>>> HttpMsg.cc -- lines 156 to 160:
>>>>    if (!sanityCheckStartLine(buf, hdr_len, error)) {
>>>>        debugs(58,1, HERE << "first line of HTTP message is invalid");
>>>>        // NP: sanityCheck sets *error
>>>>        return false;
>>>>    }

>>> Oh dear. I missed a bit in the upgrade. Thanks.
>>> This attached patch should quieten it down to only the real errors.
>>>
>>> Amos
>>>
>>
>> Oh foey. forget that patch. It pasted badly.
>>
>> Here is the real one.
>>
>> Amos
>
> Thank you very much for the feedback.
>
> If you noticed, the pconn complaint others made earlier slipped into that
> patch too. :)
>
>
> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE17
>  Current Beta Squid 3.1.0.12
>



-- 
-george william herbert
george.herb...@gmail.com


Re: [squid-users] Hardware configuration for Squid that can handle 100 - 200 Mbps

2009-08-27 Thread George Herbert
On Thu, Aug 27, 2009 at 8:37 AM, Paul Khadra wrote:
>
> Hi,
>
> I wish to buy hardware for squid that can support internet traffic of 200
> Mbps. I have read a lot of documents on the forums but none has the
> best answer.
>
> 1- Shall I go with Intel or Opteron?
>
> 2- I can get 32 GB of memory, but will 64 GB of memory give an advantage?
>
> 3- I can get the HP DL38x series. They have 16 empty slots for hard disks. I
> can install 2 HD controllers. What is the best way to fill the hard disk
> bays while also getting the best byte hit ratio? The hard disk options
> are (SAS 146GB, 300GB or 450GB at 15K rpm, or SATA 250GB, 500GB or 1TB
> at 7200rpm).
> So assuming that budget is not a factor, and at 200 Mbps, will buying 16 x
> 500GB or 16 x 1TB disks have a good effect on the hit ratio?
>
> Note: squid will be installed over solaris.


Do you plan to use Squid to cache traffic coming from outside destined
for your own servers, or traffic from inside which is going to outside
hosts?

If you're planning to cache outside traffic coming in, how big is your
website?  How many files, how many big files, etc.?

If you're planning to cache traffic going out to arbitrary sites: I
have 50 configured (42-45 live) dual-processor, dual-core 2 GHz Opteron
Linux boxes that can do around 550 URLs/sec per server of external
traffic in test, and they do 350 URLs/sec during peak periods in
production across the farm, with a 50% cache hit rate.  If the URLs
average 100 KB each, that's around 440 megabits.  The servers have 4
disks - one for the OS, one for squid logs, and two for squid cache,
with AUFS caches on the two cache disks - and 4 GB RAM.


-- 
-george william herbert
george.herb...@gmail.com


Re: [squid-users] Squid + Trendmicro

2009-09-23 Thread George Herbert
On Wed, Sep 23, 2009 at 1:27 PM, Luis Daniel Lucio Quiroz
 wrote:
> Le lundi 7 septembre 2009 01:04:49, Amos Jeffries a écrit :
>> Luis Daniel Lucio Quiroz wrote:
>> > Hi all,
>> >
>> > Well, I have a really big problem.  We have deployed a Squid with digest
>> > auth + LDAP; it was working perfectly, but another department has installed
>> > a Trend Micro antivirus solution.
>> >
>> > The problem is that when the Trend Micro client asks squid to access a
>> > URL, it fails on the first ACL related to auth.
>> >
>> > My log is this:
>> > Request:
>> > 2009/09/05 23:56:30.829| parseHttpRequest: Request Header is
>> > Host: licenseupdate.trendmicro.com:80
>> > User-Agent: Mozilla/4.0 (compatible;MSIE 5.0; Windows 98)
>> > Accept: */*
>> > Pragma: no-cache
>> > Cache-Control: no-cache,no-store
>> > Proxy-Authorization: Digest username="avedstrend", realm="XXX",
>> > nonce="/kCjSgB4/JcCAKLZuWMA", uri
>> > ="http://licenseupdate.trendmicro.com:80/ollu/license_update.aspx?Protoco
>> >l_version=1&AC=OSVMX49VN7GTUMQ8QYQAX
>> > SGJ72QENXK&Product_Code=OS&AP_Name=OC&OS=WW&Language=E&Product_Version=R3
>> >CnAGQAyAA", response="5bd515897ca2f1
>> > 84b196eae2fafc654a"
>> > Proxy-Connection: Keep-Alive
>> > Connection: Close
>> >
>> >
>> > Acl who fails:
>> > 2009/09/05 23:56:30.832| ACLChecklist::preCheck: 0x146e1b0 checking
>> > 'http_access deny !plUexception !plU'
>> > 2009/09/05 23:56:30.832| ACLList::matches: checking !plUexception
>> > 2009/09/05 23:56:30.832| ACL::checklistMatches: checking 'plUexception'
>> > 2009/09/05 23:56:30.832| authenticateAuthenticate: no connection
>> > authentication type
>> > 2009/09/05 23:56:30.832| AuthUserRequest::AuthUserRequest: initialised
>> > request 0x189cc30
>> > 2009/09/05 23:56:30.832| authenticateValidateUser: Validated Auth_user
>> > request '0x189cc30'.
>> > 2009/09/05 23:56:30.832| authenticateValidateUser: Validated Auth_user
>> > request '0x189cc30'.
>> > FATAL: Received Segment Violation...dying.
>> >
>> > As you can see, plUexception is failing.  This acl is declared as follows:
>> >
>> > plUexception acl auth user1
>> >
>> >
>> > I wonder if anyone knows how to fix it.
>>
>> Segment violation crashes require a code fix.  What release of Squid is
>> this?
>>
>> ... and can you get any stack trace info?
>>
>> http://wiki.squid-cache.org/SquidFaq/TroubleShooting#head-7067fc0034ce967e6
>> 7911becaabb8c95a34d576d
>>
>>
>> Amos
>>
> We are about to make a stack trace, but the sysadmins are worried about
> disk space - approximately how much disk space do we need for the trace?
>
> Right now we have 44 GB free; is this enough?
>
> TIA
>


44 GB is plenty.  Each crash trace needs roughly the process's actual
memory usage at the time of the crash.  You can also turn the tracing
on and off quickly - you configure a directory for the dumps in
squid.conf, but if you set permissions on that directory so the Squid
user can't write there, nothing comes out.  Then open the permissions
up, wait for a crash or a few crashes, and lock the permissions down
again.

With roughly 44 GB of available disk space I have had very balky
versions of Squid doing multi-day capture of every crash dump without
exhausting the space.  You don't want to just turn it on and ignore it
- it will eventually fill up - but that should be multiple days' worth,
even if Squid crashes a lot.
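
Concretely, the toggle is something like this (directory and username are
placeholders):

  # squid.conf:  coredump_dir /var/squid-cores
  mkdir -p /var/squid-cores
  chown root /var/squid-cores; chmod 700 /var/squid-cores    # squid can't write: dumps off
  chown squid /var/squid-cores; chmod 700 /var/squid-cores   # squid can write: dumps on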


-- 
-george william herbert
george.herb...@gmail.com


Re: [squid-users] Managing clusters of siblings (squid2.7)

2009-09-28 Thread George Herbert
On Mon, Sep 28, 2009 at 5:24 PM, Chris Hostetter
 wrote:
>
> : The DNS way would indeed be nice. It's not possible in current Squid
> : however, if anyone is able to sponsor some work it might be doable.
>
> If I can demonstrate enough advantages in getting peering to work I might
> just be able to convince someone to think about doing that ... but that
> also assumes I can get the operations team adamant enough to protest
> having a hack where they need to run a "config_generator" script on
> every box whenever a cluster changes (because a script like that would be
> fairly straightforward to write as a one-off, it's just harder to
> implement as a general-purpose feature in squid)
>
> : With Squid-2.7 you can use the 'include' directive to split the squid.conf
> : apart and contain the unique per-machine parts in a separate file to the
> : shared parts.
>
> yeah, I'm already familiar with include, but either way I need a
> per-machine snippet to get around the "sibling to self" problem *and* a
> way to reconfig when the snippet changes (because of the cluster-changing
> problem)
>
> -Hoss


What would be really nice is a command line option and a bit of code
in the cache peer setup that recognizes the host's own IP and ignores
that entry, to make this problem just go away...

I should code that up, but not early tonight...
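
In the meantime, the "config_generator" hack is only a few lines of shell -
a rough sketch, with placeholder hostnames and the 2.7 'include' directive:

  #!/bin/sh
  # regenerate the sibling list, skipping this host
  ME=`hostname -s`
  for PEER in proxy1 proxy2 proxy3; do        # placeholder cluster members
      [ "$PEER" = "$ME" ] && continue
      echo "cache_peer $PEER.example.com sibling 3128 3130 proxy-only"
  done > /etc/squid/peers.conf
  # squid.conf has:  include /etc/squid/peers.conf
  squid -k reconfigure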


-- 
-george william herbert
george.herb...@gmail.com


Re: [squid-users] Squid 3.0STABLE19 - performance

2009-10-13 Thread George Herbert
Multiple hard disks, and spreading out Squid's logs and cache dirs
onto separate disks, helps a lot.

The big prod squid environment I was running for a while used 4 disks
- 1 OS, 1 logs, 2 separate aufs cache disks.

If you can't do that with your hardware, even adding a second hard
drive, with logs on the OS disk and the cache on the second disk, will
help some.
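
In squid.conf terms that split looks roughly like this (mount points and
sizes are placeholders):

  cache_dir aufs /cache1 100000 16 256
  cache_dir aufs /cache2 100000 16 256
  access_log /logs/squid/access.log squid
  cache_log /logs/squid/cache.log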


-george

On Tue, Oct 13, 2009 at 10:52 AM, Mariel Sebedio  wrote:
> Hello, I have a problem with the Squid performance.
>
> I have a RHEL 5.4 with Squid 3.0.STABLE19 compiled with the following
> options:  '--prefix=/usr' '--sysconfdir=/etc/squid' '--enable-snmp'
> '--enable-cache-digest' '--enable-err-language=Spanish'
> '--enable-delay-pools'
>
> The hardware of the Proxy server machine is:
>
> processor    : 0
> vendor_id    : GenuineIntel
> cpu family    : 15
> model        : 4
> model name    : Intel(R) Pentium(R) 4 CPU 3.00GHz
> stepping    : 1
> cpu MHz        : 3000.177
> cache size    : 1024 KB
> physical id    : 0
> siblings    : 2
> core id        : 0
> cpu cores    : 1
> apicid        : 0
> fdiv_bug    : no
> hlt_bug        : no
> f00f_bug    : no
> coma_bug    : no
> fpu        : yes
> fpu_exception    : yes
> cpuid level    : 5
> wp        : yes
> flags        : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat
> pse36 clflush dts
> acpi mmx fxsr sse sse2 ss ht tm pbe nx constant_tsc pni monitor ds_cpl cid
> xtpr
> bogomips    : 5999.92
>
> The filesystem information is this:
>
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/sda2              5080828   4252116    566452  89% /
> /dev/sda5            141129204   2496448 131348084   2% /var
> /dev/sda1               101086     11303     84564  12% /boot
> tmpfs                  1031764         0   1031764   0% /dev/shm
>
> The top output
>
> top - 09:50:08 up 3 days, 17:07,  1 user,  load average: 0.09, 0.06, 0.01
> Tasks:  88 total,   1 running,  87 sleeping,   0 stopped,   0 zombie
> Cpu(s):  0.5%us,  0.5%sy,  0.0%ni, 98.5%id,  0.0%wa,  0.2%hi,  0.3%si,
>  0.0%st
> Mem:   2063532k total,  2001504k used,    62028k free,   199476k buffers
> Swap:  5245212k total,        0k used,  5245212k free,  1415224k cached
>
> The amount of connections oscillates between 400-600. ([]# netstat -an |grep
> STABL |wc -l)
> I can see that when I request a page it takes a long time to appear in
> my browser, and if at that moment I look at the "Client-side
> Active Requests" option in the statistics, I can't see anything referring
> to my request.
>
> It also takes a lot of time for the request to appear in the access.log.
>
> When I request a page, it doesn't arrive in a short period of time,
> so I stop my browser and resend it, and it arrives quickly the second
> time.
>
> Is there something wrong with my squid.conf or my kernel configuration?
> Any suggestions of where to look or what to change to improve
> performance?
>
> How can I determine if it is a matter of DNS response or squid
> congestion or simply a delay related to the page requested itself?
>
> Thanks in advance for the help.
>
> My squid.conf is there:
> authenticate_cache_garbage_interval 3600 seconds
> authenticate_ttl 3600 seconds
> authenticate_ip_ttl 0 seconds
> acl all src 0.0.0.0/0.0.0.0
> acl manager proto cache_object
> acl localhost src 127.0.0.1
> acl to_localhost dst 0.0.0.0 127.0.0.0/255.0.0.0
> acl mynet src "/etc/squid/mynet" ## allow over 400 Ips
> acl snmppublic snmp_community proxy
> acl administrador src "/etc/squid/administradores" ## only 3 Ips
> acl SSL_ports port 443
> acl Safe_ports port 80 81 21 443 70 210 1025-65535 280 488 591 777
> acl CONNECT method CONNECT
> http_access Allow manager administrador
> http_access Deny manager
> http_access Deny !Safe_ports
> http_access Deny CONNECT !SSL_ports
> http_access Allow mynet
> http_access Deny all
> icp_access Allow mynet
> icp_access Deny all
> htcp_access Allow mynet
> htcp_access Deny all
> htcp_clr_access Deny all
> ident_lookup_access Deny all
> http_port 0.0.0.0:3128
> dead_peer_timeout 10 seconds
> hierarchy_stoplist cgi-bin
> hierarchy_stoplist ?
> cache_mem 33554432 bytes
> maximum_object_size_in_memory 8192 bytes
> memory_replacement_policy lru
> cache_replacement_policy lru
> cache_dir ufs /var/spool/squid/cache 8 16 256 IOEngine=Blocking
> store_dir_select_algorithm least-load
> max_open_disk_fds 0
> minimum_object_size 0 bytes
> maximum_object_size 4194304 bytes
> cache_swap_low 90
> cache_swap_high 95
> access_log /var/log/squid/access.log squid
> cache_log /var/log/squid/cache.log
> cache_store_log /var/log/squid/store.log
> logfile_rotate 9
> emulate_httpd_log off
> log_ip_on_direct on
> mime_table /etc/squid/mime.conf
> log_mime_hdrs off
> pid_filename /var/run/squid.pid
> debug_options ALL,1
> log_fqdn off
> client_netmask 255.255.255.255
> strip_query_terms on
> buffered_logs off
> ftp_user anonym...@xxx.com.ar
> ftp_list_width 32
> ftp_passive on
> ftp_sanitycheck on
> ftp_telnet_protoc

Re: [squid-users] 1024 file descriptors is good

2009-11-07 Thread George Herbert
On the other hand - used as outbound caching proxies for typical ISP
users, 1024 may be too small.  A former client of mine had it turned
up to --with-maxfd=8192.

Also note - when compiling on RHEL 5.x (and some other systems) you
need to have ulimit -n *of the configure and build environment* set to
at least the --with-maxfd value as well.

We used a wrapper on the configure which essentially did this:
--
export 'CFLAGS=-g -O -march=opteron -DNUMTHREADS=120
-DBUILDID=SQUID3.x-CUSTOMER-DATE'

echo "setting max open files hard/soft limits to 32k"
ulimit -HSn 32768
printenv
./configure (long list of configure options read from a separate file)
--

If you didn't do that, the actual maxfd limit it was built with was
less than the --with-maxfd value.
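
A quick way to confirm what the resulting binary actually got (log path is
an example):

  # the startup line in cache.log shows the usable limit:
  grep "file descriptors available" /var/log/squid/cache.log
  # or at runtime via the cache manager:
  squidclient mgr:info | grep -i "file descri"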


-george

On Tue, Oct 20, 2009 at 1:04 PM, Leonardo Rodrigues
 wrote:
> Mariel Sebedio escreveu:
>>
>> Hi, I have a RHEL 5.4 with squid3.0STABLE19 and have a performance
>> problems...
>>
>> My cache.log not report warning
>>
>> When I see in cachemgr.cgi I just have a 1024 File descriptors...
>>
>
>   If you're not getting the famous WARNING in your cache.log
>
> WARNING! Your cache is running out of filedescriptors
>
>   then you really don't need to worry about 1024 FDs. That's not a lot,
> but it's plenty for a good number of simultaneous clients.
>
>   File descriptor problems (running low on them) could give you some
> problems, but in any case you would see the warning in your logs. If you're
> not seeing it, then the problem is not file descriptor related. And if it's
> not file descriptor related, raising the limit won't change anything.
>
>   Your performance problem is somewhere else.
>
>
>
>
> --
>
>
>        Atenciosamente / Sincerily,
>        Leonardo Rodrigues
>        Solutti Tecnologia
>        http://www.solutti.com.br
>
>        Minha armadilha de SPAM, NÃO mandem email
>        gertru...@solutti.com.br
>        My SPAMTRAP, do not email it
>
>
>
>
>



-- 
-george william herbert
george.herb...@gmail.com


Re: [squid-users] Amount of Bandwidth squid can handle

2010-01-06 Thread George Herbert
To build on Shawn's comments -

I've handled forward caching peak loads of several hundred requests
per second per Squid server, with 3.0-STABLE13 through 17 and some
older 2.6 servers, as part of a smartphone company's web interface.

Servers were 4 GB dual Xeon quad core, running FreeBSD something for
the 2.6 servers and CentOS 5.2 for the 3.0 servers we were moving
towards.  There were four disks in use - OS, Logs, Cache 1, and Cache
2, with no redundancy.

We operated in larger cache groups initially but pared back to pairs
and triplets due to operational management concerns, over time.  Total
cache hit rate was slightly over 50%.

Peak benchmarking performance was over 600 hits/sec/server with a
production log sample workload; we saw about a third to half of that
as actual operational peaks (and were trying to keep a 2.0 margin
between benchmarked performance and maximum production load).  We did
100k and 1m request benchmark runs with medium-sized IP pools making
the queries, so it was pretty good load testing, though the test
harness was not optimal.


-george william herbert
george.herb...@gmail.com




On Wed, Jan 6, 2010 at 8:14 PM, Shawn Wright  wrote:
>
> We've been running Squid 2.6 for 5+ years with a 10Mb full duplex connection 
> serving ~650 active users. It has handled peak loads of 60-90 req/sec without 
> issue, which represents a fully utilized 10Mb link (managed with delay 
> pools). Last month we upgraded to a full 1Gb (yes 100x speed increase!) on a 
> trial basis. During a one week trial, we saw about 2-3x bandwidth use (or 
> 20-30Mbps sustained average) with little effect on the proxy server load. 
> During tests we were able to manage speedtest results of 250-300Mbps from a 
> single Gb connected host to Speakeasy's Seattle test node, and saw no 
> difference between going direct or via squid. We were also able to achieve a 
> full 100Mbps speed result on each of 4 simultaneous hosts tested via squid 
> (each was using a 100Mb NIC). So far, the only issue we have seen is a problem 
> with our log files exceeding 2Gb in less than 24 hours, which required a 
> re-compile to add the '--with-large-files' option.
> Still far short of the 60-100Mb rates you mention (are these peak or 
> sustained?), but our server appears to have plenty of breathing room left, 
> and is modest by today's standards:
>
> Dell PE2850 with Dual Quad Xeons
> Ubuntu 6.06 32bit, 4Gb RAM
> 6x 15K 72Gb SCSI drives, 4 for cache, 1 for logs, one for system, running XFS
> Squid 2.6stable20
> Single Gb NIC in use.
> Lots of ACLs (300,000 lines), delay pools, all clients authenticated via AD
>
> I expect we will need to do more tuning since opening up the bandwidth, but 
> so far, things are going fine. Prior to this week's re-compile, the system 
> was running 24x7 since April 08. :-)
>
> Hope this helps.
>
> --
>
> Shawn Wright
> I.T. Manager, Shawnigan Lake School
> http://www.shawnigan.ca
>
>
> - Original Message -
> From: "nima chavooshi" 
> To: squid-users@squid-cache.org
> Sent: Wednesday, January 6, 2010 11:28:23 AM GMT -08:00 US/Canada Pacific
> Subject: [squid-users] Amount of Bandwidth squid can handle
>
> Hi
> First of all thanks for sharing your experience on this mailing list.
> I intend to install squid as a forward cache in a few companies with high
> HTTP traffic - roughly 60, 80, or 100 Mb.
> Can squid handle this amount of traffic?  Of course, I do not have any
> idea about selecting hardware yet.
> Can you tell me the maximum bandwidth you have been able to handle with
> squid?  It would be great if you could give me the specs of the hardware
> you run squid on under high traffic.
>
> Thanks in advance
>
> --
> N.Chavoshi
>



-- 
-george william herbert
george.herb...@gmail.com


Re: [squid-users] No /usr/local/squid/sbin/squid after power restore

2010-01-08 Thread George Herbert
On Fri, Jan 8, 2010 at 3:58 PM, Landy Landy  wrote:
> Hello.
>
> I just want to share with the list something I experienced earlier this week. 
> I have installed squid 3.0.STABLE20 on Lenny. I had a power outage; when
> power was restored I didn't have anything in /usr/local/squid/sbin/. Since I
> had squid built and compiled and hadn't deleted the source directory, I did a
> make install and that installed /usr/local/squid/sbin/squid.
>
> I found this so unusual that I had to share it with you. I've never heard of
> anything similar and don't know why the squid executable was deleted...


That sounds like an OS problem.  Possibly failure to finish writing
the in-memory disk cache out to the actual disk, or some sort of
filesystem corruption that the power failure triggered.

Squid's operating software doesn't include anything which could remove
its own binary.

What OS was this running on?


-- 
-george william herbert
george.herb...@gmail.com


Re: [squid-users] Requests Per Second

2010-01-24 Thread George Herbert
Several hundred requests per second, measured by a telco provider
squid gateway system in production usage.

I have measured 400+ in the lab for 2.7 and 600+ in the lab for
3.0STABLE3 and beyond (but latest is best); I haven't benchmarked 3.1.
 I have seen sustained stable performance of prod servers which was
50% or more of that in daily peaks (hours long at those levels) with
good results.

Results are approximately the same with small clusters (2-5 servers
per cache group), scaling linearly.

Systems:
Modern dual CPU quad core 2.5-3.0 GHz Intel or AMD CPUs and 4+ GB RAM,
with 2 HD for AUFS cache, 1 HD for logs, 1 HD for OS.


-george william herbert
george.herb...@gmail.com


On Sun, Jan 24, 2010 at 3:13 PM, BarneyC  wrote:
>
> I'm trying to get a handle on the number of RPS (Maximum) a residential ISP
> is likely to see on a busy 100Mb/s network (close to capacity). Most of the
> stats I see on here seem pretty low.
>
> I'm trying to at least interpolate the largest network load a single squid
> box could handle without requiring clustering.
>
> Thanks,
>
> Barney
> --
> View this message in context: 
> http://n4.nabble.com/Requests-Per-Second-tp1288921p1288921.html
> Sent from the Squid - Users mailing list archive at Nabble.com.
>



-- 
-george william herbert
george.herb...@gmail.com


Re: [squid-users] Ongoing Running out of filedescriptors

2010-02-09 Thread George Herbert
Secret compile-time gotcha - your build needs to have the max FD limit
set higher during the configure, make, and compile, or the binary
doesn't actually end up able to use the higher maxfd limit.

I do a script with roughly "ulimit -HSn 32768; ./configure (long
options string included from a file)"

(On CentOS 5.1-5.3 build servers, and presumably 5.4; the same should
apply to other Linux + GNU configure/make environments.)


-george

On Tue, Feb 9, 2010 at 3:29 PM, Landy Landy  wrote:
> I don't know what to do with my current squid. I even upgraded to
> 3.0.STABLE21 but the problem persists every three days:
>
> /usr/local/squid/sbin/squid -v
> Squid Cache: Version 3.0.STABLE21
> configure options:  '--prefix=/usr/local/squid' '--sysconfdir=/etc/squid' 
> '--enable-delay-pools' '--enable-kill-parent-hack' '--disable-htcp' 
> '--enable-default-err-language=Spanish' '--enable-linux-netfilter' 
> '--disable-ident-lookups' '--localstatedir=/var/log/squid3.1' 
> '--enable-stacktraces' '--with-default-user=proxy' '--with-large-files' 
> '--enable-icap-client' '--enable-async-io' '--enable-storeio=aufs' 
> '--enable-removal-policies=heap,lru' '--with-maxfd=32768'
>
> I built with the --with-maxfd=32768 option but, when squid is started, it says
> it is working with only 1024 file descriptors.
>
> I even added the following to the squid.conf:
>
> max_open_disk_fds 0
>
> But it hasn't resolved anything. I'm using squid on Debian Lenny. I don't know 
> what to do. Here's part of cache.log:
>
> 2010/02/09 17:14:29| ctx: exit level  0
> 2010/02/09 17:14:29| client_side.cc(2843) WARNING! Your cache is running out 
> of filedescriptors
> 2010/02/09 17:16:50| client_side.cc(2843) WARNING! Your cache is running out 
> of filedescriptors
> 2010/02/09 17:18:45| client_side.cc(2843) WARNING! Your cache is running out 
> of filedescriptors
> 2010/02/09 17:20:01| client_side.cc(2843) WARNING! Your cache is running out 
> of filedescriptors
> 2010/02/09 17:20:17| client_side.cc(2843) WARNING! Your cache is running out 
> of filedescriptors
> 2010/02/09 17:20:38| client_side.cc(2843) WARNING! Your cache is running out 
> of filedescriptors
> 2010/02/09 17:21:33| client_side.cc(2843) WARNING! Your cache is running out 
> of filedescriptors
> 2010/02/09 17:22:26| client_side.cc(2843) WARNING! Your cache is running out 
> of filedescriptors
> 2010/02/09 17:22:41| clientParseRequestMethod: Unsupported method attempted 
> by 172.16.100.83: This is not a bug. see squid.conf extension_methods
> 2010/02/09 17:22:41| clientParseRequestMethod: Unsupported method in request 
> '_...@.#c5u_e__:___{_Q_"___L_r'
> 2010/02/09 17:22:41| clientProcessRequest: Invalid Request
> 2010/02/09 17:22:43| client_side.cc(2843) WARNING! Your cache is running out 
> of filedescriptors
> 2010/02/09 17:22:59| client_side.cc(2843) WARNING! Your cache is running out 
> of filedescriptors
> 2010/02/09 17:23:16| client_side.cc(2843) WARNING! Your cache is running out 
> of filedescriptors
> 2010/02/09 17:23:36| client_side.cc(2843) WARNING! Your cache is running out 
> of filedescriptors
> 2010/02/09 17:23:52| client_side.cc(2843) WARNING! Your cache is running out 
> of filedescriptors
> 2010/02/09 17:24:19| client_side.cc(2843) WARNING! Your cache is running out 
> of filedescriptors
> 2010/02/09 17:24:23| clientNatLookup: NF getsockopt(SO_ORIGINAL_DST) failed: 
> (2) No such file or directory
> 2010/02/09 17:24:38| client_side.cc(2843) WARNING! Your cache is running out 
> of filedescriptors
> 2010/02/09 17:24:41| clientParseRequestMethod: Unsupported method attempted 
> by 172.16.100.83: This is not a bug. see squid.conf extension_methods
> 2010/02/09 17:24:41| clientParseRequestMethod: Unsupported method in request 
> '_E__&_b_%_w__pw__m_}z%__i_...@_t__q___d__?_g'
> 2010/02/09 17:24:41| clientProcessRequest: Invalid Request
> 2010/02/09 17:24:54| client_side.cc(2843) WARNING! Your cache is running out 
> of filedescriptors
> 2010/02/09 17:25:12| client_side.cc(2843) WARNING! Your cache is running out 
> of filedescriptors
> 2010/02/09 17:25:12| clientParseRequestMethod: Unsupported method attempted 
> by 172.16.100.83: This is not a bug. see squid.conf extension_methods
> 2010/02/09 17:25:12| clientParseRequestMethod: Unsupported method in request 
> '_Z___|G3_7^_%U_r_1.h__gd__8C'
> 2010/02/09 17:25:12| clientProcessRequest: Invalid Request
> 2010/02/09 17:25:29| client_side.cc(2843) WARNING! Your cache is running out 
> of filedescriptors
> 2010/02/09 17:25:41| clientNatLookup: NF getsockopt(SO_ORIGINAL_DST) failed: 
> (2) No such file or directory
> 2010/02/09 17:25:45| client_side.cc(2843) WARNING! Your cache is running out 
> of filedescriptors
> 2010/02/09 17:26:01| client_side.cc(2843) WARNING! Your cache is running out 
> of filedescriptors
> 2010/02/09 17:26:18| client_side.cc(2843) WARNING! Your cache is running out 
> of filedescriptors
> 2010/02/09 17:26:34| client_side.cc(2843) WARNING! Your cache is running out 
> of 

Re: [squid-users] Ongoing Running out of filedescriptors

2010-02-10 Thread George Herbert
On Wed, Feb 10, 2010 at 8:50 AM, Luis Daniel Lucio Quiroz
 wrote:
> Le Mardi 9 Février 2010 19:34:13, Amos Jeffries a écrit :
>> On Tue, 9 Feb 2010 17:39:37 -0600, Luis Daniel Lucio Quiroz
>>
>>  wrote:
>> > Le Mardi 9 Février 2010 17:29:23, Landy Landy a écrit :
>> >> I don't know what to do with my current squid, I even upgraded to
>> >> 3.0.STABLE21 but, the problem persist every three days:
>> >>
>> >> /usr/local/squid/sbin/squid -v
>> >> Squid Cache: Version 3.0.STABLE21
>> >> configure options:  '--prefix=/usr/local/squid'
>>
>> '--sysconfdir=/etc/squid'
>>
>> >> '--enable-delay-pools' '--enable-kill-parent-hack' '--disable-htcp'
>> >> '--enable-default-err-language=Spanish' '--enable-linux-netfilter'
>> >> '--disable-ident-lookups' '--localstatedir=/var/log/squid3.1'
>> >> '--enable-stacktraces' '--with-default-user=proxy' '--with-large-files'
>> >> '--enable-icap-client' '--enable-async-io' '--enable-storeio=aufs'
>> >> '--enable-removal-policies=heap,lru' '--with-maxfd=32768'
>> >>
>> >> I built with --with-maxfd=32768 option but, when squid is started it
>>
>> says
>>
>> >> is working with only 1024 filedescriptor.
>> >>
>> >> I even added the following to the squid.conf:
>> >>
>> >> max_open_disk_fds 0
>> >>
>> >> But it hasn't resolve anything. I'm using squid on Debian Lenny. I
>>
>> don't
>>
>> >> know what to do. Here's part of cache.log:
>> 
>>
>> > You got a bug! that behaivor happens when a coredump occurs in squid,
>> > please
>> > file a ticket with gdb output, rice debug at maximum if you can.
>>
>> WTF are you talking about Luis? None of the above problems have anything
>> to do with crashing Squid.
>>
>> They are in order:
>>
>> "WARNING! Your cache is running out of filedescriptors"
>>  * either the system limits being set too low during run-time operation.
>>  * or the system limits were too small during the configure and build
>> process.
>>    -> Squid may drop new client connections to maintain lower than desired
>> traffic levels.
>>
>>   NP: patching the kernel headers to artificially trick squid into
>> believing the kernel supports more by default than it does is not a good
>> solution. The ulimit utility exists for that purpose instead.
>> 
>>
>>
>> "Unsupported method attempted by 172.16.100.83"
>>  * The machine at 172.16.100.83 is pushing non-HTTP data into Squid.
>>   -> Squid will drop these connections.
>>
>> "clientNatLookup: NF getsockopt(SO_ORIGINAL_DST) failed: (2) No such file
>> or directory"
>>  * NAT interception is failing to locate the NAT table entries for some
>> client connection.
>>  * usually due to configuring the same port with "transparent" option and
>> regular traffic.
>>  -> for now Squid will treat these connections as if the directly
>> connecting box was the real client. This WILL change in some near future
>> release.
>>
>>
>> As you can see in none of those handling operations does squid crash or
>> core dump.
>>
>>
>> Amos
>
>
> Amos, that is exactly the behavior I had with a bug - don't you remember
> the DIGEST bug that makes squid restart internally? HNO helped me, but the
> fact is that this is a symptom of a coredump internal restart, because he
> complains his squid is already compiled with more than 1024.
>
> After restarting, I had 1024 descriptors, no matter that I compiled with
> 64k FDs.

As I said -

The running configure / make / compile environment has to be set to
64k file descriptors.  The build environment's max file descriptors
are an overriding limit on the actual usable FDs, no matter what you
set the configure maxfd value to.  If ulimit -n = 1024 at configure
time, that's what you're stuck at.

# ulimit -HSn 32768 (or 64k) ; ./configure (options...) ; make



-- 
-george william herbert
george.herb...@gmail.com


Re: [squid-users] Squid HD Limitation

2010-02-24 Thread George Herbert
On Wed, Feb 24, 2010 at 4:22 PM, Mr. Issa(*)  wrote:
> Dear mates, the real problem is that when we have 100 GB of cache
> on the squid box, we notice that exactly every hour the connectivity
> on the WAN interface of squid drops for 10 seconds, then it comes
> back again (MRTG graph is attached). We have the squid box in
> transparent mode.
>
>
>
> So what could be the problem?

Probably not related to disk or RAM - I have run numerous production
systems with 2x 300 GB disks dedicated to AUFS cache storage directory
use, and 16 GB of RAM, and not seen any such intermittent behavior.

I've run bigger systems in test, but it wasn't economical to deploy
them as opposed to clusters of smaller ones.  The bigger ones ran
fine, as far as I could tell.

Do you have any OS or support system crontabs?  Are you rotating
logfiles on hourly intervals?


-- 
-george william herbert
george.herb...@gmail.com


Re: [squid-users] squid consuming too much processor/cpu

2010-03-17 Thread George Herbert
On Wed, Mar 17, 2010 at 5:09 AM, Muhammad Sharfuddin
 wrote:
>
> On Wed, 2010-03-17 at 19:54 +1100, Ivan . wrote:
>> you might want to check out this thread
>>
>> http://www.mail-archive.com/squid-users@squid-cache.org/msg56216.html
>
> I checked, but it's not clear to me.
> Do I need to install some packages/RPMs? And then what?
> I mean, how can I resolve this issue?
>
> --
> Regards
> Muhammad Sharfuddin | NDS Technologies Pvt Ltd | +92-333-2144823
>
>>
>>
>> cheers
>> ivan
>>
>> On Wed, Mar 17, 2010 at 4:55 PM, Muhammad Sharfuddin
>>  wrote:
>> > Squid Cache: Version 2.7.STABLE5(squid-2.7.STABLE5-2.3)
>> > kernel version: 2.6.27 x86_64
>> > CPU: Xeon 2.6 GHz CPU
>> > Memory: 2 GB
>> > /var/cache/squid is ext3, mounted with 'noacl' and 'noatime' options
>> > number of users using this proxy: 160
>> > number of users using simultaneously/concurrently using this proxy: 72
>> >
>> > I found that squid is consuming too much CPU; the average CPU idle time is
>> > only 49%.
>> >
>> > I have attached the output 'top -b -n 7', and 'vmstat 1'
>> >
>> > below is the output of squid.conf
>> >
>> > squid.conf:
>> > -
>> >
>> > http_port 8080
>> > cache_mgr administra...@test.com
>> > cache_mem 1024 MB
>> > cache_dir aufs /var/cache/squid 2 32 256
>> > visible_hostname gateway.test.com
>> > refresh_pattern ^ftp: 1440 20% 10080
>> > refresh_pattern ^gopher: 1440 0% 1440
>> > refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 10080 90% 43200
>> > refresh_pattern -i \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|x-flv)$ 43200 90%
>> > 432000
>> > refresh_pattern -i \.(deb|rpm|exe|zip|tar|tgz|ram|rar|bin|ppt|doc|tiff)$
>> > 10080 90% 43200
>> > refresh_pattern -i \.index.(html|htm)$ 0 40% 10080
>> > refresh_pattern -i \.(html|htm|css|js)$ 1440 40% 40320
>> > refresh_pattern . 0 40% 40320
>> > cache_swap_low 78
>> > cache_swap_high 90
>> >
>> > maximum_object_size_in_memory 100 KB
>> > maximum_object_size 12288  KB
>> >
>> > fqdncache_size 2048
>> > ipcache_size 2048
>> >
>> > acl myFTP port   20  21
>> > acl ftp_ipes src "/etc/squid/ftp_ipes.txt"
>> > http_access allow ftp_ipes myFTP
>> > http_access deny myFTP
>> >
>> > acl porn_deny url_regex "/etc/squid/domains.deny"
>> > http_access deny porn_deny
>> >
>> > acl vip src "/etc/squid/vip_ipes.txt"
>> > http_access allow vip
>> >
>> > acl entweb url_regex "/etc/squid/entwebsites.txt"
>> > http_access deny entweb
>> >
>> > acl mynet src "/etc/squid/allowed_ipes.txt"
>> > http_access allow mynet
>> >
>> >
>> > Please help - why is squid utilizing so much CPU?
>> >
>> >
>> > --
>> > Regards
>> > Muhammad Sharfuddin | NDS Technologies Pvt Ltd | +92-333-2144823

If it is that same GNU malloc issue with pattern matching, then a
restart of Squid should clear it up temporarily.  It would
consistently reappear some time after the restart, though.

You could either automatically restart more often than that time
period, or install the Google malloc library (tcmalloc) and recompile
Squid to use it instead of the default glibc malloc.  One of these is
easier than the other...
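
A sketch of the Google malloc route (library name and path are whatever
google-perftools installs on your system):

  # quick test without rebuilding: set this in the init script before squid starts
  export LD_PRELOAD=/usr/lib/libtcmalloc_minimal.so
  # or link it in at build time, roughly:
  ./configure [your usual options] LIBS=-ltcmalloc
  make && make install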


-- 
-george william herbert
george.herb...@gmail.com


Re: [squid-users] Increasing File Descriptors

2010-05-06 Thread George Herbert
Do this:

ulimit -Hn

If the value is 32768, that's your current kernel/system max value and
you're stuck.

If it's more than 32768 (and my RHEL 5.3 box says 65536) then you
should be able to increase up to that value.  Unless there's an
internal signed 16-bit int involved in FD tracking inside the Squid
code, something curious is happening...

However - I'm curious as to why you'd need that many.  I've had top
end systems with Squid clusters running with compiles of 16k file
descriptors and only ever really used 4-5k.  What are you doing that
you need more than 32k?
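
If the hard limit itself is the ceiling, it can usually be raised (sketch
for RHEL-style Linux; "squid" here is whatever user runs the proxy):

  # system-wide ceiling, persist it in /etc/sysctl.conf:
  sysctl -w fs.file-max=131072
  # per-user hard/soft limits in /etc/security/limits.conf:
  squid   hard   nofile   65536
  squid   soft   nofile   65536
  # then "ulimit -HSn 65536" in the init script before squid starts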


-george

On Thu, May 6, 2010 at 10:32 AM, Bradley, Stephen W. Mr.
 wrote:
> Unfortunately that won't work for me above 32768.
>
> I have the ulimit in the startup script and that works okay, but I need more
> than 32768.
>
> :-(
>
>
>
> -Original Message-
> From: Ivan . [mailto:ivan...@gmail.com]
> Sent: Thursday, May 06, 2010 5:17 AM
> To: Bradley, Stephen W. Mr.
> Cc: squid-users@squid-cache.org
> Subject: Re: [squid-users] Increasing File Descriptors
>
> worked for me
>
> http://paulgoscicki.com/archives/2007/01/squid-warning-your-cache-is-running-out-of-filedescriptors/
>
> no recompile necessary
>
>
> On Thu, May 6, 2010 at 7:13 PM, Bradley, Stephen W. Mr.
>  wrote:
>> I can't seem to increase the number above 32768 no matter what I do.
>>
>> I've tried ulimit during compile, sysctl.conf, and everything else, but no luck.
>>
>>
>> I have about 5,000 users on a 400mbit connection.
>>
>> Steve
>>
>> RHEL5 64bit with Squid 3.1.1
>



-- 
-george william herbert
george.herb...@gmail.com


Re: [squid-users] FATAL: Received Segment Violation...dying.

2010-05-25 Thread George Herbert
You will have to set up the system to collect a core dump; you need
that to tell where in the code it segfaulted.
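
Once a core file exists, the backtrace is just (binary and core paths are
examples):

  gdb /usr/local/squid/sbin/squid /usr/local/squid/var/cache/core.1234
  (gdb) bt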


On Tue, May 25, 2010 at 5:56 AM, sameer khan  wrote:
>
>
> Hey
>
> squid is just dying with fatal error:
>
> FATAL: Received Segment Violation...dying.
> 2010/05/25 17:52:52| storeDirWriteCleanLogs: Starting...
> 2010/05/25 17:52:52| WARNING: Closing open FD   29
> 2010/05/25 17:52:52| commSetEvents: epoll_ctl(EPOLL_CTL_DEL): failed on 
> fd=29: (1) Operation not permitted
> 2010/05/25 17:52:52| WARNING: Closing open FD   30
> 2010/05/25 17:52:52| commSetEvents: epoll_ctl(EPOLL_CTL_DEL): failed on 
> fd=30: (1) Operation not permitted
> 2010/05/25 17:52:53| 65536 entries written so far.
> 2010/05/25 17:52:53|    131072 entries written so far.
> 2010/05/25 17:52:53|    196608 entries written so far.
> 2010/05/25 17:52:53|    262144 entries written so far.
> 2010/05/25 17:52:53|    327680 entries written so far.
> 2010/05/25 17:52:53|    393216 entries written so far.
> 2010/05/25 17:52:53|    458752 entries written so far.
> 2010/05/25 17:52:53|    524288 entries written so far.
> 2010/05/25 17:52:53|    589824 entries written so far.
> 2010/05/25 17:52:53|    655360 entries written so far.
> 2010/05/25 17:52:54|    720896 entries written so far.
> 2010/05/25 17:52:54|    786432 entries written so far.
> 2010/05/25 17:52:54|    851968 entries written so far.
> 2010/05/25 17:52:54|    917504 entries written so far.
> 2010/05/25 17:52:54|    983040 entries written so far.
> 2010/05/25 17:52:54|   1048576 entries written so far.
> 2010/05/25 17:52:54|   1114112 entries written so far.
> 2010/05/25 17:52:54|   1179648 entries written so far.
> 2010/05/25 17:52:54|   1245184 entries written so far.
> 2010/05/25 17:52:54|   1310720 entries written so far.
> 2010/05/25 17:52:55|   1376256 entries written so far.
> 2010/05/25 17:52:55|   1441792 entries written so far.
> 2010/05/25 17:52:55|   1507328 entries written so far.
> 2010/05/25 17:52:55|   1572864 entries written so far.
> 2010/05/25 17:52:55|   1638400 entries written so far.
> 2010/05/25 17:52:55|   1703936 entries written so far.
> 2010/05/25 17:52:55|   1769472 entries written so far.
> 2010/05/25 17:52:55|   1835008 entries written so far.
> 2010/05/25 17:52:55|   1900544 entries written so far.
> 2010/05/25 17:52:55|   1966080 entries written so far.
> 2010/05/25 17:52:55|   2031616 entries written so far.
> 2010/05/25 17:52:56|   2097152 entries written so far.
> 2010/05/25 17:52:56|   2162688 entries written so far.
> 2010/05/25 17:52:56|   2228224 entries written so far.
> 2010/05/25 17:52:56|   2293760 entries written so far.
> 2010/05/25 17:53:03|   Finished.  Wrote 2338090 entries.
> 2010/05/25 17:53:03|   Took 10.2 seconds (228705.3 entries/sec).
> CPU Usage: 12716.239 seconds = 6215.796 user + 6500.442 sys
> Maximum Resident Size: 0 KB
> Page faults with physical i/o: 17950
> Memory usage for squid via mallinfo():
>     total space in arena:  580484 KB
>     Ordinary blocks:   579739 KB    287 blks
>     Small blocks:   0 KB  4 blks
>     Holding blocks: 57608 KB  3 blks
>     Free Small blocks:  0 KB
>     Free Ordinary blocks: 745 KB
>     Total in use:  637347 KB 100%
>     Total free:   745 KB 0%
> 2010/05/25 17:53:07| Starting Squid Cache version 2.7.STABLE6 for 
> x86_64-unknown-linux-gnu...
> 2010/05/25 17:53:07| Process ID 4412
> 2010/05/25 17:53:07| With 65535 file descriptors available
> 2010/05/25 17:53:07| Using epoll for the IO loop
> 2010/05/25 17:53:07| Performing DNS Tests...
> 2010/05/25 17:53:07| Successful DNS name lookup tests...
> 2010/05/25 17:53:07| DNS Socket created at 0.0.0.0, port 26621, FD 6
> 2010/05/25 17:53:07| Adding nameserver 127.0.0.1 from /etc/resolv.conf
> 2010/05/25 17:53:07| helperOpenServers: Starting 10 'storeurl.pl' processes
> 2010/05/25 17:53:07| Unlinkd pipe opened on FD 20
> 2010/05/25 17:53:07| Swap maxSize 565248000 + 3145728 KB, estimated 43722594 
> objects
> 2010/05/25 17:53:07| Target number of buckets: 2186129
> 2010/05/25 17:53:07| Using 4194304 Store buckets
> 2010/05/25 17:53:07| Max Mem  size: 3145728 KB
> 2010/05/25 17:53:07| Max Swap size: 565248000 KB
> 2010/05/25 17:53:07| Local cache digest enabled; rebuild/rewrite every 
> 3600/3600 sec
> 2010/05/25 17:53:07| Store logging disabled
> 2010/05/25 17:53:07| Rebuilding storage in /usr/local/squid/var/cache/sda1 
> (CLEAN)
> 2010/05/25 17:53:07| Rebuilding storage in /usr/local/squid/var/cache/sda2 
> (CLEAN)
> 2010/05/25 17:53:07| Rebuilding storage in /usr/local/squid/var/cache/sda3 
> (CLEAN)
> 2010/05/25 17:53:07| Rebuilding storage in /usr/local/squid/var/cache/sda4 
> (CLEAN)
> 2010/05/25 17:53:07| Rebuilding storage in /usr/local/squid/var/cache/sdb1 
> (CLEAN)
> 2010/05/25 17:53:07| Rebuilding storage in /usr/local/squid/var/cache/sdb2 
> (CLEAN)
> 2010/05/25 17:53:07| Rebuilding storage in /usr/local/squid/var/ca

Re: [squid-users] Hardware Requirements

2010-06-18 Thread George Herbert
On Fri, Jun 18, 2010 at 11:40 AM, Luis Daniel Lucio Quiroz
 wrote:
> Le vendredi 18 juin 2010 09:47:22, Ariel a écrit :
>> hello list, as estasn, I need your advice to the next stage
>>
>> an ISP network with 500 users
>> I have a pentium 4 Dual Core + 4 GB ram + Sata 2 160 GB
>> Squid 3.1.xx + bridge + tproxy  + Centos 5.4 64 Bits
>>
>> I would like to know your opinions about the hardware, if very small,
>> fine or need something bigger
>> what equipment do you recommend?
>>
>> thanks
>
> How many hits are you expecting (hits/min)?
> If under 200 hits/min then you are okay (as my experience has shown me).

Is that a single hard drive?


-- 
-george william herbert
george.herb...@gmail.com


Re: [squid-users] Hardware Requirements

2010-06-18 Thread George Herbert
On Fri, Jun 18, 2010 at 12:06 PM, Jakob Curdes  wrote:
>
>>>> an ISP network with 500 users
>>>> I have a pentium 4 Dual Core + 4 GB ram + Sata 2 160 GB
>>>> Squid 3.1.xx + bridge + tproxy + Centos 5.4 64 Bits

>>>
>>> How many hits are you expecting (hits/min)?
>>> If under 200 hits/min then you are okay (as my experience has shown me).
>>>
>
> From my experience you can do a lot more hits with that type of machinery,
> although this depends on  a lot of factors, and also strongly on the squid
> configuration.

You can certainly do a lot more hits with something that's slightly
bigger. With dual-CPU quad-core P4 boxes with 8 GB of RAM, 4x SATA HDs
(root, 2x separate cache dirs, logs dir), operating in 2-4-system cache
groups, I got 400+ hits/second in production and 600+ in test.

The specific configuration here, with a single CPU, less RAM and one
HD, is going to have less capacity than that.  But 120 times less?  That
surprises me...



-- 
-george william herbert
george.herb...@gmail.com


Re: [squid-users] performance question, 1 or 2 NIC's?

2010-08-30 Thread George Herbert
On Sat, Aug 28, 2010 at 5:12 PM, Amos Jeffries  wrote:
> Leonardo Rodrigues wrote:
>[...]
>  For a faster internal connection and slower Internet connection you can
> look towards raising the Hit Ratio' probably the byte hits specifically.
> That will drop the load on the Internet line and make the whole network
> appear faster to users. The holy grail for forward proxies seems to be 50%,
> with reality coming in between 20% and 45% depending on your clients and
> storage space.

For what it's worth, at Large Telco Smartphone Provider Which Will Not
Be Named, all of our Squids routinely exceeded 40%, with many of them
over 50% for a whole day or so.  The only major burps there were
having to reboot them for now-fixed internal errors.

That was 1m active users, though...


-- 
-george william herbert
george.herb...@gmail.com


Re: [squid-users] Only 23% of traf is cached. Config problem?

2010-09-29 Thread George Herbert
On Wed, Sep 29, 2010 at 1:43 PM, Ralf Hildebrandt
 wrote:
> * Andrei :
>> These are my Squid stats. I have about 23% of cache hits.
>
> I have four squid machines, an the Request hit rate average is at:
> 29.3%, 27.2%, 27.4% and 26.7% (last 24h)
>
> So your values could be a bit better.

As the userbase size increases the cache hits will increase.

It took literally slightly over 1 million users at the prior site I
ran Squid for to get slightly over 50% cache hits.  23% for a small
site (300 users) is reasonable, depending on the workload and how much
of the sites are all-dynamic content which can't be cached.
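
For anyone who wants to check their own numbers: a rough request hit rate
can be pulled straight from a native-format access.log with something like
the sketch below (the log path is illustrative; field 4 is the Squid result
code, e.g. TCP_HIT/200).

  awk '{ n++; if ($4 ~ /HIT/) h++ }
       END { printf "request hit rate: %.1f%% (%d of %d)\n", 100*h/n, h, n }' \
      /var/log/squid/access.log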


-- 
-george william herbert
george.herb...@gmail.com


Re: [squid-users] Only 23% of traf is cached. Config problem?

2010-09-29 Thread George Herbert
On Wed, Sep 29, 2010 at 1:54 PM, Jordon Bedwell  wrote:
> On 09/29/2010 03:47 PM, George Herbert wrote:
>>
>> On Wed, Sep 29, 2010 at 1:43 PM, Ralf Hildebrandt
>>   wrote:
>>>
>>> * Andrei:
>>>>
>>>> These are my Squid stats. I have about 23% of cache hits.
>>>
>>> I have four squid machines, an the Request hit rate average is at:
>>> 29.3%, 27.2%, 27.4% and 26.7% (last 24h)
>>>
>>> So your values could be a bit better.
>>
>> As the userbase size increases the cache hits will increase.
>>
>> It took literally slightly over 1 million users at the prior site I
>> ran Squid for to get slightly over 50% cache hits.  23% for a small
>> site (300 users) is reasonable, depending on the workload and how much
>> of the sites are all-dynamic content which can't be cached.
>>
>>
>
> Dynamic is subjective.  What the world considers dynamic is, most of the
> time, actually dynamically generated static content that rarely changes and
> always wastes CPU time.  I hardly consider one post a day dynamic, or a
> reason not to send "cache me" headers (to squid at least) for the next 24
> hours.  You can cache all content, dynamic or not; it's just not recommended.
> You can do it with squid, or you can trick squid into thinking it's not
> dynamic anyway, which is what we do on some of our sites for pages that we
> know rarely change.


This is HIGHLY content-specific, and in many cases is horridly unsafe.

Your mileage may vary.  Know what your users are actually doing...


-- 
-george william herbert
george.herb...@gmail.com


Re: [squid-users] could not parse headers from a disk structure!

2010-10-10 Thread George Herbert
Important question - Landy, what version of squid, and what OS, are
you running on?

Was it a precompiled Squid or a custom compilation?  If custom, what
were the build options?

I've seen stuff like this repeatedly in the long tail chase of
3.0-StableX versions 2ish years ago, when things went sideways, but it
could also be a one time blip for you.
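
(For reference, assuming the squid binary is on your PATH, "squid -v"
answers both of those questions at once:)

  # prints the version string plus the ./configure options it was built with
  squid -v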


-george

On Sun, Oct 10, 2010 at 4:47 PM, Amos Jeffries  wrote:
> On Sun, 10 Oct 2010 07:22:15 -0700 (PDT), Landy Landy
>  wrote:
>> Ok.
>>
>> I'm still getting that message. Looks like there were a lot of corrupted
>> files.
>>
>> Thanks for replying.
>
> In the corruption case, as Kinkie said, Squid discards the file and
> replaces it with a new one. This causes the message to decline as things
> get fixed. It may last a week or more to completely go, but should have an
> exponential decline as cleanup progresses. Are they noticeably decreasing
> already?
>
> Another potential cause is an upgrade of Squid where a disk format bug was
> added or fixed between the two versions. Or 32-bit -> 64-bit upgrade to the
> build or hardware. This would churn through the whole previous cache
> instead of just a small selection of corrupted files.
>
> If the warnings are not decreasing with time you may need to enable the
> store.log and check the timestamps for creation/release for some of the
> reported files. Any which are created by the current process then fail to
> read back need closer inspection.
>
> Things to consider that will impact this are: since you last re-started
> Squid has there been an OS kernel update? a squid binary change? a libc
> update? an ntp binary update (timestamp sizes)? a filesystem change? crypto
> library update (MD5)?
>  Any one of those could stay hidden on the system until a restart of the
> box or Squid brings up the new software linkages.
>
> Amos
>
>>
>> --- On Sat, 10/9/10, Kinkie  wrote:
>>
>>> From: Kinkie 
>>> Subject: Re: [squid-users] could not parse headers from a disk
> structure!
>>> To: "Landy Landy" 
>>> Cc: "Squid-Users" 
>>> Date: Saturday, October 9, 2010, 1:45 PM
>>> You  are right, and you don't
>>> need to do anything. Those cache files
>>> will be discarded by Squid.
>>>
>>> On Friday, October 8, 2010, Landy Landy 
>>> wrote:
>>> > Today, I noticed some sites were not loading and was
>>> getting "connection refused error". checked the cache.log
>>> and noticed squid was restarting due to
>>> >
>>> > 2010/10/08 15:41:01| WARNING: redirector #17 (FD 24)
>>> exited
>>> > 2010/10/08 15:41:01| WARNING: redirector #15 (FD 22)
>>> exited
>>> > 2010/10/08 15:41:01| WARNING: redirector #10 (FD 17)
>>> exited
>>> > 2010/10/08 15:41:01| WARNING: redirector #13 (FD 20)
>>> exited
>>> > 2010/10/08 15:41:01| WARNING: redirector #12 (FD 19)
>>> exited
>>> > 2010/10/08 15:41:01| WARNING: redirector #6 (FD 13)
>>> exited
>>> >
>>> > I shutdown squid and after 5 minutes restarted it
>>> again and now i get the following:
>>> >
>>> > 2010/10/08 15:44:30| WARNING: 1 swapin MD5 mismatches
>>> > 2010/10/08 15:44:30| could not parse headers from on
>>> disk structure!
>>> > 2010/10/08 15:44:42| could not parse headers from on
>>> disk structure!
>>> > 2010/10/08 15:44:42| could not parse headers from on
>>> disk structure!
>>> > 2010/10/08 15:45:10| could not parse headers from on
>>> disk structure!
>>> > 2010/10/08 15:45:13| could not parse headers from on
>>> disk structure!
>>> > 2010/10/08 15:45:16| could not parse headers from on
>>> disk structure!
>>> > 2010/10/08 15:45:25| could not parse headers from on
>>> disk structure!
>>> > 2010/10/08 15:45:26| could not parse headers from on
>>> disk structure!
>>> > 2010/10/08 15:45:29| could not parse headers from on
>>> disk structure!
>>> >
>>> > Don't know what causes it but, I'm suspecting some
>>> cache files are corrupted. I had a power outage yesterday
>>> and maybe it caused that.
>>> >
>>> > How can I fix that error?
>>> >
>>> > Thanks in advanced for your help.
>>> >
>>> >
>>> >
>>> >
>>>
>>> --
>>>     /kinkie
>>>
>



-- 
-george william herbert
george.herb...@gmail.com


Re: [squid-users] Heavy load squid with high CPU utilization...

2011-03-22 Thread George Herbert
On Tue, Mar 22, 2011 at 7:27 PM, Marcus Kool
 wrote:
> Dejan,
>
> Squid is known to be CPU bound under heavy load and the
> Quad core running at 1.6 GHz in not the fastest.
> A 3.2 GHz dual core will give you double speed.

Second this.  CPU speed -> perf wasn't quite linear when I was testing
that but was certainly highly improved with 2.4 and 2.6 and 3.0 GHz
CPUs.

>[...]
> You use one disk solely for cache.  This can be better
> if you use a battery-backed disk I/O controller with
> 256MB cache.
> And the obvious: more disks is good for overall performance

Personally, I like using 4 disks.  2 for OS + Squid logs, 2 for cache
(2 separate cache dirs).  Use Linux LVM for mirroring the OS partition
and logs partition, but no RAID/LVM on the cache dirs.
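
As a sketch, that layout boils down to two plain cache_dir lines in
squid.conf, one per dedicated spindle (mount points and sizes here are
purely illustrative):

  # one cache_dir per disk, no RAID/LVM underneath them
  cache_dir aufs /cache1 100000 16 256
  cache_dir aufs /cache2 100000 16 256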


-- 
-george william herbert
george.herb...@gmail.com


Re: [squid-users] Is it possible to use Squid as a proxy and cache for a slow CIFS drive ?

2011-03-28 Thread George Herbert
Squid is a web content cache engine, not a filesystem cache
technology.  The filesystem cache / acceleration systems are a
completely different class of technology.

If the Alfresco system is doing database-like things on the back end,
filesystem cacheing in front of it is unlikely to be entirely safe
from a functional / architectural point of view, but you'd need to
talk to the Alfresco engineering team.




On Mon, Mar 28, 2011 at 10:05 AM, Martin Gilly  wrote:
> Hi all !
>
> We have a special scenario with a slow file share where Squid (maybe combined
> with other tools) could help by acting as a CIFS proxy and caching
> system:
>
> We're testing an Alfresco ECM System which has a CIFS subsystem (based on
> jLAN) that is simply too slow for our needs. In this setup the appserver
> Alfresco (SUSE on VMware ESXi) and the clients are on a local LAN with Gb
> Ethernet (some clients on WLAN) connectivity and the clients (Windows and
> Mac) access Alfresco via the CIFS share provided by Alfresco.
>
> The Alfresco server is (due to its overhead (talking to the DB, indexing,
> etc.)) about six times slower when storing or reading files than a Samba
> mount on the same machine or a NAS on the same network.
>
> Now my idea is to put a caching layer in the middle between Alfresco and the
> client that ...
> * ... transparently sits in the middle between Alfresco and the clients
> * ... caches read files and (on subsequent access) serves them directly
> instead of from the repository
> * ... caches write operations in a store-and-forward manner, like a
> write-back cache (i.e. signals OK to the client when the file is received
> locally and then writes back to Alfresco asynchronously)
>
> So far, I've been discussing this with some WAFS vendors, but the ones I
> came to know don't have anything in their toolbox to achieve this. Now I'm
> completely stuck in finding a way to speed this up :-/
>
> Maybe you can think of a way that Squid - maybe in combination with some
> other tools - can create a solution for this problem ?
>
> thx and kind regards,
>
> martin.
>
>



-- 
-george william herbert
george.herb...@gmail.com


Re: [squid-users] How to diagnose race condition?

2011-04-25 Thread George Herbert
On Mon, Apr 25, 2011 at 2:41 PM, Steve Snyder  wrote:
> I just upgraded from CentOS 5.5 to CentOS 5.6, while running Squid v3.1.12.1 
> in both environments, and somehow created a race condition in the process.  
> Besides updating the 200+ software packages that are the difference between 
> 5.5 and 5.6, I configured and enabled DNSSEC on my nameserver.
>
> What I see now is that Squid started at boot time uses 100% CPU, with no 
> traffic at all, and will stay that way seemingly forever.  If I shut down 
> Squid and restart it, all is well.  So: Squid started at boot time = bad, 
> Squid started post-boot = good.  There is nothing unusual in either the 
> system or Squid logs to suggest what the problem is.
>
> Can anyone suggest how to diagnose what Squid is doing/waiting for?
>
> Thanks.

Not precisely sure, however in general...

If you have a viable console at the time, you can trace the process
activity and see what it's waiting on (what file, network port, etc).
Figure out what process ID squid is, and then run strace -p <pid>.


If that's not working, modify the init start script temporarily.
Where it normally runs squid, modify it to log instead:

strace -o /tmp/squid-strace <normal squid command line>


Quick and dirty solution to try first - move its init script to
S99squid from whatever number it is now.  And if you're starting it at
runlevel 2, move it to the end of runlevel 3...  More generally, look
at what init scripts got moved around from 5.5 to 5.6
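
A sketch of both approaches (the pgrep call and output paths are
illustrative):

  # attach to the already-spinning squid and see what it is looping on
  PID=$(pgrep -o squid)
  strace -f -tt -p "$PID" -o /tmp/squid-strace.$PID

  # or, in the init script, wrap the normal launch once:
  #   strace -f -tt -o /tmp/squid-strace <normal squid command line>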


-- 
-george william herbert
george.herb...@gmail.com


Re: [squid-users] Squid Hardware to Handle 150Mbps Peaks

2012-01-18 Thread George Herbert
On Tue, Jan 17, 2012 at 4:25 PM, jeffrey j donovan
 wrote:
>
> On Jan 17, 2012, at 1:02 PM, nachot wrote:
>
>> We currently have a commercial proxy solution in place but since we increased
>> our bandwidth to 150meg connection, the proxy is slowing things down
>> considerably as it's spec'd for 10meg connections.  The commercial vendor
>> proposes a new appliance that is 5 times what we can afford to spend.  We're
>> considering Squid as an option, but it needs to be able to support 50meg
>> sustained throughput with spikes to 150meg.
>>
>> We have about 200 users and only need the proxy to support ICAP integration
>> with our DLP solution.  The Squid proxy should provide visibility into our
>> SSL connections for the DLP solution to scan and also provide blocking of
>> web/FTP connections containing sensitive data.  Caching and web filtering
>> are secondary needs.
>>
>> I expect Squid would be able to support our needs, but also expect that it
>> won't run on light hardware (which is the reason behind our current need in
>> the first place).  Are there recommended hardware specs for such a
>> configuration?
>>
>> Any suggestions are appreciated.
>
>
> I have 2 squids running on 2.8ghz quad core xeons, serving 32 networks and 
> 9,000 users. internet connection is 100mb ethernet handoff.
> squid is great money saver.
>
> -j


More important than Mb/s or users is requests per second.  You can put
gig or 10 gig interfaces on the Squid box; the number of lookups it
can do per second doesn't get any faster.

You can get that from your logs; it's easy to time-bin them and
generate peak values for one-second, 5- or 10-second bins, minutes, etc.
From that, spec out systems to match it.
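
For example, something like this gives the busiest one-second bins out of a
native-format access.log (path illustrative; field 1 is the UNIX timestamp):

  awk '{ print int($1) }' /var/log/squid/access.log \
      | sort -n | uniq -c | sort -rn | head
  # for per-minute bins, use int($1/60)*60 instead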

Last time I ran high-performance Squid clusters (a couple of jobs ago
now) we hit 600 plus hits per second per server in "lab test"
(3.0P30ish at the end) and 400+ HPS in production, in clusters of 2-4
per cache pool, using dual-CPU quad-core P4 boxes with 8 GB of RAM, 4x
SATA HD (root, 2x separate cache dirs, logs dir).  I have heard
similar numbers for general internet content from others, though your
mileage may vary depending on how big the hits are and tuning and your
CPUs.


-- 
-george william herbert
george.herb...@gmail.com


Re: [squid-users] Will Work with Atom Processor?

2012-07-16 Thread George Herbert
On Mon, Jul 16, 2012 at 6:25 PM, Amos Jeffries  wrote:
> On 17.07.2012 04:21, Waitman Gobble wrote:
>>
>> On 7/16/2012 9:08 AM, William De Luca wrote:
>>>
>>> Hey All,
>>>
>>> I'm thinking about building a web Cache server and I was thinking
>>> about getting one of those cheap'o Shuttle slim computers with the
>>> dual core Atom Processor. I was just wondering if Squid Web Caching
>>> will run on the Atom Processor before I invest in it?
>>>
>>> Thanks,
>>> -Bill
>>
>> Hi,
>>
>> I've tested on Acer AO722 w/ AMD C-60 / FreeBSD 10.0-CURRENT... as a
>> local cache and it works.
>>
>> Waitman Gobble
>> San Jose California USA
>
>
>
> Yes, the only things to be aware of are RAM requirements if your box is
> slimline in that direction.
> Squid default binary footprint is 6MB, but can be tuned down below 4MB by
> disabling a lot of the bells and whistles. Memory cache defaults to 256MB
> nowadays, but can be configured as low as zero bytes. Squid requires a few
> extra MB to run transactions (averages around ~32KB per concurrent client),
> so the overall slimline footprint requirements is somewhere in the order of
> 16MB.

The Atoms (at least some) can support plenty of RAM - the Sea Micro
(now AMD) SM1 comes with 4 GB per physical CPU, a standard DDR3
DIMM.  That's about 1.5 TB in their box.

Makes me wonder if Sea Micro tested large Squid (or Varnish) clusters...


-- 
-george william herbert
george.herb...@gmail.com


Re: [squid-users] Moving squid from Solaris to Linux

2012-10-01 Thread George Herbert
On Mon, Oct 1, 2012 at 6:09 AM, Graham Butler  wrote:
> We are currently looking at replacing our Solaris boxes with a flavour of 
> Linux to run squid with a focus on Red Hat and Ubuntu. I am trying to collect 
> some evidence to which OS is being used to run squid and why, before we make 
> a decision. Could you please respond by sending me, or the list, information 
> on which OS you are using to run squid and any information on why your 
> decided to run it on that particular platform.
>
> I am also asking other list for similar information on BIND, Exim, Apache, 
> etc...
>
> Many thanks for any information you may send me.
>

I answer this question more based on what you know than what it "runs
best on"; from what I've seen, the OS is of secondary importance to
the Squid version and tuning.

Personally, RHEL or CentOS have worked very well for me when I was
running large (1m users) Squid farms, but I have seen Squid run on
large clusters with Debian, SuSE, etc.  But I was already familiar
with RHEL / CentOS going back some years.

If your UNIX / Linux admin teams already prefer another distro,
probably your best bet is to stick with what they know already.


-- 
-george william herbert
george.herb...@gmail.com


Re: [squid-users] max_filedesc on squid 3.2.2

2012-10-16 Thread George Herbert
I still find this behavior slightly bizarre, that the ulimit in the
build environment can affect the prod envt.  And it keeps biting other
people...

-george

On Tue, Oct 16, 2012 at 2:42 PM, Amos Jeffries  wrote:
> On 17.10.2012 03:02, Ricardo Rios wrote:
>>
>> El 2012-10-16 03:17, Amos Jeffries escribió:
>>
>>> On 16/10/2012 6:14 p.m., Ricardo Rios - Shorewall List wrote:
>>>
>>>> Testing version 3.2.2-20121015-r11677, I see problems with the
>>>> max_filedesc on OpenSuSE 11.4 x64:
>>>>
>>>>   server:/ # ulimit -n 65535
>>>>   squid.conf:                 max_filedesc 65535
>>>>   /etc/security/limits.conf:  * - nofile 65535
>>>>
>>>> In cache.log:
>>>>   kid1| NOTICE: Could not increase the number of filedescriptors
>>>>   kid1| With 16384 file descriptors available
>>>>
>>>> On squid -k reconfigure:
>>>>   kid1| WARNING: max_filedescriptors disabled. Operating System
>>>>   setrlimit(RLIMIT_NOFILE) is missing.
>>>
>>> Squid just told you what the problem is: "Operating System
>>> setrlimit(RLIMIT_NOFILE) is missing". Please check the config.log from
>>> when you built this Squid for more information about what went wrong when
>>> the compiler tested your OS for this function support. Does that message
>>> show up on startup at all? or just reconfigure? PS. Also notice how the
>>> official squid.conf directive name is different to the old experimental
>>> "max_filedesc" you are configuring?
>>>
>>>> PS: still getting "Segment Violation... dying" on this version with more
>>>> than 1 worker.
>>>
>>> We fixed one of the three SMP segfaults earlier today. Amos
>>
>>
>> I am so sorry guys, I just noticed I had compiled with
>> "--with-filedescriptors=16384"; I changed it to 65535 and now it is working:
>>
>> kid1| With 65535 file descriptors available
>>
>> Sorry :(
>
>
>
> No worries. If you don't mind could you check the config.log anyway.
>
> The ./configure option is supposed to be just a default limit when none is
> set in the config file. AFAIK OpenSUSE is supposed to provide setrlimit()
> and allow squid.conf to alter the limit to anything else it needs.
>
> Amos



-- 
-george william herbert
george.herb...@gmail.com


Re: [squid-users] max_filedesc on squid 3.2.2

2012-10-16 Thread George Herbert
On Tue, Oct 16, 2012 at 3:00 PM, Amos Jeffries  wrote:
> On 17.10.2012 10:48, George Herbert wrote:
>>
>> I still find this behavior slightly bizarre, that the ulimit in the
>> build environment can affect the prod envt.  And it keeps biting other
>> people...
>
>
> It's not ulimit in the build environment particularly. Although the build
> environment might need ulimit permissions to perform the setrlimit() tests.
>
> It is a basic default of N=1024
>  ... altered by ./configure --with-filedescriptors=N
>  ... overridden on production by squid.conf max_filedescriptors (If, and
> only if, setrlimit() RLIMIT_NOFILE is able to be built+used).
>
> Amos


Right; it's the build environment setrlimit() test I think that causes
the problem.

I'd configure with --with-filedescriptors=N (64k, say), forget the
ulimit in the build script wrapper, and still get 1024, or whatever
the current build hard limit was.

I never bothered to look and see the details of what it was doing, but
is there any situation where the build environment setrlimit test is
actually helping?  All the prod boxes had 16k, 32k, 64k hard limits;
the build box often defaulted to a lower value.  Having my own minimal
script around make to ulimit unlimited first then configure and make
worked, but
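
For reference, a minimal sketch of that kind of wrapper (the FD count and
source path are illustrative):

  #!/bin/sh
  # raise the build shell's FD limit before running configure and make
  ulimit -HSn 65536
  cd /usr/local/src/squid-3.2.2 || exit 1
  ./configure --with-filedescriptors=65536 && make && make install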


-- 
-george william herbert
george.herb...@gmail.com


Re: [squid-users] 3.2.5 comm_open: socket failure: (24) Too many open files

2012-12-29 Thread George Herbert



On Dec 29, 2012, at 12:41 PM, 叶雨飞  wrote:

> So you are saying that even if squid is configured to use 16384 FDs, it
> can't, because the limit is 1024?
> 
> That's kind of confusing.  I tried using ulimit -n 16384 as root to
> raise the FD limit now; will report back.


The ulimit settings need to be system-wide config applied at startup, or set
in the squid init script, so that they are in scope for the actual daemon
launch when it starts unattended...
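
Concretely, that usually means something like this near the top of the squid
init script (the value is illustrative); note that /etc/security/limits.conf
is typically applied only to PAM login sessions, not to daemons started at
boot:

  # /etc/init.d/squid (sketch): raise hard+soft FD limit before launching squid
  ulimit -HSn 16384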

George William Herbert
Sent from my iPhone


Re: [squid-users] SSL Bump Root Certificate Expiration

2013-01-04 Thread George Herbert
http://projects.puppetlabs.com/projects/1/wiki/SSL_in_The_Year2038

32-bit date overflow, same problem as the generic UNIX Y2038 bug.

Use 64 bit systems 8-)
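
In the meantime, one workaround on a 32-bit time_t system is to keep the CA's
notAfter date before 2038-01-19 and then confirm what you actually got, e.g.
(the -days value is illustrative):

  # roughly 24.5 years from early 2013, which stays short of 2038-01-19
  openssl req -new -newkey rsa:1024 -days 9000 -nodes -x509 \
      -keyout myCA.pem -out myCA.pem
  # confirm the expiry date did not wrap
  openssl x509 -in myCA.pem -noout -enddate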


George William Herbert
Sent from my iPhone

On Jan 4, 2013, at 1:10 AM, Woon Khai Swen  wrote:

> Found out the problem 
> 
> # openssl req -new -newkey rsa:1024 -days 36500 -nodes -x509 -keyout myCA.pem 
>  -out myCA.pem
> 
> # openssl x509 -in myCA.pem -outform DER -out myCA.der
> 
> Installing myCA.der as root cert shows the validity date from ‎Friday, ‎4 
> ‎January, ‎2013 4:58:39 PM to ‎Thursday, ‎4 ‎November, ‎1976 10:30:23 AM  
> (1976, not 2113. it can auto back date :O   )
> 
> Still figuring out why this happened, though. Must be an openssl issue. The 
> commands are copied directly from squid dynamic cert generation wiki.
> 
> Thanks for the pointer.
> 
> 
> 
> -Original Message-
> From: Will Roberts [mailto:ironwil...@gmail.com] 
> Sent: Friday, 4 January, 2013 12:20 PM
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] SSL Bump Root Certificate Expiration
> 
> On 01/03/2013 11:16 PM, Woon Khai Swen wrote:
>> Dear all,
>> 
>> I found out the self signed ssl root cert for transparent SSL interception 
>> (SSL Bump + origin cert mimicking + dynamic cert generation) is valid only 
>> for 365 days max, no matter how many additional days specified in openssl 
>> cert generation command line.
> 
> Mine's good for 100 years. I'd check your command line arguments.
> 
> --Will


Re: [squid-users] Squid processing very slow on some pdf

2013-01-30 Thread George Herbert
First impression from this data: check DNS resolution from the Squid box to
that hostname.  It sounds like a timeout / retry / recursion failure in
progress...
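
A quick way to check that from the squid box itself, repeating a few times
and watching the reported query time (single attempt, short timeout):

  dig +tries=1 +time=5 www2.zhlex.zh.ch A
  # also worth trying each nameserver from /etc/resolv.conf explicitly, e.g.:
  #   dig @192.0.2.53 +tries=1 +time=5 www2.zhlex.zh.ch A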


George William Herbert
Sent from my iPhone

On Jan 29, 2013, at 11:54 PM, "Sandrini Christian \(xsnd\)"  
wrote:

> Hi
> 
> We are using an f5 appliance that is loadbalancing http request to 3 squid 
> servers. We use squid 3.1.10. When I want to open a pdf file of a certain 
> domain it takes several minutes for 160kb. If I open the pdf without going 
> through the proxy it is very quick. We have seen this problem only on the pdf 
> of the following domain
> 
> http://www2.zhlex.zh.ch/appl/zhlex_r.nsf/0/62FABE8867570E44C1257A210032892E/$file/414.252.3_29.1.08_77.pdf
> 
> This is in the access.log. Squid takes 115 seconds to handle the request.
> 
> 1359524314.374 115810 160.85.85.46 TCP_HIT/200 111028 GET 
> http://www2.zhlex.zh.ch/appl/zhlex_r.nsf/0/62FABE8867570E44C1257A210032892E/$file/414.252.3_29.1.08_77.pdf
>  - NONE/- application/pdf
> 
> No logs have been written to cache.log during that time.
> 
> I have captured the network traffic from the squidbox to www2.zhlex.zh.ch to 
> find out the time squid takes to get the pdf. It does it in less than a 
> second.
> 
> tcpdump -i eth1 host www2.zhlex.zh.ch
> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
> listening on eth1, link-type EN10MB (Ethernet), capture size 65535 bytes
> 06:30:45.491906 IP srv-app-902.zhaw.ch.34179 > 195.65.218.66.http: Flags [S], 
> seq 1530511587, win 5840, options [mss 1460,nop,nop,sackOK], length 0
> 06:30:45.494241 IP 195.65.218.66.http > srv-app-902.zhaw.ch.34179: Flags 
> [S.], seq 3031868726, ack 1530511588, win 64240, options [mss 
> 1380,nop,nop,sackOK], length 0
> 06:30:45.494259 IP srv-app-902.zhaw.ch.34179 > 195.65.218.66.http: Flags [.], 
> ack 1, win 5840, length 0
> 06:30:45.494353 IP srv-app-902.zhaw.ch.34179 > 195.65.218.66.http: Flags 
> [P.], seq 1:519, ack 1, win 5840, length 518
> 06:30:45.524850 IP 195.65.218.66.http > srv-app-902.zhaw.ch.34179: Flags 
> [P.], seq 1:290, ack 519, win 63722, length 289
> 06:30:45.524864 IP srv-app-902.zhaw.ch.34179 > 195.65.218.66.http: Flags [.], 
> ack 290, win 6432, length 0
> 06:30:45.541484 IP 195.65.218.66.http > srv-app-902.zhaw.ch.34179: Flags [.], 
> seq 290:1670, ack 519, win 63722, length 1380
> 06:30:45.541493 IP srv-app-902.zhaw.ch.34179 > 195.65.218.66.http: Flags [.], 
> ack 1670, win 9660, length 0
> 06:30:45.541603 IP 195.65.218.66.http > srv-app-902.zhaw.ch.34179: Flags [.], 
> seq 1670:3050, ack 519, win 63722, length 1380
> 06:30:45.541612 IP srv-app-902.zhaw.ch.34179 > 195.65.218.66.http: Flags [.], 
> ack 3050, win 12420, length 0
> 06:30:45.541709 IP 195.65.218.66.http > srv-app-902.zhaw.ch.34179: Flags [.], 
> seq 3050:4430, ack 519, win 63722, length 1380
> 06:30:45.541718 IP srv-app-902.zhaw.ch.34179 > 195.65.218.66.http: Flags [.], 
> ack 4430, win 15180, length 0
> 06:30:45.543929 IP 195.65.218.66.http > srv-app-902.zhaw.ch.34179: Flags [.], 
> seq 4430:5810, ack 519, win 63722, length 1380
> 06:30:45.543937 IP srv-app-902.zhaw.ch.34179 > 195.65.218.66.http: Flags [.], 
> ack 5810, win 17940, length 0
> 06:30:45.544053 IP 195.65.218.66.http > srv-app-902.zhaw.ch.34179: Flags [.], 
> seq 5810:7190, ack 519, win 63722, length 1380
> 06:30:45.544062 IP srv-app-902.zhaw.ch.34179 > 195.65.218.66.http: Flags [.], 
> ack 7190, win 20700, length 0
> 06:30:45.544162 IP 195.65.218.66.http > srv-app-902.zhaw.ch.34179: Flags [.], 
> seq 7190:8570, ack 519, win 63722, length 1380
> 06:30:45.544170 IP srv-app-902.zhaw.ch.34179 > 195.65.218.66.http: Flags [.], 
> ack 8570, win 23460, length 0
> 06:30:45.544303 IP 195.65.218.66.http > srv-app-902.zhaw.ch.34179: Flags [.], 
> seq 8570:9950, ack 519, win 63722, length 1380
> 06:30:45.544308 IP srv-app-902.zhaw.ch.34179 > 195.65.218.66.http: Flags [.], 
> ack 9950, win 26220, length 0
> 06:30:45.544372 IP 195.65.218.66.http > srv-app-902.zhaw.ch.34179: Flags [.], 
> seq 9950:11330, ack 519, win 63722, length 1380
> 06:30:45.544381 IP srv-app-902.zhaw.ch.34179 > 195.65.218.66.http: Flags [.], 
> ack 11330, win 28980, length 0
> 06:30:45.544531 IP 195.65.218.66.http > srv-app-902.zhaw.ch.34179: Flags [.], 
> seq 11330:12710, ack 519, win 63722, length 1380
> 06:30:45.544541 IP srv-app-902.zhaw.ch.34179 > 195.65.218.66.http: Flags [.], 
> ack 12710, win 33120, length 0
> 06:30:45.546216 IP 195.65.218.66.http > srv-app-902.zhaw.ch.34179: Flags [.], 
> seq 12710:14090, ack 519, win 63722, length 1380
> 06:30:45.546226 IP srv-app-902.zhaw.ch.34179 > 195.65.218.66.http: Flags [.], 
> ack 14090, win 35880, length 0
> 06:30:45.546332 IP 195.65.218.66.http > srv-app-902.zhaw.ch.34179: Flags [.], 
> seq 14090:15470, ack 519, win 63722, length 1380
> 06:30:45.546341 IP srv-app-902.zhaw.ch.34179 > 195.65.218.66.http: Flags [.], 
> ack 15470, win 38640, length 0
> 06:30:45.546463 IP 195.65.218.66.http > srv-app-902.zhaw.ch.34179: Flags [.], 

Re: [squid-users] Squid CPU 100% infinite loop

2013-05-15 Thread George Herbert
Two questions -

One, what is in the logs from when this starts?

Two, I forget the *BSD equivalent, but can you run the appropriate strace /
truss / dtrace-style tool on the process during the lockups (ideally before,
through the start of, and after one)?
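
(On OpenBSD the usual pair is ktrace(1) and kdump(1); something along these
lines against the spinning process, with an illustrative PID:)

  # trace the busy squid process for a while, then stop and read the trace
  ktrace -p 12345
  sleep 30
  ktrace -cp 12345
  kdump | less        # reads ./ktrace.out by default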


George William Herbert
Sent from my iPhone

On May 15, 2013, at 9:35 AM, "loic.blot"  wrote:

> Hello,
> 
> I have problems with squid 3.3.4. Every 30 minutes (with 300 users), squid
> goes into an infinite loop and freezes all web connections (99.9% CPU used).
> 
> Squid is installed under OpenBSD 5.2
> 
> Here are my compile options:
> 
> Squid Cache: Version 3.3.4 configure options: '--enable-pf-transparent' 
> '--enable-follow-x-forwarded-for' '--with-large-files' '--enable-ssl' 
> '--disable-ipv6' '--enable-esi' '--enable-kill-parent-hack' '--disable-snmp' 
> '--with-pthreads' '--enable-ltdl-convenience' '--enable-auth-basic=none' 
> '--enable-auth-digest=none' '--enable-external-acl-helpers=none'
> 
> 
> Here is the configuration:
> 
> authenticate_ttl 2 hour
> authenticate_ip_ttl 1 hours
> include /etc/squid/squid.acl.conf
> include /etc/squid/squid.http_access.conf
> http_port 3128
> http_port 3129 intercept
> 
> hierarchy_stoplist cgi-bin ?
> cache_mem 6800 MB
> maximum_object_size_in_memory 10 MB
> minimum_object_size 2 KB
> maximum_object_size 6 MB
> access_log stdio:/var/log/squid/access.log
> cache_store_log none
> buffered_logs on
> cache_log /var/log/squid/cache.log
> coredump_dir /tmp
> url_rewrite_program /usr/local/bin/squidGuard -c 
> /etc/squidguard/squidguard.conf
> url_rewrite_children 192 startup=150 idle=10 concurrency=0
> # Add any of your own refresh_pattern entries above these.
> refresh_pattern ^ftp: 1440 20% 10080
> refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
> refresh_pattern . 0 20% 4320
> quick_abort_min 0 KB
> quick_abort_max 0 KB
> negative_ttl 0 seconds
> positive_dns_ttl 12 hours
> negative_dns_ttl 8 seconds
> connect_timeout 15 seconds
> request_timeout 45 seconds
> persistent_request_timeout 35 seconds
> shutdown_lifetime 3 seconds
> cache_mgr s...@lan.fr
> mail_from proxy...@lan.fr
> cache_effective_user _squid
> cache_effective_group _squid
> httpd_suppress_version_string on
> visible_hostname Proxy-PL
> unique_hostname prox1
> hostname_aliases dns.lan.fr
> digest_generation off
> icp_port 0
> allow_underscore on
> dns_retransmit_interval 1 seconds
> dns_timeout 2 seconds
> append_domain .lan.fr
> ipcache_size 10240
> ipcache_low 90
> ipcache_high 95
> fqdncache_size 10240
> client_db off
> maximum_single_addr_tries 2
> balance_on_multiple_ip on
> pipeline_prefetch on
> 
> 
> Any idea ?
> 
> Thanks for advance.