Re: [squid-users] moving cache.swap or rotating more frequently?

2011-06-02 Thread Tory M Blue
On Thu, Jun 2, 2011 at 12:45 AM, Amos Jeffries  wrote:
> On 02/06/11 18:27, Tory M Blue wrote:
>>
>> Afternoon
>>
>> Have a question: is there a negative to running -k rotate more than
>> once a day?
>
> All your active connections will pause while Squid deals with the logs.
>

Ahh, I wasn't aware of that, thanks. But it seems to be pretty quick, so I'm not
sure this is terrible :)

>> I've recently moved squid to a ramcache (it's glorious); however, my
>> cache.swap file continues to grow and brings me to an uncomfortable
>> 95%.
>
> By "ramcache" do you mean RAM cache (aka a large cache_mem storage area) or
> an in-memory pseudo disk?
>
> Tried using COSS? (in-memory pseudo disk with hardware backing).

In-memory pseudo-disk, /dev/ram0.
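
For reference, a rough sketch of what a RAM-backed cache_dir like this can look
like (mount point, sizes and L1/L2 values are illustrative, not taken from this thread):

# RAM-backed pseudo-disk for the cache_dir (tmpfs shown; a formatted /dev/ramN works similarly)
mount -t tmpfs -o size=12g tmpfs /cache-ram

# squid.conf: aufs cache_dir living entirely on the RAM-backed mount
cache_dir aufs /cache-ram 10000 16 256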

I tried COSS before and it was a really bad experience; I wonder if I
should try it with the pseudo-disk instead of on hard disk. (I set up COSS
before using standard fast SAS disks, not memory, and it was slower
than sin, really bad, 20-30 seconds for the first image etc.) Maybe
what you are saying is I did my test wrong and COSS should be used
with an in-memory "pseudo-disk", like what I'm running now with aufs..
hmmm

>>
>> If I run rotate it goes from 95% to 83% (9-12GB cache dir), so it seems I
>> need to run this once every 12 hours to stay in a good place, but is
>> there anything wrong with that? I don't see a problem; it seems the
>> rotate really just cleans up the swap file, and since it's all in RAM,
>> it's super fast.
>
> That should be fine even if it was on disk. High-throughput networks are
> known to do it as often as every 3 hours with only minor problems.
>  I've only heard of one network doing it hourly; the pause there was
> undesirable for the traffic being handled.
>  There is a nasty little feedback loop: the more often you *have* to do it, the
> worse the effects are when you do. It is economical, up to a point.
>
>
>>
>> Another option is to move the swap file to a physical disk; what type
>> of performance hit will my squid system take? Obviously it's just
>> looking up and reading the hash, so it should not cause any issues, but
>> I wondered. What is my best option: keep everything in RAM and run
>> rotate 2-3x a day, or is the penalty so small that pushing the swap
>> file to a physical disk is a better answer?
>
> Unsure. Try it and let us know.
>
> The swap.state is a journal with small async writes for each file operation
> (add/remove/update of the cache_dir), including those for temporary files
> moving through. You only get into problems if the write speed to it falls
> behind the I/O of the cache it's recording. (in general, on average etc..
> Peak bursts should not be a big problem)
>  Squid can recover from most swap.state problems. But it is best to avoid
> that kind of thing becoming normal.
>
> HTH


As always, thank you sir.

Tory
> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE9 or 3.1.12
>  Beta testers wanted for 3.2.0.8 and 3.1.12.2
>


[squid-users] moving cache.swap or rotating more frequently?

2011-06-01 Thread Tory M Blue
Afternoon

Have a question: is there a negative to running -k rotate more than
once a day?

I've recently moved squid to a ramcache (it's glorious); however, my
cache.swap file continues to grow and brings me to an uncomfortable
95%.

If I run rotate it goes from 95% to 83% (9-12GB cache dir), so it seems I
need to run this once every 12 hours to stay in a good place, but is
there anything wrong with that? I don't see a problem; it seems the
rotate really just cleans up the swap file, and since it's all in RAM,
it's super fast.
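
For reference, a minimal sketch of scheduling the rotation (the cron entry is
illustrative, not from this thread; squid -k rotate is the standard invocation):

# /etc/crontab: rotate logs and rewrite swap.state every 12 hours
0 */12 * * * root /usr/sbin/squid -k rotate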

Another option is to move the swap file to a physical disk; what type
of performance hit will my squid system take? Obviously it's just
looking up and reading the hash, so it should not cause any issues, but
I wondered. What is my best option: keep everything in RAM and run
rotate 2-3x a day, or is the penalty so small that pushing the swap
file to a physical disk is a better answer?
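
In squid 2.7 the location of that journal is controlled by the cache_swap_log
directive, so a hedged sketch of the "swap file on a physical disk" option would
be (path is illustrative):

# squid.conf: keep the cache_dir in RAM, but write swap.state to a real disk
# (%s is replaced with a name derived from the cache_dir path)
cache_swap_log /var/spool/squid-state/%s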

2.7STABLE9
Fedora12

Thanks
Tory


Re: [squid-users] Want to monitor squid, by pulling an image which is local.

2011-05-27 Thread Tory M Blue
On Fri, May 27, 2011 at 1:39 AM, Amos Jeffries  wrote:
> On 27/05/11 10:05, Tory M Blue wrote:
>>
>
> Seems to be some confusion.
>
>  Stale vs non-stale content is a matter for the website cache control HTTP
> headers. Squid *will* serve stale content according to RFC 2616.
>
>  Proxy up/down merely determines how much lag the client gets exposed to as
> unavailable peers get contacted for data.
>
> A peer is unavailable if either its box is down OR you manually dropped it
> out. It can't be available while shutdown, for instance.
>
> Now why does the Squid box being up/down matter independently of Squid
> itself?

Say Squid crashes: my ICMP host check will succeed, and with no other
check in place the squid server continues to be passed end-user
requests that it can't hope to fulfill. So end users get errors.
(Remember the F5 accepted the connection and passed it on to a now-defunct
server, so there is no real client-side retransmission or anything;
the user will get a 404 or other error.)

Another scenario: I have a monitor check that grabs an image from a
cache_peer through squid; it succeeds, squid is up and running, and the LB
leaves squid in the VIP. Now we take the cache_peers down, or they
fail (overloaded or something); the LB tries to pull a test
image through squid, it fails (cache_peer is not available), and the LB
pulls the squid cache(s) out of the VIP as well. I'm now completely
down, vs. having some time for the caches to serve data from their local
store. (I believe from my testing that if the cache_peer is not
available and Squid has the requested image, it will serve it up
regardless of age; it could be stale, can't be revalidated, etc.) Am I
mistaken?

>>

> Squid has a set of icons which it loads for FTP directory listings etc.
> They are configured in the /etc/squid/mime.conf configuration file.
>
> If you need a test image loaded by Squid the URL
> http://$host/squid-internal-static/icons/unknown.gif
> should come back from squid-2 with a question-mark icon.
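
As a rough illustration of turning that into an external health check (hostname
is a placeholder; the icon filename can differ between versions, e.g.
anthony-unknown.gif):

# expect a 200 for an object Squid serves itself, without touching any cache_peer
curl -s -o /dev/null -w '%{http_code}\n' \
  http://cache01.example.com/squid-internal-static/icons/unknown.gif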

You're a prince, but who is Anthony?

squid-internal-static/icons/anthony-unknown.gif :)

>>
>> monitorurl doesn't quite do it, since I'm looking for a test from a
>> 3rd party device.
>
> monitorurl takes any URL you want *through* the cache_peer link. If it comes
> back okay the peer is assumed to be accessible and ready to accept traffic.
>
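
For reference, a sketch of the monitorurl options on a cache_peer line
(squid-2.7; address, URL and timings are illustrative):

cache_peer 10.0.0.10 parent 80 0 no-query originserver monitorurl=http://10.0.0.10/status.txt monitorinterval=30 monitortimeout=5

When the monitor fetch fails the peer is treated as down and Squid stops sending
it traffic until a later fetch succeeds, which is exactly the trade-off against
serving stale objects being weighed here.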

Yes, but as you can see I'm trying to verify that the squid process is
working; I have other checks to verify that the cache_peers are up and
running (again, another LB monitor that checks for a status file,
up/down, for maintenance reasons).

But.. So what happens if the monitorurl fails?

-Thanks again sir, I owe you a beer/coffee/soda/some variety of flavored water

Tory


[squid-users] Want to monitor squid, by pulling an image which is local.

2011-05-26 Thread Tory M Blue
Hiya :)

I would like to have, via the primary squid instance, a method to grab a
local image, to verify that squid is up and running.

I have F5's to which I want to add a monitor: if the squid box goes down,
take it out of the VIP. However I can't have the monitor query the
squid box and have it pull an image from the cache_peer, since we at
times take those down for maintenance, and I'd much rather have squid
serve old content than not be available at all.

So I'm trying to figure out how I can put a static image on the squid
box itself, which allows me to pull that image through the standard
port and standard squid process.

monitorurl doesn't quite do it, since I'm looking for a test from a
3rd party device.

Thought about some acl's but not sure how to force it to grab the
local image/file.

Any ideas?
Tory

2.7STABLE9
now running from a ramcache! iowait begone!


Re: [squid-users] Anyway to tell if squid is actively using unlinkd?

2011-05-25 Thread Tory M Blue
On Wed, May 25, 2011 at 9:03 PM, Amos Jeffries  wrote:
> On Wed, 25 May 2011 20:27:05 -0700, Tory M Blue wrote:
>>
>> On Wed, May 25, 2011 at 8:01 PM, Amos Jeffries 
>> wrote:

>> backup, so I was leery. CPU cycles, sure, but the squid process shows:
>> PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>> 30766 squid     20   0 6284m 6.1g 3068 S 13.9 38.8  91:51.50 squid



> Hold up a minute. This diagram worries me. Squid-2 should not have any
> {squid} entries. Just helpers with their own names.

The diagram was from pstree.

The processes running, as seen via ps, are:

root      2334     1  0 May19 ?        00:00:00 squid -f /etc/squid/squid.conf
squid     2336  2334 11 May19 ?        17:54:41 (squid) -f /etc/squid/squid.conf
squid     2338  2336  0 May19 ?        00:00:00 (unlinkd)
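
A quick, hedged way to see whether unlinkd is actually being asked to do
anything (assumes squidclient can reach the local instance; add -h/-p options if
it is not on the defaults):

# unlink.requests counts files Squid has handed to unlinkd for deletion
squidclient mgr:counters | grep unlink.requests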


> Are they helpers running with the process name of "squid" instead of their
> own binary names like unlinkd?
>
> Or are they old squid which are still there after something like a crash?

No crashes, just controlled stops and starts or -k reconfigure. Nothing old.



>>
>> l1/l2 cache? Have not considered or looked into it. New concept for me :)
>>
>
> Sorry terminology mixup.
>
> L1 L2 values on the cache_dir line. Sub-directories within the dir
> structure.
> The URL hash is mapped to a 32-bit binary value which then gets split into
> FS path: path-root/L1/L2/filename
>
>  cache_dir type path-root size L1 L2
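
A sketch of where those values sit on the line (size and path are illustrative):

# 5 GB aufs cache_dir, 16 first-level and 256 second-level subdirectories
cache_dir aufs /cache1 5000 16 256

Larger L1/L2 values spread the same number of objects over more directories, so
each directory holds fewer files.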

Ahh yes, actually I was running at 16 256,
and recently moved it to 8 128, trying "again" to mitigate the number of files.

So did I move this in the wrong direction?
>
> Tuned to larger values to decrease file count in each directory. To avoid
> iowait on ext-like FS while the disk scans inodes for a particular filename
> in the L2 directory.

thanks again Amos

Tory


Re: [squid-users] Anyway to tell if squid is actively using unlinkd?

2011-05-25 Thread Tory M Blue
On Wed, May 25, 2011 at 8:01 PM, Amos Jeffries  wrote:

>> high and low water for the disk were at 85 and 95%, bumped it just to
>
> The watermark difference and total size determine how much disk gets erased
> when it overflows. Could be lots or not much. Very likely this is it. When
> the cache_dir reached high watermark it consumes a lot more disk IO and CPU
> erasing the cache until low watermark is reached.
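
For reference, the directives being discussed, shown at their squid defaults (a
sketch only):

# start gentle replacement at 90% of cache_dir size, evict aggressively above 95%
cache_swap_low  90
cache_swap_high 95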

Kind of what I thought but iowait didn't show enough to cause cpu
backup, so I was leery. CPU cycles, sure, but the squid process shows:
PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
30766 squid 20   0 6284m 6.1g 3068 S 13.9 38.8  91:51.50 squid

However, most of today the resident memory sat at 6.4GB, as did all my
squid servers. So I figured this is kind of the high end for memory
during busy times. Now it's dropped to 6.1GB, so I'm guessing we are
slowing down and thus squid is slowly shrinking its in-memory
footprint.


> Look at using COSS either way for high number of small files.

Yes, I tried this and it was a complete fail; load went through the
roof. This server should be complete overkill
for squid: 12 cores, 16GB mem, 12 15K SAS drives. Even my boxes that
are using all 8 15K SAS drives have this weird load thing (and let's
be honest, a 4-5 load is nothing, but the weird spike, not
knowing what is causing the load, and whether it's something I can fix, is
the real issue).

Have tried various raid levels, including 0. Using LVM across all 8
disks, to match squid 4K write/read block size so that the VG/LVM
would write across all 8 disks equally etc.

earlier Coss config attempts

  coss cache file system 
#cache_dir coss /cache1/coss 3 max-size=256000 block-size=8192 <--initial
#cache_dir coss /cache1/coss 3000 max-size=256000 block-size=2048 <--2nd test
#cache_dir coss /cache2/coss 3000 max-size=256000 block-size=2048 <-2nd
#cache_dir coss /cache3/coss 3000 max-size=256000 block-size=2048 <-2nd
#cache_dir coss /cache4/coss 3000 max-size=256000 block-size=2048 <-2nd
#cache_swap_log /cache/%s
  coss cache file system 

> Could also be dumping cache_mem content to the disk for some reason. Though
> why it would do that without traffic is a mystery.

Misunderstood, or mistyped. This was during a peak period, but seems
to happen without significant increase in traffic. Like load is 4-5
now, and we are beyond peak.

├─squid(30763)───squid(30766)─┬─unlinkd(30767)
│ ├─{squid}(30768)
│ ├─{squid}(30769)
│ ├─{squid}(30770)
│ ├─{squid}(30771)
│ ├─{squid}(30772)
│ ├─{squid}(30773)
│ ├─{squid}(30774)
│ ├─{squid}(30775)
│ ├─{squid}(30776)
│ ├─{squid}(30777)
│ ├─{squid}(30778)
│ ├─{squid}(30779)
│ ├─{squid}(30780)
│ ├─{squid}(30781)
│ ├─{squid}(30782)
│ ├─{squid}(30783)
│ ├─{squid}(30784)
│ ├─{squid}(30785)
│ ├─{squid}(30786)
│ ├─{squid}(30787)
│ ├─{squid}(30788)
│ ├─{squid}(30789)
│ ├─{squid}(30790)
│ └─{squid}(30791)


>> see but made really no difference.  My cache dirs are just 5gb each
>> (10gb total), on 150gb disks ya, but I don't really want to have to
>> weed through all of these files looking for an image.
>
> You wade through them manually?

No :)  I just didn't want the system, even with a hash, to have to deal
with a ton of files that are mostly stale within 10-30 minutes.


> Squid uses a hash, so its lookup time is good. You can tune L1/L2 for faster
> disk IO speeds with large caches.

l1/l2 cache? Have not considered or looked into it. New concept for me :)


>
> strace info during a spike should be useful to show what is going on.
>

Installed; reading to see how and what information it will give me.

> Amos

Thank you sir!

Tory


[squid-users] Anyway to tell if squid is actively using unlinkd?

2011-05-25 Thread Tory M Blue
I've got weird load behavior that crops up, and this box is only
running squid. I am close to what I set my cache_dirs to in terms of
size, so I'm wondering if that's it.

Just trying to figure out why my server will run at a load of 1-1.5
and the next thing it's up to 5-6, with no real increase in traffic.

Cache_mem is high, and this is a very robust, beefy server. So I'm trying to
figure it out.

Reverse Proxy config.
Squid 2.7STABLE9
Fedora 12 64bit
16gb Mem
4 raid 10 15K sas drives for OS
8 15K sas drives in jbod config..
 currently testing with 2 cache_dirs against 2 drives.

Very little iowait.

Resident memory for the squid process is around 6.4gb, cache_mem set
at 4gb.  We have tons of small images (3K), so I try to
keep all hot objects in cache.
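
A hedged sketch of the memory-cache knobs involved in keeping small hot objects
resident (values are illustrative, not this box's actual config):

cache_mem 4096 MB
# the default is 8 KB; raising it keeps these ~3 KB images eligible for cache_mem
maximum_object_size_in_memory 64 KB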

High and low water for the disk were at 85 and 95%; I bumped them just to
see, but it made really no difference.  My cache dirs are just 5GB each
(10GB total), on 150GB disks, ya, but I don't really want to have to
weed through all of these files looking for an image.

Just wondering where I can look and/or what type of information would
be helpful to see if Squid is in fact causing these spikes, vs.
something in the OS.

Thanks
Tory


Re: [squid-users] storeClientReadHeader: no URL!

2011-04-06 Thread Tory M Blue
On Tue, Apr 5, 2011 at 12:28 PM, Tory M Blue  wrote:
> On Tue, Apr 5, 2011 at 12:32 AM, Amos Jeffries  wrote:
>> On 05/04/11 17:09, Tory M Blue wrote:
>>>>
>>>> Problem is that this is happening in every cache server. Even if I
>>>> start clean I get these. What debug level/numbers can I use to track
>>>> this down? This happens constantly, so ya as you said something is
>>>> going on but it doesn't appear to be, someone mucking with the cache
>>>> or other odity, since it happens with new fresh squid instances and is
>>>> happening a lot..
>>>>
>>>> Thanks Amos
>>>>
>>>> Tory
>>>>
>>>
>>> hmmm
>>>
>>> 746665-2011/04/04 21:57:05| storeClientReadHeader: swapin MD5 mismatch
>>> 746729-2011/04/04 21:57:05|     1949E8301BB74F8CD2E16773A23B8D26
>>> 746784-2011/04/04 21:57:05|     3BD0B17768C3A6F6A85A4C4684A311C0
>>>
>>>
>>> That has to be the cause, but it makes no sense. Why would they be
>>> there with a fresh cache install, on 3 different servers..
>>
>>
>> What OS filesystem is this using?
>>  does it do any sort of fancy file de-duplication or compression underneath
>> Squid?
>>  is there any system process which might do that sort of thing?
>>
>> Can you try 2.7.STABLE9?
>>
>> If this is ufs/aufs/diskd, do you have the ufsdump tool installed with
>> Squid? that can dump the content of cache files for manual checking.
>

Nope, clean install of 2.7.STABLE9, and for the first hour or so no error
messages, but now I'm getting them:

2011/04/06 12:33:46| storeClientReadHeader: no URL!
2011/04/06 12:34:02| storeClientReadHeader: no URL!
2011/04/06 12:34:12| storeClientReadHeader: no URL!
2011/04/06 12:34:12| storeClientReadHeader: no URL!
2011/04/06 12:34:33| storeClientReadHeader: no URL!
2011/04/06 12:34:34| storeClientReadHeader: no URL!
2011/04/06 12:34:35| storeClientReadHeader: no URL!
2011/04/06 12:34:37| storeClientReadHeader: no URL!
2011/04/06 12:35:03| storeClientReadHeader: no URL!
2011/04/06 12:35:03| storeClientReadHeader: no URL!
2011/04/06 12:35:40| storeClientReadHeader: no URL!
2011/04/06 12:35:50| storeClientReadHeader: no URL!
2011/04/06 12:35:52| storeClientReadHeader: no URL!
2011/04/06 12:36:17| storeClientReadHeader: no URL!
2011/04/06 12:36:18| storeClientReadHeader: no URL!
2011/04/06 12:36:18| storeClientReadHeader: no URL!


Since this was an absolutely fresh install with a clean cache directory, this
is either a request coming in or something. We did make some changes
with headers, but since this was a clean install, I can't believe it's
the Apache changes from the app servers.

Very strange; I would like to identify the cause.

Tory


Re: [squid-users] storeClientReadHeader: no URL!

2011-04-05 Thread Tory M Blue
On Tue, Apr 5, 2011 at 12:32 AM, Amos Jeffries  wrote:
> On 05/04/11 17:09, Tory M Blue wrote:
>>>
>>> Problem is that this is happening in every cache server. Even if I
>>> start clean I get these. What debug level/numbers can I use to track
>>> this down? This happens constantly, so ya as you said something is
>>> going on but it doesn't appear to be, someone mucking with the cache
>>> or other odity, since it happens with new fresh squid instances and is
>>> happening a lot..
>>>
>>> Thanks Amos
>>>
>>> Tory
>>>
>>
>> hmmm
>>
>> 746665-2011/04/04 21:57:05| storeClientReadHeader: swapin MD5 mismatch
>> 746729-2011/04/04 21:57:05|     1949E8301BB74F8CD2E16773A23B8D26
>> 746784-2011/04/04 21:57:05|     3BD0B17768C3A6F6A85A4C4684A311C0
>>
>>
>> That has to be the cause, but it makes no sense. Why would they be
>> there with a fresh cache install, on 3 different servers..
>
>
> What OS filesystem is this using?
>  does it do any sort of fancy file de-duplication or compression underneath
> Squid?
>  is there any system process which might do that sort of thing?
>
> Can you try 2.7.STABLE9?
>
> If this is ufs/aufs/diskd, do you have the ufsdump tool installed with
> Squid? that can dump the content of cache files for manual checking.

Nothing special, all using aufs, some on reiserfs, others on ext4. I
have stable9 built so I can push that to a server and see if I see the
same thing.  I'll also grab ufsdump and see what I can do. I tried
diskd but it failed horribly, so back to aufs :)

Update shortly

Tory


Re: [squid-users] storeClientReadHeader: no URL!

2011-04-04 Thread Tory M Blue
> Problem is that this is happening in every cache server. Even if I
> start clean I get these. What debug level/numbers can I use to track
> this down? This happens constantly, so ya as you said something is
> going on but it doesn't appear to be, someone mucking with the cache
> or other odity, since it happens with new fresh squid instances and is
> happening a lot..
>
> Thanks Amos
>
> Tory
>

hmmm

746665-2011/04/04 21:57:05| storeClientReadHeader: swapin MD5 mismatch
746729-2011/04/04 21:57:05| 1949E8301BB74F8CD2E16773A23B8D26
746784-2011/04/04 21:57:05| 3BD0B17768C3A6F6A85A4C4684A311C0


That has to be the cause, but it makes no sense. Why would they be
there with a fresh cache install, on 3 different servers..

Thanks Amos

Tory


Re: [squid-users] storeClientReadHeader: no URL!

2011-04-04 Thread Tory M Blue
On Mon, Apr 4, 2011 at 4:10 PM, Amos Jeffries  wrote:
> On Mon, 4 Apr 2011 10:24:14 -0700, Tory M Blue wrote:
>>
>> What does " storeClientReadHeader: no URL!" mean, what is it telling me
>>
>> I'm seeing this quite a bit and can't find with normal searches what
>> this means, what is causing this..
>>
>> Thanks
>>
>> Tory
>>
>> 2011/04/04 10:18:45| storeClientReadHeader: no URL!
>> 2011/04/04 10:18:49| storeClientReadHeader: no URL!
>> 2011/04/04 10:18:49| storeClientReadHeader: no URL!
>> 2011/04/04 10:18:51|    /cache/0D/19/000D1950
>>
>> Squid Cache: Version 2.7.STABLE7
>> Fedora 12
>
> Somehow you have an object in your cache which has no URL.
>
> Considering that the URL is required in order to create the cache
> filename/number that is a sign that someone has been tampering with the
> cache or something really nasty has gone wrong.
>
> Squid will continue to work fine, treating these as corrupt and fetching new
> content for whatever URL they would otherwise have provided. It should also
> erase them to prevent future problems. So it is not something to be overly
> worried about, but may be worthwhile tracking down what is happening to the
> cache.
>
> Amos

Problem is that this is happening on every cache server. Even if I
start clean I get these. What debug level/numbers can I use to track
this down? This happens constantly, so ya, as you said, something is
going on, but it doesn't appear to be someone mucking with the cache
or some other oddity, since it happens with fresh new squid instances and is
happening a lot..

Thanks Amos

Tory


[squid-users] storeClientReadHeader: no URL!

2011-04-04 Thread Tory M Blue
What does "storeClientReadHeader: no URL!" mean; what is it telling me?

I'm seeing this quite a bit and can't find with normal searches what
it means or what is causing it..

Thanks

Tory

2011/04/04 10:18:45| storeClientReadHeader: no URL!
2011/04/04 10:18:49| storeClientReadHeader: no URL!
2011/04/04 10:18:49| storeClientReadHeader: no URL!
2011/04/04 10:18:49| storeClientReadHeader: no URL!
2011/04/04 10:18:51| storeClientReadHeader: no URL!
2011/04/04 10:18:51| storeAufsOpenDone: (2) No such file or directory
2011/04/04 10:18:51|    /cache/0D/19/000D1950

Squid Cache: Version 2.7.STABLE7
Fedora 12


Re: [squid-users] Illegal character in hostname '!host!'

2010-05-05 Thread Tory M Blue
On Tue, May 4, 2010 at 4:14 PM, Amos Jeffries  wrote:
> On Tue, 4 May 2010 11:17:18 -0700, Tory M Blue  wrote:
>> I'm seeing this error on occasion and trying to figure out how to
>> capture what is causing it.
>>
>> 2010/05/04 11:06:03| urlParse: Illegal character in hostname '!host!'
>>
>>
>> !host!.
>>
>> I've thought maybe it was actually in a URI but I've added access
>> logging with urlpath_regex -i \!host  and nothing is matching.
>
> urlpath_regex matches the path+filename+query portion of the URL.
>
> Try with url_regex.
>
>>
>> Is the !host! possibly internal to squid?
>
> No.
>
>>
>> How do I go about capturing and figuring this out?
>
> If the url_regex does not capture it debug_options 84,9 will display all
> the headers going through squid.
>
> debug_options 23,3 will show the higher level URL parse and what its being
> split into.
>
> Amos

Thanks Amos (catching the reply late).

Odd that the added debug is not functioning; I've tried

debug_options ALL,1 23,3 84,9


And I don't get more than the ALL,1 information

2010/05/05 09:08:05| urlParse: Illegal character in hostname '!host!'

And my access.log

acl HTTP-SUSPECT url_regex \!host

This works with a generated bogus URL:

1272997513.724  1 10.40.9.132 TCP_MISS/404 589 GET
http://cache01.gc.sv.domain.net/!host! -
FIRST_UP_PARENT/apps.domain.net text/html

So I'm capturing it if it's in the URL, but I'm still getting the illegal
character in cache.log and nothing in access.log. So I'm missing or
not capturing something.

Very odd that my debug does not seem to be working however :)

Tory


[squid-users] Illegal character in hostname '!host!'

2010-05-04 Thread Tory M Blue
I'm seeing this error on occasion and trying to figure out how to
capture what is causing it.

2010/05/04 11:06:03| urlParse: Illegal character in hostname '!host!'


!host!.

I thought maybe it was actually in a URI, but I've added access
logging with urlpath_regex -i \!host and nothing is matching.

Is the !host! possibly internal to squid?

How do I go about capturing and figuring this out?

Thanks
Tory


Re: [squid-users] squidaio_queue_request: WARNING - Queue congestion

2010-02-18 Thread Tory M Blue
On Thu, Feb 18, 2010 at 12:27 AM, Henrik Nordstrom
 wrote:
> ons 2010-02-17 klockan 21:40 -0800 skrev Tory M Blue:
>
>> And sorry "sleeping" was just my way of citing the box shows no load,
>> almost no IO 4-5 when I'm hitting it hard. I do not see this issue
>> with lesser threads, it's only when I turn up the juice. But with
>> turning up the connections per second I would expect to see some type
>> of load and I see none.
>
> Anything in /var/log/messages?
>
> The above problem description is almost an exact match for Linux
> iptables connectiontracking table limit being hit.
>
> Regards
> Henrik

Thanks Henrik, nothing in /var/log/messages or even dmesg.

And iptables?

Not running. No rules in place, service shut down.

That's not the culprit; with fewer than 12 children at the beginning of my run:

2010/02/17 10:29:51|   Completed Validation Procedure
2010/02/17 10:29:51|   Validated 948594 Entries
2010/02/17 10:29:51|   store_swap_size = 3794376k
2010/02/17 10:29:51| storeLateRelease: released 0 objects


2010/02/18 09:53:08| squidaio_queue_request: WARNING - Queue congestion
2010/02/18 09:53:12| squidaio_queue_request: WARNING - Queue congestion
2010/02/18 09:53:17| squidaio_queue_request: WARNING - Queue congestion

I even dropped my thread count, and as soon as my load test starts (with
maybe 10 children launched), I get the error:

2010/02/18 09:56:18| squidaio_queue_request: WARNING - Queue congestion
2010/02/18 09:56:28| squidaio_queue_request: WARNING - Queue congestion


Okay I've found some issues that I had not seen before,

Feb 18 18:37:06 kvm0 kernel: nf_conntrack: table full, dropping packet.

I would like to kick the netfilter team and the Fedora team in the shins.
The issue was that my squid boxes are virtual and the errors were being
logged on the domain box (not domain as in MS). So now I'm trying to
go through the system and remove all this garbage. This server does
not need to track the connections and/or log them. There does not seem
to be a simple way to disable it, just a lot of sysctl options, and I'm
unclear if these will do it entirely.

net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.all.arp_filter=0
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.default.arp_filter=0
net.ipv4.conf.lo.rp_filter=0
net.ipv4.conf.lo.arp_filter=0
net.ipv4.conf.eth0.rp_filter=0
net.ipv4.conf.eth0.arp_filter=0
net.ipv4.conf.eth1.rp_filter=0
net.ipv4.conf.eth1.arp_filter=0
net.ipv4.conf.br0.rp_filter=0
net.ipv4.conf.br0.arp_filter=0
net.ipv4.conf.br1.rp_filter=0
net.ipv4.conf.br1.arp_filter=0
net.ipv4.conf.vnet0.rp_filter=0
net.ipv4.conf.vnet0.arp_filter=0
net.ipv4.conf.vnet1.rp_filter=0
net.ipv4.conf.vnet1.arp_filter=0
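
A sketch of the usual conntrack workarounds (standard netfilter knobs, not taken
from this thread; values are illustrative):

# either raise the connection-tracking table size ...
sysctl -w net.netfilter.nf_conntrack_max=262144
# ... or skip tracking for the proxy traffic entirely via the raw table
iptables -t raw -A PREROUTING -p tcp --dport 80 -j NOTRACK
iptables -t raw -A OUTPUT -p tcp --sport 80 -j NOTRACK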

But I'll be quiet here for a bit, until I need assistance from the
squid community. I'm still seeing the queue congestion, but if it
actually doubles the threshold each time, I may get to a good place,
or it may be okay to ignore the messages. Obviously the queue congestion
was not causing the 500's; the dropping of packets by netfilter was.

Thanks

Tory


Re: [squid-users] squidaio_queue_request: WARNING - Queue congestion

2010-02-17 Thread Tory M Blue
2010/2/17 Henrik Nordström :
> tor 2010-02-18 klockan 14:51 +1300 skrev Amos Jeffries:
>
>> Henrik seems to have re-appeared and he has more disk IO experience then
>> me so may have an idea whet to look for ...   ?
>
> My first reaction is to run a small benchmark in parallel to squid
> performing a reasonably slow sequence of random reads over the disks and
> measuring the response time.. SSD disks in particular may have a habit
> of "randomly" blocking all requests for a while during/after a period of
> writes.. and OS write buffering and batching may even add to this queue
> latency problem.
>
> But I have not read up on the whole problem description.
>
> Also keep in mind that as Amos mentioned earlier these "Queue
> congestion" warnings come more frequently after start as there is a
> filter preventing the logs from getting overwhelmed with this warning.

Thanks Henrik and Amos

I'll do whatever testing is needed, as it's really odd, and I can't use
these servers/disks until I can get this problem identified.

And sorry, "sleeping" was just my way of saying the box shows no load and
almost no IO (4-5) when I'm hitting it hard. I do not see this issue
with fewer threads; it's only when I turn up the juice. But with
turning up the connections per second I would expect to see some type
of load, and I see none.

And with -X and -d I don't see anything but that error; is there another log
file that needs to be enabled besides cache.log and squid.out?

I'll try some various things to see what I can see. I know time dd
tests and bonnie++ and some FIO seem to do just fine. It's only squid
that seems to be having an issue with me or my setup :)

Tory


Re: [squid-users] squidaio_queue_request: WARNING - Queue congestion

2010-02-17 Thread Tory M Blue
On Tue, Feb 16, 2010 at 7:38 PM, Tory M Blue  wrote:
>>  /usr/local/squid/etc/squid/squid.conf ??
>>
>>>
>>> So it's really odd. Not getting anything to stdin/stdout
>>>
>>> But don't want to get too into the config piece when the big deal
>>> seems to be the congestion. Why more congestion with faster disks and
>>
>> I'm just thinking if there is actually another config being loaded, any
>> optimizations in the non-loaded one are useless.
>>
>> Amos
>
Ahh.

It appears squid, by design, loads a lot of default params before it
actually reads the config file. So the parse lines are repeated for
various items: the default and then your configs from squid.conf. I just
needed to look farther down the debug output. So that is a red
herring.

Need to figure out this queue congestion, given SSDs and a sleeping box.
I'm missing something.
Tory


Re: [squid-users] squidaio_queue_request: WARNING - Queue congestion

2010-02-16 Thread Tory M Blue
>  /usr/local/squid/etc/squid/squid.conf ??
>
>>
>> So it's really odd. Not getting anything to stdin/stdout
>>
>> But don't want to get too into the config piece when the big deal
>> seems to be the congestion. Why more congestion with faster disks and
>
> I'm just thinking if there is actually another config being loaded, any
> optimizations in the non-loaded one are useless.
>
> Amos

Nope, only /etc/squid/squid.conf and /etc/squid/squid.conf.default.

I've done a find on my system and no others.

I'm going to run debug on my SDA 2.7stable13 boxen to see if I see
something similar. But I still don't think the config is going to
cause the queue congestion on an idle box.

Tory


Re: [squid-users] squidaio_queue_request: WARNING - Queue congestion

2010-02-16 Thread Tory M Blue
On Tue, Feb 16, 2010 at 4:45 PM, Amos Jeffries  wrote:
> On Tue, 16 Feb 2010 16:24:22 -0800, Tory M Blue  wrote:
>>>> 2010/02/16 14:18:15| squidaio_queue_request: WARNING - Queue
> congestion
>>>> 2010/02/16 14:18:26| squidaio_queue_request: WARNING - Queue
> congestion
>>>>
>>>> What can I look for, if I don't believe it's IO wait or load (the box
>>>> is sleeping), what else can it be. I thought creating a new build with
>>>> 24 threads would help but it has not (I can rebuild with 10 threads vs
>>>> the default 18 (is that right?) I guess.
>>>
>>> Each of the warnings doubles the previous queue size, so
>>>
>>>
>>> I think its time we took this to the next level of debug.
>>> Please run a startup with the option -X and lets see what squid is
> really
>>> trying to do there.
>>>
>>> Amos
>>
>>
>> Okay not seeing anything exciting here. Nothing new with -X and/or
>> with both -X and -d
>>
>> 2010/02/16 16:17:51| squidaio_queue_request: WARNING - Queue congestion
>> 2010/02/16 16:17:59| squidaio_queue_request: WARNING - Queue congestion
>>
>> No additional information was provided other than what appears to be
>> something odd between my config and what squid is loading into it's
>> config.
>>
>> for example;
>> conf file :maximum_object_size 1024 KB
>> What it says it's parsing:  2010/02/16 16:12:07| parse_line:
>> maximum_object_size 4096 KB
>>
>> conf file: cache_mem 100 MB
>> What it says it's parsing: 2010/02/16 16:12:07| parse_line: cache_mem 8
> MB
>>
>> This may not be the answer, but it's odd for sure (
>>
>> Nothing more on the queue congestion, no idea why this is happening.
>
> To stdout/stderr or cache.log?  I think if that's to stdout/stderr it might be
> the defaults loading.
> There should be two in that case. The later one correct.
>
> Though it may be worth double checking for other locations of squid.conf.
>
> Amos

That's from cache.log and I only have one squid.conf in /etc/squid and
the only other squid.conf is the http configuration for cachemgr in
/etc/httpd/conf.d

So it's really odd. Not getting anything to stdout/stderr.

But I don't want to get too into the config piece when the big deal
seems to be the congestion. Why more congestion with faster disks and
almost no load? I'm willing to run tests, tweak, and rebuild with various
settings, whatever; I just would like to figure this out.

Tory


Re: [squid-users] squidaio_queue_request: WARNING - Queue congestion

2010-02-16 Thread Tory M Blue
>> 2010/02/16 14:18:15| squidaio_queue_request: WARNING - Queue congestion
>> 2010/02/16 14:18:26| squidaio_queue_request: WARNING - Queue congestion
>>
>> What can I look for, if I don't believe it's IO wait or load (the box
>> is sleeping), what else can it be. I thought creating a new build with
>> 24 threads would help but it has not (I can rebuild with 10 threads vs
>> the default 18 (is that right?) I guess.
>
> Each of the warnings doubles the previous queue size, so
>
>
> I think its time we took this to the next level of debug.
> Please run a startup with the option -X and lets see what squid is really
> trying to do there.
>
> Amos


Okay not seeing anything exciting here. Nothing new with -X and/or
with both -X and -d

2010/02/16 16:17:51| squidaio_queue_request: WARNING - Queue congestion
2010/02/16 16:17:59| squidaio_queue_request: WARNING - Queue congestion

No additional information was provided other than what appears to be
something odd between my config and what squid is loading into its
config.

For example:
conf file: maximum_object_size 1024 KB
What it says it's parsing:  2010/02/16 16:12:07| parse_line:
maximum_object_size 4096 KB

conf file: cache_mem 100 MB
What it says it's parsing: 2010/02/16 16:12:07| parse_line: cache_mem 8 MB

This may not be the answer, but it's odd for sure (

Nothing more on the queue congestion, no idea why this is happening.

2010/02/16 16:12:07| Memory pools are 'off'; limit: 0.00 MB
2010/02/16 16:12:07| cachemgrRegister: registered mem
2010/02/16 16:12:07| cbdataInit
2010/02/16 16:12:07| cachemgrRegister: registered cbdata
2010/02/16 16:12:07| cachemgrRegister: registered events
2010/02/16 16:12:07| cachemgrRegister: registered squidaio_counts
2010/02/16 16:12:07| cachemgrRegister: registered diskd
2010/02/16 16:12:07| diskd started
2010/02/16 16:12:07| authSchemeAdd: adding basic
2010/02/16 16:12:07| authSchemeAdd: adding digest
2010/02/16 16:12:07| authSchemeAdd: adding negotiate
2010/02/16 16:12:07| parse_line: authenticate_cache_garbage_interval 1 hour
2010/02/16 16:12:07| parse_line: authenticate_ttl 1 hour
2010/02/16 16:12:07| parse_line: authenticate_ip_ttl 0 seconds
2010/02/16 16:12:07| parse_line: authenticate_ip_shortcircuit_ttl 0 seconds
2010/02/16 16:12:07| parse_line: acl_uses_indirect_client on
2010/02/16 16:12:07| parse_line: delay_pool_uses_indirect_client on
2010/02/16 16:12:07| parse_line: log_uses_indirect_client on
2010/02/16 16:12:07| parse_line: ssl_unclean_shutdown off
2010/02/16 16:12:07| parse_line: sslproxy_version 1
2010/02/16 16:12:07| parse_line: zph_mode off
2010/02/16 16:12:07| parse_line: zph_local 0
2010/02/16 16:12:07| parse_line: zph_sibling 0
2010/02/16 16:12:07| parse_line: zph_parent 0
2010/02/16 16:12:07| parse_line: zph_option 136
2010/02/16 16:12:07| parse_line: dead_peer_timeout 10 seconds
2010/02/16 16:12:07| parse_line: cache_mem 8 MB
2010/02/16 16:12:07| parse_line: maximum_object_size_in_memory 8 KB
2010/02/16 16:12:07| parse_line: memory_replacement_policy lru
2010/02/16 16:12:07| parse_line: cache_replacement_policy lru
2010/02/16 16:12:07| parse_line: store_dir_select_algorithm least-load
2010/02/16 16:12:07| parse_line: max_open_disk_fds 0
2010/02/16 16:12:07| parse_line: minimum_object_size 0 KB
2010/02/16 16:12:07| parse_line: maximum_object_size 4096 KB
2010/02/16 16:12:07| parse_line: cache_swap_low 90
2010/02/16 16:12:07| parse_line: cache_swap_high 95
2010/02/16 16:12:07| parse_line: update_headers on
2010/02/16 16:12:07| parse_line: logfile_daemon /usr/lib/squid/logfile-daemon
2010/02/16 16:12:07| parse_line: cache_log /var/logs/cache.log
2010/02/16 16:12:07| parse_line: cache_store_log /var/logs/store.log
2010/02/16 16:12:07| parse_line: logfile_rotate 10
2010/02/16 16:12:07| parse_line: emulate_httpd_log off
2010/02/16 16:12:07| parse_line: log_ip_on_direct on
2010/02/16 16:12:07| parse_line: mime_table /etc/squid/mime.conf
2010/02/16 16:12:07| parse_line: log_mime_hdrs off
2010/02/16 16:12:07| parse_line: pid_filename /var/logs/squid.pid
2010/02/16 16:12:07| parse_line: debug_options ALL,1
2010/02/16 16:12:07| parse_line: log_fqdn off
2010/02/16 16:12:07| parse_line: client_netmask 255.255.255.255
2010/02/16 16:12:07| parse_line: strip_query_terms on
2010/02/16 16:12:07| parse_line: buffered_logs off
2010/02/16 16:12:07| parse_line: netdb_filename /var/logs/netdb.state
2010/02/16 16:12:07| parse_line: ftp_user Squid@
2010/02/16 16:12:07| parse_line: ftp_list_width 32
2010/02/16 16:12:07| parse_line: ftp_passive on
2010/02/16 16:12:07| parse_line: ftp_sanitycheck on
2010/02/16 16:12:07| parse_line: ftp_telnet_protocol on
2010/02/16 16:12:07| parse_line: diskd_program /usr/lib/squid/diskd-daemon
2010/02/16 16:12:07| parse_line: unlinkd_program /usr/lib/squid/unlinkd
2010/02/16 16:12:07| parse_line: storeurl_rewrite_children 5
2010/02/16 16:12:07| parse_line: storeurl_rewrite_concurrency 0
2010/02/16 16:12:07| parse_line: url_rewrite_children 5
2010/02/16 16:12:07| parse_line: url_rewrite_concurre

[squid-users] squidaio_queue_request: WARNING - Queue congestion

2010-02-16 Thread Tory M Blue
I'm starting to lose my mind here. New hardware test bed including a
striped set of SSD's

Same hardware, controller etc as my other squid servers, just added
SSD's for testing. I've used default threads and I've built with 24
threads. And what's blowing my mind is I get the error immediately
upon startup of my cache server (what?) and when I start banging on it
with over 75 connections p/sec..

The issue with the "well, if you only see a few, ignore it" advice is that I
actually get 500 errors when this happens. So something is going on
and I'm not sure what.

No Load
No I/O wait.

Fedora 12
Squid2.7Stable7
Dual Core
6gigs of ram
Striped SSD's

And did I mention no wait and zero load when this happens?

"configure options:  '--host=i686-pc-linux-gnu'
'--build=i686-pc-linux-gnu' '--target=i386-redhat-linux'
'--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr'
'--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc'
'--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib'
'--libexecdir=/usr/libexec' '--sharedstatedir=/var/lib'
'--mandir=/usr/share/man' '--infodir=/usr/share/info'
'--exec_prefix=/usr' '--libexecdir=/usr/lib/squid'
'--localstatedir=/var' '--datadir=/usr/share/squid'
'--sysconfdir=/etc/squid' '--disable-dependency-tracking'
'--enable-arp-acl' '--enable-follow-x-forwarded-for'
'--enable-auth=basic,digest,negotiate'
'--enable-basic-auth-helpers=NCSA,PAM,getpwnam,SASL'
'--enable-digest-auth-helpers=password'
'--enable-negotiate-auth-helpers=squid_kerb_auth'
'--enable-external-acl-helpers=ip_user,session,unix_group'
'--enable-cache-digests' '--enable-cachemgr-hostname=localhost'
'--enable-delay-pools' '--enable-epoll' '--enable-ident-lookups'
'--with-large-files' '--enable-linux-netfilter' '--enable-referer-log'
'--enable-removal-policies=heap,lru' '--enable-snmp' '--enable-ssl'
'--enable-storeio=aufs,diskd,ufs' '--enable-useragent-log'
'--enable-wccpv2' '--with-aio' '--with-maxfd=16384' '--with-dl'
'--with-openssl' '--with-pthreads' '--with-aufs-threads=24'
'build_alias=i686-pc-linux-gnu' 'host_alias=i686-pc-linux-gnu'
'target_alias=i386-redhat-linux' 'CFLAGS=-fPIE -Os -g -pipe
-fsigned-char -O2 -g -march=i386 -mtune=i686' 'LDFLAGS=-pie'"


2010/02/16 14:15:49| Starting Squid Cache version 2.7.STABLE7 for
i686-pc-linux-gnu...
2010/02/16 14:15:49| Process ID 19222
2010/02/16 14:15:49| With 4096 file descriptors available
2010/02/16 14:15:49| Using epoll for the IO loop
2010/02/16 14:15:49| Performing DNS Tests...
2010/02/16 14:15:49| Successful DNS name lookup tests...
2010/02/16 14:15:49| DNS Socket created at 0.0.0.0, port 52964, FD 6

2010/02/16 14:15:49| User-Agent logging is disabled.
2010/02/16 14:15:49| Referer logging is disabled.
2010/02/16 14:15:49| Unlinkd pipe opened on FD 10
2010/02/16 14:15:49| Swap maxSize 32768000 + 102400 KB, estimated
2528492 objects
2010/02/16 14:15:49| Target number of buckets: 126424
2010/02/16 14:15:49| Using 131072 Store buckets
2010/02/16 14:15:49| Max Mem  size: 102400 KB
2010/02/16 14:15:49| Max Swap size: 32768000 KB
2010/02/16 14:15:49| Local cache digest enabled; rebuild/rewrite every
3600/3600 sec
2010/02/16 14:15:49| Store logging disabled
2010/02/16 14:15:49| Rebuilding storage in /cache (CLEAN)
2010/02/16 14:15:49| Using Least Load store dir selection
2010/02/16 14:15:49| Set Current Directory to /var/spool/squid
2010/02/16 14:15:49| Loaded Icons.
2010/02/16 14:15:50| Accepting accelerated HTTP connections at
0.0.0.0, port 80, FD 13.
2010/02/16 14:15:50| Accepting ICP messages at 0.0.0.0, port 3130, FD 14.
2010/02/16 14:15:50| Accepting SNMP messages on port 3401, FD 15.
2010/02/16 14:15:50| WCCP Disabled.
2010/02/16 14:15:50| Ready to serve requests.
2010/02/16 14:15:50| Configuring host,domain.com Parent host.domain.com/80/0
2010/02/16 14:15:50| Store rebuilding is  0.4% complete
2010/02/16 14:16:05| Store rebuilding is 66.1% complete
2010/02/16 14:16:12| Done reading /cache swaplog (948540 entries)
2010/02/16 14:16:12| Finished rebuilding storage from disk.
2010/02/16 14:16:12|   948540 Entries scanned
2010/02/16 14:16:12|        0 Invalid entries.
2010/02/16 14:16:12|        0 With invalid flags.
2010/02/16 14:16:12|   948540 Objects loaded.
2010/02/16 14:16:12|        0 Objects expired.
2010/02/16 14:16:12|        0 Objects cancelled.
2010/02/16 14:16:12|        0 Duplicate URLs purged.
2010/02/16 14:16:12|        0 Swapfile clashes avoided.
2010/02/16 14:16:12|   Took 23.0 seconds (41316.8 objects/sec).
2010/02/16 14:16:12| Beginning Validation Procedure
2010/02/16 14:16:13|   262144 Entries Validated so far.
2010/02/16 14:16:13|   524288 Entries Validated so far.
2010/02/16 14:16:13|   786432 Entries Validated so far.
2010/02/16 14:16:13|   Completed Validation Procedure
2010/02/16 14:16:13|   Validated 948540 Entries
2010/02/16 14:16:13|   store_swap_size = 3794160k
2010/02/16 14:16:14| storeLateRelease: released 0 objects
2010/02/16 14:18:00| squidaio_queue_request: WARNING - Queue congestion
2010/02/16 14:18:04| squ

[squid-users] squidaio_queue_request: WARNING - Queue congestion

2010-02-12 Thread Tory M Blue
Squid 2.7Stable7
F12
AUFS on a ext3 FS
6gigs ram
dual proc
cache_dir aufs /cache 32000 16 256

FilesystemSize  Used Avail Use% Mounted on
/dev/vda2  49G  3.8G   42G   9% /cache

configure options:  '--host=i686-pc-linux-gnu'
'--build=i686-pc-linux-gnu' '--target=i386-redhat-linux'
'--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr'
'--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc'
'--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib'
'--libexecdir=/usr/libexec' '--sharedstatedir=/var/lib'
'--mandir=/usr/share/man' '--infodir=/usr/share/info'
'--exec_prefix=/usr' '--libexecdir=/usr/lib/squid'
'--localstatedir=/var' '--datadir=/usr/share/squid'
'--sysconfdir=/etc/squid' '--with-logdir=$(localstatedir)/log/squid'
'--with-pidfile=$(localstatedir)/run/squid.pid'
'--disable-dependency-tracking' '--enable-arp-acl'
'--enable-follow-x-forwarded-for'
'--enable-auth=basic,digest,ntlm,negotiate'
'--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,getpwnam,multi-domain-NTLM,SASL,squid_radius_auth'
'--enable-ntlm-auth-helpers=no_check,fakeauth'
'--enable-digest-auth-helpers=password,ldap'
'--enable-negotiate-auth-helpers=squid_kerb_auth'
'--enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group'
'--enable-cache-digests' '--enable-cachemgr-hostname=localhost'
'--enable-delay-pools' '--enable-epoll' '--enable-icap-client'
'--enable-ident-lookups' '--with-large-files'
'--enable-linux-netfilter' '--enable-referer-log'
'--enable-removal-policies=heap,lru' '--enable-snmp' '--enable-ssl'
'--enable-storeio=aufs,diskd,ufs' '--enable-useragent-log'
'--enable-wccpv2' '--enable-esi' '--with-aio'
'--with-default-user=squid' '--with-filedescriptors=16384' '--with-dl'
'--with-openssl' '--with-pthreads' 'build_alias=i686-pc-linux-gnu'
'host_alias=i686-pc-linux-gnu' 'target_alias=i386-redhat-linux'
'CFLAGS=-fPIE -Os -g -pipe -fsigned-char -O2 -g -march=i386
-mtune=i686' 'LDFLAGS=-pie'

No load to speak of, very little iowait. Threads were configured as the default.

This is running on a striped pair of SSD's and is only a test script
that (ya it's hitting it a bit hard), but nothing that squid nor my
hardware should have an issue with.

I've searched and really there does not appear to be a solid answer,
except running out of cpu or running out of iops, neither "appears" to
be the case here. Figured if it was a thread issue, I would see a
bottleneck on my server? (ya?). Also the if it only happens a couple
of times ignore it. This is just some testing and I believe this
congestion is possibly causing the 500 errors I'm seeing while running
my script.

Any pointers, where to look etc? (2.7stable6 on fc6/xen kernel) had no
such issues (yes, the SSD's are a new variable (but otherwise
identical hardware).

Thanks
Tory

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          13.13    0.00   40.66    4.80    0.00   41.41

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
vda            1539.50     11516.00      6604.00      23032      13208

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          13.60    0.00   38.29   11.08    0.00   37.03

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
vda            1385.07     11080.60         0.00      22272          0


[...@cache01 ~]$ free
 total   used   free sharedbuffers cached
Mem:   5039312 4217684617544  0  50372 183900
-/+ buffers/cache: 1874964851816
Swap:  7143416  07143416

Totals since cache startup:
sample_time = 1266055655.813016 (Sat, 13 Feb 2010 10:07:35 GMT)
client_http.requests = 810672
client_http.hits = 683067
client_http.errors = 0
client_http.kbytes_in = 171682
client_http.kbytes_out = 2145472
client_http.hit_kbytes_out = 1809060
server.all.requests = 127606
server.all.errors = 0
server.all.kbytes_in = 321960
server.all.kbytes_out = 38104
server.http.requests = 127606
server.http.errors = 0
server.http.kbytes_in = 321960
server.http.kbytes_out = 38104
server.ftp.requests = 0
server.ftp.errors = 0
server.ftp.kbytes_in = 0
server.ftp.kbytes_out = 0
server.other.requests = 0
server.other.errors = 0
server.other.kbytes_in = 0
server.other.kbytes_out = 0
icp.pkts_sent = 0
icp.pkts_recv = 0
icp.queries_sent = 0
icp.replies_sent = 0
icp.queries_recv = 0
icp.replies_recv = 0
icp.query_timeouts = 0
icp.replies_queued = 0
icp.kbytes_sent = 0
icp.kbytes_recv = 0
icp.q_kbytes_sent = 0
icp.r_kbytes_sent = 0
icp.q_kbytes_recv = 0
icp.r_kbytes_recv = 0
icp.times_used = 0
cd.times_used = 0
cd.msgs_sent = 0
cd.msgs_recv = 0
cd.memory = 0
cd.local_memory = 487
cd.kbytes_sent = 0
cd.kbytes_recv = 0
unlink.requests = 0
page_faults = 1
select_loops = 467112
cpu_time = 681.173445
wall_time = -40015.496720
swap.outs = 126078
swap.ins = 1366134
swap.files_cleaned = 0
aborted_requests = 0


[squid-users] squid 2.7 stable 6 increased load from 2.6

2009-04-27 Thread Tory M Blue
Greetings,

I just recently upgraded (or am in the midst of testing) and I note that
3 servers that I upgraded from 2.6 stable 13 to 2.7 stable 6 are
running at 3-4x the load of the identical servers running the 2.6 stable
variety.

I was wondering what would cause this?

Should I stick with 2.6 stable 13 and be happy? I was looking forward
to some of the additional http 1.1 (unsupported support) :)

Very weird, ideas?

Tory


Re: [squid-users] SSL Accel - Reverse Proxy

2008-05-05 Thread Tory M Blue
On Mon, May 5, 2008 at 9:23 AM, Tory M Blue <[EMAIL PROTECTED]> wrote:
>
> On Fri, May 2, 2008 at 6:17 PM, Henrik Nordstrom
>  <[EMAIL PROTECTED]> wrote:
>  > On ons, 2008-04-30 at 11:10 -0700, Tory M Blue wrote:
>  >  > I was wondering if there was a way for Squid to pass on some basic
>  >  > information to the server citing that the original request was Secure,
>  >  > so that the backend server will respond correctly.
>  >
>  >  Yes. See the front-end-https cache_peer option.
>
>  Thanks Henrik
>
>  Either I have this implemented wrong (more likely).  Or the directive
>  is not quite right.


Found it. It's not quite clear in the documentation, but I read the
description again: "If set to auto", so there are actually options,
so I set it to =auto and that works!
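
For reference, a sketch of the directive as it ended up working (one of the peer
lines from the config below, with the option spelled out):

# "auto" adds the Front-End-Https: On header only when the request really arrived over SSL
cache_peer 10.40.5.229 parent 80 0 no-query originserver front-end-https=auto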

Thanks

Tory


Re: [squid-users] SSL Accel - Reverse Proxy

2008-05-05 Thread Tory M Blue
On Fri, May 2, 2008 at 6:17 PM, Henrik Nordstrom
<[EMAIL PROTECTED]> wrote:
> On ons, 2008-04-30 at 11:10 -0700, Tory M Blue wrote:
>  > I was wondering if there was a way for Squid to pass on some basic
>  > information to the server citing that the original request was Secure,
>  > so that the backend server will respond correctly.
>
>  Yes. See the front-end-https cache_peer option.

Thanks Henrik

Either I have this implemented wrong (more likely), or the directive
is not quite right.

I seem to see the header Front-End-Https: On whether I hit the page
via port 80 or port 443; this in itself tells me that I've
misunderstood and botched the config, or this is not quite working
correctly (betting against me, vs. the feature)..

Here is the pertinent configuration. As I stated above, if I hit any of
the domains on port 80 (http://blah) or on port 443 (https://blah), I
see the header, which I should not see if I hit the page on port 80.

Thanks

Tory

http_port 80 accel vhost
http_port 199  accel vhost
http_port 360  accel vhost
cache_peer 10.40.5.229 parent 80 0 no-query originserver front-end-https
cache_peer 10.40.5.152 parent 80 0 no-query originserver front-end-https
cache_peer 10.40.5.231 parent 80 0 no-query originserver front-end-https
cache_peer_domain 10.40.5.229 !submit-dev.eng.domain.com
cache_peer_domain 10.40.5.229 !admanager-dev.eng.domain.com
cache_peer_domain 10.40.5.152 !apps-dev.eng.domain.com
cache_peer_domain 10.40.5.152 !dev-cache.eng.domain.com
cache_peer_domain 10.40.5.152 !devcache01.eng.domain.com
cache_peer_domain 10.40.5.152 !admanager-dev.eng.domain.com
cache_peer_domain 10.40.5.231 !submit-dev.eng.domain.com
cache_peer_domain 10.40.5.231 !apps-dev.eng.domain.com
cache_peer_domain 10.40.5.231 !dev-cache.eng.domain.com
cache_peer_domain 10.40.5.231 !devcache01.eng.domain.com

##SSL DIRECTIVES##
https_port 443 accel cert=/etc/squid/wildcard.eng.domain.com.pem vhost
https_port 444 accel cert=/etc/squid/wildcard.domain.com.pem vhost


Re: [squid-users] SSL Accel - Reverse Proxy

2008-05-02 Thread Tory M Blue
On Fri, May 2, 2008 at 5:25 AM, Amos Jeffries <[EMAIL PROTECTED]> wrote:

>
>  You made the situation clear. I mentioned the only reasonably easy
> solution.
>  If you didn't understand me, Keith M Richad provided you with the exact
> squid.conf settings I was talking about before.


Obviously I have not, and I apologize.

I want Squid to handle both HTTP/HTTPS (easy, implemented and working for months).

I want Squid to talk to the backend server via HTTP, period. (EASY)

I want Squid to handle the HTTPS encryption/decryption and talk to
the origin server via HTTP. (EASY)

I want Squid to somehow inform the origin that the original request
was in fact HTTPS. (HOW is the question at hand.)

I can do SSL and pass it and have squid handle the SSL without issue;
the issue is giving the origin insight into the originating
protocol when squid accepts the client connection on 443 and sends the
request to the origin on port 80.

The issue is that I don't want my backend server to have to deal with
SSL at all. But I have some applications that require the request to be
https (secured pages). So if Squid could pass something in the header
indicating that the original request was made via https, then my code
could take that information and know that sending secured data via a
non-secure method is okay, since Squid will encrypt the data and send it
to the client before that data leaves my network.

I had similar questions with squid sending the original http version
information in a header, which it does. Now I'm wondering if squid
keeps track of the original requesting protocol, so that my
application can look at the header and decide if the original request
came in as https (Since the origin at this point believes not, since
squid is talking to the origin via http and talking to the client via
https.)

Sorry that I seem to be making this complicated, it totally makes
sense in my head (: )

Tory

I'm not sure how to be clearer and would be happy to talk directly
with someone: email, AIM, or phone.


Re: [squid-users] SSL Accel - Reverse Proxy

2008-05-01 Thread Tory M Blue
On Thu, May 1, 2008 at 2:02 AM, Amos Jeffries <[EMAIL PROTECTED]> wrote:
>
>  You could make a second peer connection using HTTPS between squid and the
> back-end server and ACL the traffic so that only requests coming in via SSL
> are sent over that link. Leaving non-HTTPS incoming going over the old HTTP
> link fro whatever the server want to do.
>
Thanks Amos

Not sure that I made myself clear or that I understand your suggestion.

I need to allow squid to connect and talk to my servers via http
(only); I want squid to handle the SSL termination (SSL acceleration,
taking the overhead off the back-end servers).

However since squid talks to the back end servers via http (and not
https on pages that require https), I need to somehow tell the server
that the original connection, or the connection that will go back to
the client will be https, even though the server is responding via
http..

I handle secure and non-secure fine now. The same website, for example
apps.domain.com, listens on both 443 and 80, so squid can handle
secure and non-secure. There is code on apps.domain.com that checks
the incoming protocol to verify that it's secure; if not, it sends a
secure URL for the client to come back in on.  As you can see, if I
allow Squid to handle the SSL portion, the back-end server has no way
of knowing (the piece I'm missing) whether the actual client connection is
secure or not. (Hard to explain, possibly.)

Client --> apps.domain.com (443) --> backend server (80)
backend server (80) --> apps.domain.com (443) --> Client (443)

I'm wondering if Squid can tell the peer (server) that the original
request was in fact secure, so that we can tell the application, feel
free to respond with the secure data via non secure port, because
squid will encrypt the server response and get back to the client via
https

Sorry kind of long winded.
Tory


[squid-users] SSL Accel - Reverse Proxy

2008-04-30 Thread Tory M Blue
I was wondering if there was a way for Squid to pass on some basic
information to the server citing that the original request was Secure,
so that the backend server will respond correctly.

Right now Squid takes and handles the SSL and passes back to the server
via standard http, and the application check causes "basically a
loop", because it wants to see the client using SSL and not standard
HTTP..

This is only an issue with same hostname/headers that have access on
both 80/443 as the application needs to know that someone came in
secured and that the Squid box will respond in kind.

Am I missing something basic? I'm not seeing it in the information
that Squid currently passes. Otherwise the application could key off
the originating dest port or similar.

Thanks
Tory


[squid-users] Vary the cache objects based on the incoming http version, or buckets for different browsers

2008-01-22 Thread Tory M Blue
Okay

So, still working through some HTTP 1.1 issues, as we keep finding more
"well, that won't work" cases..

Due to various "bugs" in IE 4-6 we have to return 1.1 or they get a
script error (it's a .js file).
"tested both on ie7 and ie6 and in both cases with 1.1 enabled the
page is fine. Once 1.1 is disabled, the tell-tale script error
appears."

With our 1.0 vs 1.1 script change, squid is caching a version of the
file that is gzipped, so if a 1.0 client asks, squid says "oh ya, I
have it, here you go" (umm, it's compressed, and thus the end user
receives an error).

Is there a way to have Squid vary the cache objects based on the
incoming client HTTP protocol version? So that I can store 2 copies of
any given file, one compressed and one not, and make sure Squid hands
out the right version? Ideally we need different buckets; per-browser-
version buckets would be awesome, given how often broken software is
released.

It appears IE versions 4-6 on XP (we think) have a bug where the "use
HTTP/1.1" box gets unchecked magically; not an end-user selection, but
a de facto config from MS. In fact we see some from IE 7 (not sure a
standard user would go in and uncheck that for any real reason, unless
I guess their IT department has a fubar proxy that requires it).

Ideas? Yes, I know about 2.7, but I'm trying to get an idea of how
stable folks think it is, as it's been noted to have more HTTP/1.1
functionality.

Thanks
Tory


Re: [squid-users] What exactly is "Do not set REUSEADDR on port."

2008-01-22 Thread Tory M Blue
On Jan 19, 2008 8:22 PM, Andrew Miehs <[EMAIL PROTECTED]> wrote:
> What exactly was the three second delay? and what did F5 do to fix this?
>
> Thanks
>
> Andrew

Sorry for the delay, Andrew. I believe I posted this when I first had
the issue, but I'm reposting so that it can be logged.

.42 = Squid
.153 = apache web server

"FROM F5"
Looking through cache01-new, every instance of a 3 second delay that I
find, I see where 10.13.200.42 sends the SYN, which is sent through
the BIG-IP to 10.13.200.153.  In each instance, I find that
10.13.200.153, rather than replying to the SYN with a SYN-ACK, simply
sends an ACK.  This, being incorrect, gets a RST response from
10.13.200.42.  After all, 10.13.200.42 was expecting SYN-ACK, not ACK.
 After a three second delay, 10.13.200.42 initiates the handshake
again, and this time 10.13.200.153 sends the SYN-ACK response, which
allows the handshake to carry on.  I had initially thought that this
was the fault of 10.13.200.42, but looking over the w04-new tcpdump,
and matching up the delay packets, it's very clear that there is no
SYN-ACK, resulting in this 3 second delay.

As to why 10.13.200.153 would respond to a SYN with just an ACK, I
believe that this may be due to a port reuse issue.  The server
believes that port to still be in use, while the BIG-IP believes it to
be closed.  To get around this, I would suggest the use of
incrementing autoport, where the BIG-IP does not try to use the same
ephemeral port that the client uses, but rather makes use of some
other ephemeral port.  To set this, from the command line issue the
command:

b internal use_incrementing_autoport 1

Which will enable this immediately, and does not require a reboot at all.

"..

Again, this fixed the 3 second delay, but I still have a ton of
connections sitting in TIME_WAIT on the apache servers. I've tried to
use a sysctl recycle option, but squid via the LB seems to have some
issues with it (again, I think it's in the LB).
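
For reference, the sysctls I mean are roughly these (tcp_tw_recycle in
particular is known to misbehave behind NAT/load balancers, which may
well be what I'm running into):

sysctl -w net.ipv4.tcp_tw_reuse=1
sysctl -w net.ipv4.tcp_tw_recycle=1
sysctl -w net.ipv4.tcp_fin_timeout=30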

Tory


Re: [squid-users] Squid http1.1 vs http1.0 (probably again)

2008-01-19 Thread Tory M Blue
On Jan 19, 2008 2:06 PM, Henrik Nordström <[EMAIL PROTECTED]> wrote:
>
> 2.7 has the support you need for this, assuming you speak of using Squid
> as an accelerator/frontend server..
>
> Regards
> Henrik

How so? As you've read (and provided further information on) my gzip
workaround (thanks), I'm wondering how 2.7 is going to help with this.
I'm currently rolling out 2.6.STABLE17 and am interested in what 2.7 is
going to provide and when. BTW the 1.1 vs 1.0 workaround appears to
fix my issue so I don't have to leave (I'm happy). I like squid, but I
would rather things like this work without workarounds.

Thanks again Henrik

Tory


[squid-users] Found a work around for my gzip issue

2008-01-18 Thread Tory M Blue
I hadn't noticed that Squid does a nice thing, all things considered.

When the protocol is sent as HTTP 1.1 to the Squid cache, it rewrites
the request as HTTP 1.0 (changing the SERVER_PROTOCOL header), but
sticks the original client protocol version into the "Via" header.

So for a snippet of our code, I can inspect the Via header, and if the
client has come in via 1.1 I can have the server gzip the data and
send it on. This has to be done in secondary code, as obviously apache
and everything else is going to reply per the http version it receives,
and with squid that's 1.0.
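
To make the check concrete, the request squid forwards to the origin
ends up looking something like this (hostname and squid version here
are made up; the leading number on the Via line is what we key off):

GET /some/file.js HTTP/1.0
Host: apps.domain.com
Via: 1.1 cache01.example.com (squid/2.6.STABLE17)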

I was not aware of this and didn't catch it until today.

-Tory


Re: [squid-users] Reverse Proxy Cache - implementing gzip

2008-01-18 Thread Tory M Blue
On Jan 18, 2008 12:46 AM, Ash Damle <[EMAIL PROTECTED]> wrote:
> Hello. Any pointers on how to get Squid to do gzip compression and
> then e-tags when used as a reverse proxy cache.
>
> Thanks
>
> -Ash

It has to do with HTTP 1.1 vs gzip. Since Squid passes an http 1.0
version to your origin servers, they are going to respond in kind, and
thus the origin is not going to gzip the content. (If squid preserved
the 1.0 vs 1.1 version, the origin server could do what it wanted; but
I believe that is the RFC compliance that squid seems hard pressed to
conform with.)

How much would it cost to get Squid to preserve the http version so
that our servers could provide gzip functionality?

Tory


[squid-users] Squid http1.1 vs http1.0 (probably again)

2008-01-14 Thread Tory M Blue
So I've discovered that much of my connection stacking is due to Squid
responding as 1.0 for everything; this has also caused some issues in
my app.

So before I abandon squid, since we must use gzip encoding and various
other 1.1-specific features, I'm wondering if there is a way to
capture and pass on the client's http version through to my backend
server.

Since Squid is responding to the backend as 1.0, regardless of the
client query, our servers can't use many of the nice 1.1 features that
we would like to.

So is there a way to capture and have squid rewrite the request so
that my server knows the client made the request using http 1.1, and
can respond in kind, regardless of how $#%$ squid is responding as 1.0?
(Sorry, a new issue that wasn't uncovered until today, and it really
sucks.)

I've looked all over, and the 1.1 vs 1.0 question appears to be a
decent battle among the Squid developers. (Or maybe all my reading is
based on early conflicts and this has been resolved in a later version
that I'm not running?!)

Thanks

Tory


[squid-users] What exactly is "Do not set REUSEADDR on port."

2008-01-14 Thread Tory M Blue
I'm running into more connection stacking, and while I solved my 3
second delay thanks to F5, I'm still seeing over 9,000 connections on
my web servers, all in TIME_WAIT and most of them from Squid.

As I continue to look through config options and kernel params, I noticed this:

Do not set REUSEADDR on port..

And I wondered, since I'm not using any persistence at all in my
environment, whether this is something I should be setting (-R).
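
In other words, something like starting squid as:

/usr/sbin/squid -R

assuming I'm reading the "squid -h" description of -R correctly.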

Thanks

Tory


Re: [squid-users] problem with snmp

2007-12-12 Thread Tory M Blue
On Dec 12, 2007 7:14 AM,  <[EMAIL PROTECTED]> wrote:
> Hello,

Hello :)
>
> snmpwalk -m /usr/share/squid/mib.txt 127.0.0.1:3405 -c public cacheHttpHits
> snmpwalk: Timeout (Sub-id not found: (top) -> cacheHttpHits)

> snmpwalk 127.0.0.1:3405 -c public -m /usr/share/squid/mib.txt
> snmpwalk: Timeout
>
What about adding the SNMP version there?

snmpwalk -m /usr/share/squid/mib.txt -v2c -c public localhost:3405
.1.3.6.1.4.1.3495.1.3.1

Does that work for ya?
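
Also worth double-checking that the squid side has SNMP enabled and
allowed; roughly something like the below in squid.conf (assumes the
stock "localhost" acl is defined and that 3405 matches your snmp_port):

snmp_port 3405
acl snmppublic snmp_community public
snmp_access allow snmppublic localhost
snmp_access deny all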

Tory


Re: [squid-users] Can one run cache_log thru an ACL?

2007-12-10 Thread Tory M Blue
On Dec 10, 2007 5:18 PM, Amos Jeffries <[EMAIL PROTECTED]> wrote:

> Well, this is a critical error for the data connection.
> A source server is pumping data into squid without proper HTTP header
> information to say what it is.
>
> The server is sending a Content-Length: header with the wrong length (too
> short). Squid notices more data than was told about and terminates
> connection to that source.
>
> It's a design feature added to protect against several very nasty bits of
> viral/trojan/worm infection out in the web and alert people to when it
> happens.
>
> If its your script/server causing those, needs fixing to only send the
> length header when length is pre-known.
> Otherwise you are under attack and have much bigger problems than squid.

Okay, well, the data is not static, so I do not believe the length is
known until the transaction completes (a search, for example: the site
can't provide any length information until the search's dynamic content
is generated). You cite "if pre-known"; what if it's not pre-known,
then what is one supposed to do in this scenario?

Thanks

Tory


[squid-users] Can one run cache_log thru an ACL?

2007-12-10 Thread Tory M Blue
I have some important information that I would like to log, like when
the origin servers (or other peers) disappear, or when squid times out
trying to connect to a peer, etc.

However, I also have a ton of information that my developers say can't
be removed (basically an http error), e.g. "Dec 10 16:34:33 cache01
squid[11509]: httpReadReply: Excess data from", caused by some
dynamically generated items.

So obviously I want to log critical system information (well okay,
what's critical to me is not the same for others), but I would love
to put in a rule that says something like !Excess data, so that my
logs are worth something.

Any ideas, is this even a legit request for new releases?

So in short, would love to be able to add an acl to my cache_log, so I
can decide what is important and what is not.

Thanks
Tory


Re: [squid-users] Peer timeout value - Reverse proxy

2007-10-22 Thread Tory M Blue
On 10/20/07, Adrian Chadd <[EMAIL PROTECTED]> wrote:
> On Sat, Oct 20, 2007, Tory M Blue wrote:
> > On this particular box
> >
> > squid-2.6.STABLE12-1.fc6
> >
> > I do have  squid-2.6.STABLE13-1.fc6,  installed on another test box
> > (have not tested this behavior)
>
> Try it out; I think Henrik fixed that bug since STABLE12.
>

Confirmed: STABLE13 seems to have resolved my issue. The peer is
declared dead, the cache serves content from the local cache, and when
the peer comes back it starts directing "refresh" hits to the peer
again.

Thanks!

Tory


Re: [squid-users] Peer timeout value - Reverse proxy

2007-10-20 Thread Tory M Blue
On this particular box

squid-2.6.STABLE12-1.fc6

I do have  squid-2.6.STABLE13-1.fc6,  installed on another test box
(have not tested this behavior)

Thanks for your thoughts on this

Tory

On 10/20/07, Adrian Chadd <[EMAIL PROTECTED]> wrote:
> This sounds like a fixed bug. Which version of Squid are you trying this
> with?
>
>
>
> Adrian
>
> On Fri, Oct 19, 2007, Tory M Blue wrote:
> > Sorry yet another question.
> >
> > I am using origin hosts or vhosts for my cache_peers (not talking to
> > other caches).
> >
> > What I've found, is that in my test environment,  if I take the origin
> > server or vhost down, Squid attempts to connect to it for x
> > seconds/tries and declares it dead.
> > The issue is since the peer knows nothing about ICP, Squid never
> > realizes it's back up and I have to run -k reconfigure or
> > reload/restart squid for it to once again start sending queries to the
> > peer (origin server).
> >
> > cache_peer 10.40.4.229 parent 80 0 no-digest no-query originserver
> >
> > 10.40.4.229 is a single web server..
> >
> > The only setting I see is Cache_peer timeout, but that's not the
> > answer, unless your trying to solve timeout issues related to a quick
> > reboot or restart of a service.
> >
> > I would like squid to know that my server (peer) is back up and it
> > should start once again sending requests for uncached data to it.
> >
> > Does that make sense and am I just missing something?
> >
> > Thanks
> > Tory
>
> --
> - Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support 
> -
> - $25/pm entry-level bandwidth-capped VPSes available in WA -
>


[squid-users] Peer timeout value - Reverse proxy

2007-10-19 Thread Tory M Blue
Sorry yet another question.

I am using origin hosts or vhosts for my cache_peers (not talking to
other caches).

What I've found is that in my test environment, if I take the origin
server or vhost down, Squid attempts to connect to it for x
seconds/tries and declares it dead.
The issue is that since the peer knows nothing about ICP, Squid never
realizes it's back up, and I have to run -k reconfigure or
reload/restart squid for it to once again start sending queries to the
peer (origin server).

cache_peer 10.40.4.229 parent 80 0 no-digest no-query originserver

10.40.4.229 is a single web server..

The only setting I see is the cache_peer timeout, but that's not the
answer, unless you're trying to solve timeout issues related to a quick
reboot or restart of a service.

I would like squid to know that my server (peer) is back up and it
should start once again sending requests for uncached data to it.

Does that make sense and am I just missing something?
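
In case it's relevant, 2.6's cache_peer also seems to have per-peer
health-check options; something along these lines is what I'd try,
though I haven't verified it for this case (URL and interval are made
up):

cache_peer 10.40.4.229 parent 80 0 no-digest no-query originserver monitorurl=http://10.40.4.229/ monitorinterval=30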

Thanks
Tory


Re: [squid-users] Squid on FC6, connections sitting around too long

2007-10-16 Thread Tory M Blue
On 10/15/07, Henrik Nordstrom <[EMAIL PROTECTED]> wrote:
>
> Probably you have a TCP connection based load balancer instead of one
> that balances on actual traffic, and the Netcaches have persistent
> connections disabled..

> See the client_persistent_connections and persistent_request_timeout
> directives.
>
> Regards
> Henrik


For some reason when I initially configured these I thought
persistence was off by default, but looking at the config guides, I
see it's defaulted to on.

Playing with persistence on/off and persistence timeout is helping
things tremendously.

Squid is showing more output/input than the netcaches now. Now I'm
working on finding the right combination to keep open connections to a
minimum while maintaining max throughput.

Client persistence in a reverse proxy environment makes no sense, and
since my server environment is also load balanced, I'm not sure it
makes much sense there either (still testing), but the persistence
timeout definitely plays a big role.
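
For anyone else tuning the same thing, the directives in play are
basically these (the values here are only examples, not
recommendations):

client_persistent_connections off
server_persistent_connections on
persistent_request_timeout 1 minute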

Thanks again
Tory


[squid-users] Squid on FC6, connections sitting around too long

2007-10-12 Thread Tory M Blue
Trying to figure out how I can reduce connections sitting around on
my Squid boxes.

I'm still running with both Netcaches and a few Squid boxes, and what
I'm seeing on my load balancer is that the Netcaches have 50% fewer
connections at any given time than the Squid boxes. The Netcache
(Netapp) is also pushing more traffic, so what I'm gathering is that
the Netapp is taking in connections and releasing them faster than the
squid boxes.

I'm wondering A) how I can diagnose this and B) whether there are
Squid-specific settings to handle this (I don't really want to close an
active connection, but I absolutely want a connection that has been
left by a client to go away).

Not sure if I should be looking at kernel/network tweaks or something
in my Squid configuration.

Packets   Bits     Packets   Bits      Current   Max    Total
18.5M     21.5G    23.6M     172.0G    737       1.3K   1.2M    Squid
21.2M     26.8G    28.8M     199.2G    274       1.3K   1.2M    NetCache
21.6M     27.1G    29.5M     206.4G    249       1.3K   1.2M    NetCache
18.6M     21.6G    23.7M     172.5G    735       1.2K   1.2M    Squid

This shows that even though the load balancer is handing connections
off in a round-robin fashion, the Squid boxes have twice the connection
count of the Netapps.

What pertinent information would you need to throw an idea over?

Thanks
Tory


Re: [squid-users] Weird 3 second delay between Squid and F5 LB (reverse proxy) (RESOLUTION)

2007-08-17 Thread Tory M Blue
Top posting.

The issue was with the Load Balancer closing the connection and the
client thinking that the port was still open.. F5 did an amazing job
with deciphering my dumps and following the packets. They stepped up
and said "it does in fact look like our system and not Squid"

Looking through cache01-new, every instance of a 3 second delay that I
find, I see where 10.13.200.42 sends the SYN, which is sent through
the BIG-IP to 10.13.200.153.  In each instance, I find that
10.13.200.153, rather than replying to the SYN with a SYN-ACK, simply
sends an ACK.  This, being incorrect, gets a RST response from
10.13.200.42.  After all, 10.13.200.42 was expecting SYN-ACK, not ACK.
 After a three second delay, 10.13.200.42 initiates the handshake
again, and this time 10.13.200.153 sends the SYN-ACK response, which
allows the handshake to carry on.  I had initially thought that this
was the fault of 10.13.200.42, but looking over the w04-new tcpdump,
and matching up the delay packets, it's very clear that there is no
SYN-ACK, resulting in this 3 second delay.

As to why 10.13.200.153 would respond to a SYN with just an ACK, I
believe that this may be due to a port reuse issue.  The server
believes that port to still be in use, while the BIG-IP believes it to
be closed.  To get around this, I would suggest the use of
incrementing autoport, where the BIG-IP does not try to use the same
ephemeral port that the client uses, but rather makes use of some
other ephemeral port.  "

Just wanted to close this out, I didn't think it was the Squid box but
had to be sure. So I'm good to go!



On 7/27/07, Adrian Chadd <[EMAIL PROTECTED]> wrote:
> Have you looked at it through tcpdump?
> Those sorts of delays could be simple stuff like forward/reverse
> DNS..
>
>
> Adrian
>
> On Fri, Jul 27, 2007, Tory M Blue wrote:
> > I'm not sure what is going on and have done so much tracing that I've
> > just probably confused things more then anything else.
> >
> > i'm running Squid Cache: Version 2.6.STABLE12, on Fedora Core 6.
> >
> > It's configured to point to a single parent (which is a Virtual IP on
> > the LB) with multiple servers sitting behind that virtual (so yes a
> > pool).
> >
> > If i run wget's, socket scrpts etc, against squid pointing to a single
> > host, or have all 4 hosts listed as parents, there is no issue.
> >
> > If I have squid pointing at the above mentioned VIP I see a 3 second
> > delay every x connections, can be the 9th, 30th or 100th connection
> > (again the 3 second delays are very random, but very bothersome).
> >
> > Another point of interest, running thru a Netapp Cache there are zero 
> > delays..
> >
> >
> > Socket level test:
> >
> > Fri Jul 27 08:56:53 2007: Iteration #28
> > Fri Jul 27 08:56:53 2007: Connecting to localhost:80...
> > Fri Jul 27 08:56:53 2007: Connected.
> > Fri Jul 27 08:56:53 2007: Sending request...
> > Fri Jul 27 08:56:53 2007: Sent.
> > Fri Jul 27 08:56:53 2007: Receiving response...  <--- 3 second delay..
> > Fri Jul 27 08:56:56 2007: Received complete response.
> > Fri Jul 27 08:56:56 2007: Closing socket.
> > Fri Jul 27 08:56:56 2007: Socket closed.
> >
> > Squid http debug: 3 second delay at end
> > 
> > 2007/07/26 15:36:35| getMaxAge:
> > 'http://host/abc/directorytest/c/i-1.JPG?rand=859749'
> > 2007/07/26 15:36:35| ctx: enter level  0:
> > 'http://host/abc/directorytest/c/i-1.JPG?rand=859749'
> > 2007/07/26 15:36:35| refreshCheck:
> > 'http:/host/abc/directorytest/c/i-1.JPG?rand=859749'
> > 2007/07/26 15:36:35| STALE: expires 1185489395 < check_time 1185489455
> > 2007/07/26 15:36:35| Staleness = 60
> > 2007/07/26 15:36:35| refreshCheck: Matched ' 0 20% 259200'
> > 2007/07/26 15:36:35| refreshCheck: age = 60
> > 2007/07/26 15:36:35|check_time: Thu, 26 Jul 2007 22:37:35 GMT
> > 2007/07/26 15:36:35|entry->timestamp:   Thu, 26 Jul 2007 22:36:35 
> > GMT
> > 1185489395.994 SWAPOUT 00 00081DFD 741C0A705149FFD54F8CE6B6B4486D77
> > 200 1185489396 1185383126 1185489396 image/jpeg 42492/42492 GET
> > http://host/abc/directorytest/c/i-1.JPG?
> > 2007/07/26 15:36:38| ctx: exit level  0   <--- shows the 3 second delay
> >
> >
> > More Squid debug (different times) 3 second delay at end
> > -
> > 2007/07/26 15:06:54| fwdStateFree: 0x85b9388
> > 2007/07/26 15:06:54| fwdStart:
> > 'http://host/abc/directorytest/c/i-1.JPG?rand=279660'
> > 2007/07/26 15:06:54| fwdStartComplete:
> > h

Re: [squid-users] Weird 3 second delay between Squid and F5 LB (reverse proxy)

2007-07-27 Thread Tory M Blue
On 7/27/07, Adrian Chadd <[EMAIL PROTECTED]> wrote:
> On Fri, Jul 27, 2007, Tory M Blue wrote:
>
> > Adiran, I have used straight IP instead of the VIP name with no change
>
> Whats debugging on the F5 say?
>
> (I've not got an F5 so I can't do any testing at my end..)
>

The tcpdumps are showing a reset from the LB, so this may not be
internal to squid, although it felt like it. I think Squid is getting
a reset and then taking some time to create another connection and
start all over again. I can post a snippet of a sniff, but it would
only do any good for those that understand what squid is doing at the
packet layer.

Tory


Re: [squid-users] Weird 3 second delay between Squid and F5 LB (reverse proxy)

2007-07-27 Thread Tory M Blue
On 7/27/07, Adrian Chadd <[EMAIL PROTECTED]> wrote:
> Have you looked at it through tcpdump?
> Those sorts of delays could be simple stuff like forward/reverse
> DNS..
>
>
> Adrian

Adrian, I have used a straight IP instead of the VIP name with no
change in symptoms, so it's not DNS.

I've done tcpdumps and really have a hard time seeing anything that is
questionable. I am still looking at dumps to see if there is more to
the story, but again: to individual servers, no issue; to the F5 LB, a
frequent 3 second (exact) delay.
Thanks
Tory


[squid-users] Weird 3 second delay between Squid and F5 LB (reverse proxy)

2007-07-27 Thread Tory M Blue
I'm not sure what is going on, and I have done so much tracing that
I've probably just confused things more than anything else.

I'm running Squid Cache: Version 2.6.STABLE12, on Fedora Core 6.

It's configured to point to a single parent (which is a Virtual IP on
the LB) with multiple servers sitting behind that virtual (so yes a
pool).

If I run wgets, socket scripts etc. against squid pointing to a single
host, or have all 4 hosts listed as parents, there is no issue.

If I have squid pointing at the above mentioned VIP I see a 3 second
delay every x connections, can be the 9th, 30th or 100th connection
(again the 3 second delays are very random, but very bothersome).

Another point of interest: running through a Netapp cache there are zero delays.


Socket level test:

Fri Jul 27 08:56:53 2007: Iteration #28
Fri Jul 27 08:56:53 2007: Connecting to localhost:80...
Fri Jul 27 08:56:53 2007: Connected.
Fri Jul 27 08:56:53 2007: Sending request...
Fri Jul 27 08:56:53 2007: Sent.
Fri Jul 27 08:56:53 2007: Receiving response...  <--- 3 second delay..
Fri Jul 27 08:56:56 2007: Received complete response.
Fri Jul 27 08:56:56 2007: Closing socket.
Fri Jul 27 08:56:56 2007: Socket closed.

Squid http debug: 3 second delay at end

2007/07/26 15:36:35| getMaxAge:
'http://host/abc/directorytest/c/i-1.JPG?rand=859749'
2007/07/26 15:36:35| ctx: enter level  0:
'http://host/abc/directorytest/c/i-1.JPG?rand=859749'
2007/07/26 15:36:35| refreshCheck:
'http:/host/abc/directorytest/c/i-1.JPG?rand=859749'
2007/07/26 15:36:35| STALE: expires 1185489395 < check_time 1185489455
2007/07/26 15:36:35| Staleness = 60
2007/07/26 15:36:35| refreshCheck: Matched ' 0 20% 259200'
2007/07/26 15:36:35| refreshCheck: age = 60
2007/07/26 15:36:35|check_time: Thu, 26 Jul 2007 22:37:35 GMT
2007/07/26 15:36:35|entry->timestamp:   Thu, 26 Jul 2007 22:36:35 GMT
1185489395.994 SWAPOUT 00 00081DFD 741C0A705149FFD54F8CE6B6B4486D77
200 1185489396 1185383126 1185489396 image/jpeg 42492/42492 GET
http://host/abc/directorytest/c/i-1.JPG?
2007/07/26 15:36:38| ctx: exit level  0   <--- shows the 3 second delay


More Squid debug (different times) 3 second delay at end
-
2007/07/26 15:06:54| fwdStateFree: 0x85b9388
2007/07/26 15:06:54| fwdStart:
'http://host/abc/directorytest/c/i-1.JPG?rand=279660'
2007/07/26 15:06:54| fwdStartComplete:
http://host/abc/directorytest/c/i-1.JPG?rand=279660
2007/07/26 15:06:54| fwdConnectStart:
http://host/abc/directorytest/c/i-1.JPG?rand=279660
2007/07/26 15:06:54| fwdConnectStart: got addr 0.0.0.0, tos 0
2007/07/26 15:07:03| fwdConnectDone: FD 17:
'http://host/abc/directorytest/c/i-1.JPG?rand=279660'
2007/07/26 15:07:03| fwdDispatch: FD 16: Fetching 'GET
http://host/abc/directorytest/c/i-1.JPG?rand=279660'
2007/07/26 15:07:03| fwdComplete:
http://hostabc/directorytest/c/i-1.JPG?rand=279660
status 200
2007/07/26 15:07:03| fwdReforward:
http://hostabc/directorytest/c/i-1.JPG?rand=279660?
2007/07/26 15:07:03| fwdReforward: No, ENTRY_FWD_HDR_WAIT isn't set
2007/07/26 15:07:03| fwdComplete: not re-forwarding status 200
1185487623.236 SWAPOUT 00 0008077B 8AB49FB3B897FB721E06A8ED91EE1AF  200
1185487623 1185383126 1185487623 image/jpeg 42492/42492 GET
http://host/abc/directorytest/c/i-1.JPG?
2007/07/26 15:07:03| fwdServerClosed: FD 17
http://host/abc/directorytest/c/i-1.JPG?rand=279660


[squid-users] More Accel fun.. The more I look the more I find that I'm not configured right

2007-05-21 Thread Tory M Blue

I have working squid 3.0 boxes; well, I think they are working and
they feel like they are working, but as I dive further and further into
my configs and the user guides, I find that I have some gum holding
things together.

So my second post...

I currently have a squid config with 3 http_port accel vhost directives

http_port 80 accel vhost
http_port 199  accel vhost
http_port 360  accel vhost

Obviously the server binds to these 3 ports. Now, I have some actual
virtual hosts in the backend behind a LB, so I need to send different
queries to different virtual hosts.

this is what I have now:

cache_peer 10.40.4.229 parent 80 0 no-query originserver
cache_peer 10.40.4.230 parent 80 0 no-query originserver

QUESTION

I do not see a way to send the queries that come in on port 360 to a
different "originserver" than those that come in on port 199. Is there
a way? Do I have to create some ACLs or other rules that direct
traffic based on the incoming port?
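
Roughly what I'm imagining, if ACLs can in fact key off the incoming
port (syntax not verified, peer names made up):

http_port 80 accel vhost
http_port 199 accel vhost
http_port 360 accel vhost

cache_peer 10.40.4.229 parent 80 0 no-query originserver name=origin_199
cache_peer 10.40.4.230 parent 80 0 no-query originserver name=origin_360

acl on_199 myport 199
acl on_360 myport 360

cache_peer_access origin_199 allow on_199
cache_peer_access origin_199 deny all
cache_peer_access origin_360 allow on_360
cache_peer_access origin_360 deny all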

Thanks

Tory


[squid-users] ACL assistance -URI and URL

2007-05-18 Thread Tory M Blue

Good morning, afternoon and or evening.

I am either not searching correctly or, nahhh, I've failed to locate
something that must be out there I'm sure.

Squid acl's.

I would like to match on both the domain and the URL; I would like to
apply no-cache rules to a domain, but only for requests matching a
specific URL pattern.

example

acl domain1 dstdomain .example.com
acl domain1l url_regex ^.+\.js$
no_cache deny domain1

Obviously, as I've learned, you can't mix types within a single acl,
so how would one go about creating such a rule, so that it matches the
domain and then the URL?

If the domain matches .example.com, then check the URL; if the URL
matches the regex, then no-cache.
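
In other words, something like the below is what I'm after, assuming
squid ANDs multiple ACLs listed on the same access line (that is my
reading of the docs, not something I've confirmed):

acl exdomain dstdomain .example.com
acl jsfiles url_regex \.js$
no_cache deny exdomain jsfiles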

Thanks

Tory