My log rotation, done by logrotate, doesn't work anymore...
/var/log/squid/*.log {
weekly
rotate 52
size 100M
compress
notifempty
missingok
sharedscripts
postrotate
# Asks squid to reopen its logs. (logfile_rotate 0 is set in squid.conf)
# errors redirected to /dev/null; the binary path below is a guess
/usr/sbin/squid -k rotate 2>/dev/null
endscript
}
Version: 3.1.9
We're getting (fairly consistently) hanging transfers on a particular
resource. We're using squid as an accelerator, and it is all internal
traffic within our corporate intranet.
I found this, http://wiki.squid-cache.org/KnowledgeBase/BrokenWindowSize,
but I'm skeptical becau
On Mon, Dec 13, 2010 at 2:10 PM, Michael Leong wrote:
> Hi,
> we're currently running the squid store off a NetApp NFS. According to
> df, our cache store is using 140GB. However, when I run the disk usage
> query via
You have the opposite problem I had. Squid was underreporting, not overreporting.
I'm not convinced I have peering configured correctly. Here is my environment:
These are internal specialized squid servers for serving internal web
sites/deliverables. The main squid server at corporate is intended to
accelerate a few sites. At corporate, we have 4 squid servers fronted
by hap
On Mon, Oct 4, 2010 at 2:56 AM, Matus UHLAR - fantomas wrote:
> On 29.09.10 17:42, Rich Rauenzahn wrote:
>> This code strikes me as incorrect... Basically for files > 2GB, squid
>> does the accounting wrong!
>
> It's apparently just a filesystem overhead, which var
This code strikes me as incorrect... Basically for files > 2GB, squid
does the accounting wrong!
Note that sizeof(int) is 4 in both 32-bit and 64-bit compilation models.
I believe that blks * fs.blksize overflows a 32-bit int before it is
right-shifted by 10 bits (the bytes-to-KB conversion).
void
SwapDir::updateSize(int64_t size, i
>
> This is interesting:
>
> http://bugs.squid-cache.org/show_bug.cgi?id=2313
>
> We're using XFS per our group's standard. I wonder if it exhibits a
> similar problem?
Nope, that doesn't seem to be the problem. I reformatted with ext4 and
wiped the cache; it is now 15GB and squid's statistics say it
On Mon, Sep 27, 2010 at 4:03 PM, Rich Rauenzahn wrote:
> Hi,
>
> Our squid servers consistently go over their configured disk
> limits. I've rm'ed the cache directories and started over several
> times... yet they slowly grow to over their set limit and fill up the filesystem.
[resending -- left off list addr last time]
>> These are clearly over the 300,000K limit -- and the swap stat files
>> are less than 1MB.
>
> Um, you mean the 300 GB limit configured. 307,200,000 KB to be precise.
Yes, right.
> Which indicates that something other than Squid data cache is going
[resending, I accidentally left off the list addr]
> If you cache very large files, you may need to change
> cache_swap_low 88
> cache_swap_high 89
> to force the cleanup process to be more aggressive with
> removing the oldest cached files
>
> Marcus
I don't see how increasing those values (exc
Hi,
Our squid servers consistently go over their configured disk
limits. I've rm'ed the cache directories and started over several
times... yet they slowly grow to over their set limit and fill up the
filesystem.
These are du -sk's of the squid directories:
squid2: 363520856 /squid/
squi