>
> Okay, I tried that.  Here are the results:

[snip]

Alright, I've figured it out.  It's somewhat counter-intuitive (at least for
me): the -l (limit) doesn't take the size of the directories themselves into
account -- it only counts the bytes in the .header and .data files.

Example:

 # /opt/apache2/bin/htcacheclean -v -t -p/www/cache -l300M
Statistics:
size limit 300.0M
total size was 399.9M, total size now 299.9M
total entries was 27344, total entries now 20483

# df -h /www/cache/
Filesystem            Size  Used Avail Use% Mounted on
tmpfs                 1.2G  415M  786M  35% /www/cache

But if I just add up the .header and .data files:
# count=0; for i in `find /www/cache -type f -ls | awk '{print $7}'`; do let "count=$i + $count"; done; echo $count

314566786

Convert bytes into megabytes:

# echo "scale=1;314566786/1048576" | bc
299.9
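For what it's worth, the loop and the bc step can be collapsed into a single
awk pass (a sketch assuming GNU find's -printf; the temp directory with two
files of known size just stands in for /www/cache):

```shell
# Sum the sizes of all regular files and print the total in megabytes.
# The temp dir stands in for /www/cache; file sizes are chosen so the
# expected total is exactly 1.5 MiB.
dir=$(mktemp -d)
head -c 1048576 /dev/zero > "$dir/one.data"    # 1 MiB
head -c 524288  /dev/zero > "$dir/two.header"  # 0.5 MiB
find "$dir" -type f -printf '%s\n' \
  | awk '{ s += $1 } END { printf "%.1f\n", s / 1048576 }'
rm -rf "$dir"
```

On systems without GNU find, `find ... -type f -ls | awk '{s += $7} ...'`
does the same job using the size column, as in the loop above.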

So I just need to keep tuning my -l (limit) and -d (daemon interval) so the
cache can't grow enough between cleaning passes to fill the filesystem.
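One way to pick the limit is to leave fixed headroom below the tmpfs size for
directory overhead and for growth between daemon passes. A rough sketch (the
1.2G size comes from the df output above; the 70% figure is just an assumed
safety margin, not anything htcacheclean computes):

```shell
# Suggest an -l value as ~70% of the tmpfs size, leaving the rest
# as headroom for directory overhead and growth between passes.
fs_kb=$((1200 * 1024))            # 1.2G tmpfs, in kilobytes (from df)
limit_kb=$((fs_kb * 70 / 100))    # assumed 70% safety margin
echo "$((limit_kb / 1024))M"      # prints the suggested -l value: 840M
```

The right percentage depends on how fast the cache grows relative to the -d
interval, so it's worth watching df for a while after changing either knob.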

Thanks for the help Morgan and Dan.

Matt
