On 01.11.2011 at 12:03 t...@uncon.org wrote:
> 
> Quoting Peter Palmreuther <li...@zentrumderarbeit.org>:
>> I do exactly this (albeit with a self-written script, because when I
>> started to clean up my graylisting directory I didn't know about
>> 'qtp-prune-graylist').
>> Empty files older than 24 hours, "too old" files (->
>> graylist-max-secs) and subsequently empty directories are removed.
>> In my case once a day, when there's lower load on the server. The
>> (for me also separate) graylist filesystem offers enough space and
>> inodes to cover its uncleaned usage for several days, so there's no
>> significant profit to gain from cleaning up more than once a day.
> 
> Just my opinion, but I think you're not getting the most out of
> graylisting if you are pruning the records so aggressively: the
> remote server has to re-send more often than necessary, users are
> annoyed by the delays in email, and there are still broken servers
> out there.

I'd agree, if only I could see the aggressiveness ...

I have a 'graylist-max-secs' value equivalent to one week. So somebody sending
another mail within one week resets the timeout, doesn't it?
For anybody else ... well, there has to be some limit. I haven't had any
problems with one week yet, but I would increase it if I saw any arising.
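
One week is 7 * 24 * 3600 = 604800 seconds, so in the config file that's
simply (assuming spamdyke's usual key=value option syntax):

    graylist-max-secs=604800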

Nor do I see deleting empty files older than 24 hours as an aggressive act.
If a mail server hasn't retried within 24 hours, how long am I supposed to
wait in order not to be "aggressive"?

Deleting empty folders I see only as a final cleanup; I don't see a point in
keeping them. But I'm open to arguments on this ...
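
For illustration, my daily run boils down to roughly the following (a
minimal Python sketch, not my actual script; the graylist path and the
one-file-per-sender layout are assumptions):

    #!/usr/bin/env python
    # Daily graylist cleanup: drop empty files older than 24 hours, drop
    # any file older than graylist-max-secs, then remove emptied dirs.
    import os
    import time

    GRAYLIST_DIR = "/var/spamdyke/graylist"  # hypothetical path
    EMPTY_MAX_AGE = 24 * 60 * 60             # empty files: 24 hours
    MAX_AGE = 7 * 24 * 60 * 60               # = graylist-max-secs (604800)

    now = time.time()

    # Walk bottom-up so directories emptied by the file pass can be
    # removed on the way out.
    for root, dirs, files in os.walk(GRAYLIST_DIR, topdown=False):
        for name in files:
            path = os.path.join(root, name)
            st = os.stat(path)
            age = now - st.st_mtime
            if (st.st_size == 0 and age > EMPTY_MAX_AGE) or age > MAX_AGE:
                os.remove(path)
        # Keep the top-level directory itself.
        if root != GRAYLIST_DIR and not os.listdir(root):
            os.rmdir(root)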

> I'd be interested to see the numbers on your message count and  
> graylist filesystem usage.

As it's a rather small server, I don't have any significant statistical
data ... sorry.

> Anyway, here's what I do:
> 
> o I use a loopback filesystem (XFS) to hold the graylist data. This
> has two advantages: I can easily dump the entire dataset by
> re-formatting the virtual filesystem, and XFS allocates inodes
> dynamically, so it won't easily run out.

So do I. I just didn't do the cleanup from the very beginning and ran out
of inodes once.
That's why I search for "cleanable" data every day ... which, as explained
above, does not imply I'm deleting everything older than one day ;-)

> o I keep 3 weeks worth of graylist history.

I'm not that interested in graylist history yet; that's why I don't keep the
data that long or analyze it systematically.

> In order to manage that,  
> I've patched the code to change the layout on disk, into per-week  
> directory structures (by week number), like this:
> /graylist-dir/domain/week-no/recip/domain/sender

Well ... This patch might in fact be interesting, because:

> This means that at the end of the week, I can simply delete the  
> directory containing the oldest data, without having to perform any  
> kind of filesystem 'find' operation looking for data that is too old,  
> which is very expensive.

Right. And for exactly this reason I do the daily run: it reduces the number
of existing file system entries more steadily than "once a week" does, and
therefore makes the 'find' a little less expensive.
With weekly folders I'd most probably switch to deleting the whole (too old)
week as well.
As 'find' would not be necessary in that situation, the total number of file
system accesses should be about the same whether I delete day-by-day or just
remove the whole weekly parent directory.
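
To make that concrete, pruning with your layout would come down to
something like this (a sketch only; I'm assuming 'week-no' is the ISO week
number, that three weeks are kept, and the graylist path is a placeholder):

    #!/usr/bin/env python
    # Prune a per-week layout like
    # /graylist-dir/domain/week-no/recip/domain/sender
    # by removing every week directory outside the retention window.
    import datetime
    import os
    import shutil

    GRAYLIST_DIR = "/var/spamdyke/graylist"  # hypothetical path
    KEEP_WEEKS = 3

    today = datetime.date.today()
    # ISO week numbers of the weeks to keep; the date arithmetic
    # handles the year wrap-around.
    keep = {(today - datetime.timedelta(weeks=i)).isocalendar()[1]
            for i in range(KEEP_WEEKS)}

    for domain in os.listdir(GRAYLIST_DIR):
        domain_dir = os.path.join(GRAYLIST_DIR, domain)
        if not os.path.isdir(domain_dir):
            continue
        for week in os.listdir(domain_dir):
            if week.isdigit() and int(week) not in keep:
                # One rmtree replaces the whole per-file 'find' run.
                shutil.rmtree(os.path.join(domain_dir, week))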

So ... thanks for the dialogue. New perspectives, different opinions and some
input keep the topic lively.
I'm interested in the patch (maybe it'll even make its way upstream?) ...
-- 
Best regards,

Peter
_______________________________________________
spamdyke-users mailing list
spamdyke-users@spamdyke.org
http://www.spamdyke.org/mailman/listinfo/spamdyke-users
