On 12/2/2012 1:20 PM, Alex wrote:

> Thanks for the explanation. Trying to do too many things at once. You
> probably think I'm an idiot by now.

You're welcome.  I understand that completely.  No, not at all.

>> Dropping SMTP packets should be done with care.  If you FP on an email
>> to the CEO and he comes asking, giving you a sender address, you have no
> 
> I meant as it relates to blocking by spamhaus or barracuda. From an FP
> perspective, that doesn't affect it either way. Perhaps the audit
> trail is a little better with just letting postscreen continue to
> block it rather than fail2ban, however.

Yeah, with fail2ban you don't really have an audit trail at all.  Keep
in mind that legit senders can wind up on trap-driven lists.  Both Zen
and BRBL expire listings when the IPs are no longer emitting for X
period of time.  So if you feed DNSBL-rejected IPs to fail2ban you
should set some kind of expiry period.
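
Something like this in jail.local is the general idea (just a sketch;
the jail and filter names here are made up, and you'd need a filter
regex matching your DNSBL reject log lines):

  [postfix-dnsbl]
  # hypothetical jail -- adjust filter/logpath to your setup
  enabled  = true
  filter   = postfix-dnsbl
  port     = smtp
  logpath  = /var/log/maillog
  maxretry = 1
  findtime = 3600
  # expire the ban after a week so delisted IPs can reach you again
  bantime  = 604800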

> Yes, I'm only using it on IPs that have already been confirmed to be
> blacklisted, of course.

See above.

> I'd say I had a few hundred. Dumped some of the old ones, and had them
> there instead of SA for just that reason.

This is a double-edged sword:  if you hit with header_checks you save
SA CPU time.  If you miss you've added extra CPU time on top of SA.
But given you have a 4 core machine, IIRC, CPU shouldn't be much of a
concern, even with the relatively high msg rate you have.
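
For reference, the usual pcre setup is along these lines (the patterns
are placeholders, not recommendations):

  # main.cf
  header_checks = pcre:/etc/postfix/header_checks

  # /etc/postfix/header_checks
  # every miss still costs a regex pass over each header line
  /^Subject:.*\bexample-spam-token\b/   REJECT subject spam token
  /^X-Mailer:.*ExampleBulkMailer/       REJECT known bulk mailer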

> Assuming the rhsbl fails in postfix, then SA processes it, doesn't
> that mean two queries for the same rhsbl entry?

When I say "query" I'm talking about those that come with cost, i.e.
external to your network, with latency.  The 2nd DNS query in this
case, from SA, is answered by your local caching resolver, thus at no
cost.
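
You can see it with dig against the local resolver -- the second
lookup never leaves the box (timings below are invented for
illustration):

  $ dig +noall +stats 2.0.0.127.zen.spamhaus.org @127.0.0.1 | grep 'Query time'
  ;; Query time: 38 msec    <- first lookup goes out to the network
  $ dig +noall +stats 2.0.0.127.zen.spamhaus.org @127.0.0.1 | grep 'Query time'
  ;; Query time: 0 msec     <- answered from the resolver's cache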

> I've read it again, but my confusion was with reading about
> reject_rhsbl statements, and forgetting they're domain, not IP based.

Yes, postscreen has but a fraction of the rejection parameters/types
available in SMTPD.  All of the domain-based rejections are SMTPD only.
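
I.e. the split in main.cf looks roughly like this (trimmed example;
the lists shown are just placeholders for whatever you actually use):

  # postscreen: IP based DNSBL scoring only
  postscreen_dnsbl_sites = zen.spamhaus.org*2, b.barracudacentral.org*1
  postscreen_dnsbl_threshold = 2

  # smtpd: the domain/name based rejections live here
  smtpd_recipient_restrictions =
      permit_mynetworks,
      reject_unauth_destination,
      reject_rhsbl_sender dbl.spamhaus.org,
      reject_rhsbl_helo dbl.spamhaus.org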

> Unfortunately it's not that easy. You would think it would be that
> easy, but somehow I knew there would be complications. I investigated
> this, and it's just a tray to make it fit in a 3.5" bay, such as what
> you'd find in a desktop PC.
> 
> Looking at the picture, it just didn't seem right. I actually called
> Intel, and I explained to them I had a 1U chassis and needed to be
> able to put this 2.5" disk in the tray where normally a 3.5" disk is
> used. He told me what you thought -- that the tray would work for
> that, even though I knew there was no way it would.
> 
> I ordered two of the 60GB 520 series disks instead of the ones you
> mentioned -- better warranty and faster. They arrived on Friday, and
> sure enough, it's just a metal frame to put it in a desktop, not a 1U
> chassis.
> 
> So, considering they generate no heat and take up no space, I'm
> thinking of using velcro inside the case. We'll see how that goes.

Don't do that.  Send me the make/model# and/or a picture or link to
the manufacturer product page.  Is this a tier one chassis?  I.e. HP,
Dell, IBM, etc?  Once I see the drive cage arrangement I can point you
to exactly what you need.  However, if the chassis has hot swap 3.5"
SATA bays then the adapter should allow you to mount the SSD in the
carrier with perfect interface mating to the backplane.

> I would never use the onboard SATA controller. That's crap.

Not all of them are crap.  Many yes, but not all.  This depends a bit on
what capabilities you need.  If you don't need expander or PMP support
the list of decent ones is larger.  If your board has an integrated
LSISAS 1064/68/78 or 2008/2108 then you're golden for mdraid.
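
A quick way to see what's actually on the board (generic lspci sketch,
nothing more):

  $ lspci -nn | grep -iE 'sas|sata|raid'
  # an entry like "LSI Logic / Symbios Logic SAS2008" is the
  # integrated HBA you'd want for mdraid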

> I have great faith in Neil Brown and his md code :-) I've lost a few
> software arrays over the years, but have always found them reliable
> and better supported in Linux. I've also used the battery-backed
> hardware RAID in the past, which is nice too.

There's nothing inherently wrong with mdraid.  As long as you know its
limitations and work around them, or don't have a configuration or
workload that will bump into said limitations, you should be fine.

> Yes, there's no debating an SSD would be preferred in all situations.
> When this was built, we used four SATA3 disks with 64MB cache and
> RAID5 because the system was so fast already that the extra expense
> wasn't necessary.

And this is one of the disadvantages of mdraid in the absence of a
BBWC controller.  For filesystem and data safety, drive caches should
never be enabled, which murders mdraid write performance, especially
RAID5/6.  If your UPS burps, or with some kernel panic/crash
scenarios, you lose the contents of the write cache in the drives,
possibly resulting in filesystem corruption and lost data.  Mounting a
filesystem with write barriers enabled helps a bit.  If you use a BBWC
controller you only lose what's in flight in the Linux buffer cache.
In this case, with a journaling FS, the filesystem won't be corrupted.
With mdraid, a vanilla controller, and drive caches enabled, you'll
almost certainly get some FS corruption.
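
If you do stay on a vanilla controller, the safe-but-slow combination
is roughly this (sketch only; device names and mount point are
examples):

  # turn off the on-drive write cache, per member disk
  hdparm -W0 /dev/sda

  # and keep write barriers enabled at mount time; on ext4:
  mount -o barrier=1 /dev/md0 /srv/mail
  # (XFS defaults to barriers on -- just avoid the nobarrier option)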

> I also wasn't as familiar and hadn't as extensively tested the RAID10
> config, but will sure use it for the mailstore system I'm building
> next.

With SSDs, if you use anything other than 2-drive mirroring (RAID1)
you're wasting money and needlessly increasing overhead, unless you
need more capacity than a mirror pair provides.  In that case you'd
concatenate mirror pairs, as this method is infinitely expandable,
whereas mdraid 0/10 cannot be expanded, and concat gives better random
IO performance than striping for small file (mail) workloads.  Note
these comments are SSD specific and filesystem agnostic.
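
The layout is just mirror pairs glued together with an mdraid linear
array, e.g. (device names are placeholders):

  # two SSD mirror pairs
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda /dev/sdb
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc /dev/sdd

  # concatenate them; add another pair later with --grow --add
  mdadm --create /dev/md0 --level=linear --raid-devices=2 /dev/md1 /dev/md2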

The concat of mirrors only yields high performance with rust (spinning
drives) if you're using XFS, and have precisely calculated the
allocation group size and number of allocation groups so that each is
wholly contained on a single mirror pair.  This XFS concat setup is a
bit of a black art.  If you want to know more about it I can instruct
you off list.
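
The short version: pick agcount so a whole number of equal sized AGs
lands on each pair, e.g. with two 1TB mirror pairs concatenated
(numbers invented, don't copy them blindly):

  # 4 AGs per 1TB pair, 2 pairs = 8 AGs total
  mkfs.xfs -d agcount=8 /dev/md0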

Now if you're using rust, as in a mail store where SSD is cost
prohibitive given capacity needs, then a properly configured RAID10 is
generally the best option for most people (and with any FS other than
XFS).  It gives the best random write performance, good random read
performance, and far lower rebuild time than parity arrays.  Most people
don't appreciate this last point until they have to rebuild, say, an 8x
7.2K 1TB drive RAID6 array on a busy production mail store server and it
takes half a day or longer, increasing latency for all users.  With a
concat of mirrors and XFS, when rebuilding a failed drive, only those
users whose mailboxes reside on that mirror pair will have increased
latency, because this setup automatically spreads all mailboxes
relatively evenly across all mirrors.  Think of it as file level
striping across disks, or more correctly, directory level striping.
Such a setup is perfect for maildir mailboxes or Dovecot m/dbox.
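
For reference, the plain RAID10 mentioned above is just the stock near
layout in mdadm, e.g. (device names and chunk size are only examples):

  mdadm --create /dev/md0 --level=10 --layout=n2 --chunk=64 \
        --raid-devices=8 /dev/sd[b-i]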

-- 
Stan
