On Fri, Sep 22, 2000 at 06:48:51AM -0400, Michael Cunningham wrote:
> I have a qmail server that is running on a Sun Netra T1
> (Solaris 2.6). It's receiving about 300-500k emails per day.
> 
> Unfortunately it appears to be dying a VERY quick death.
> The IO loads on the disk are huge and I need to up performance
> quite a bit. The CPU and memory are fine but disk IO is killing
> me. I was thinking about a couple of possible solutions and I wanted
> your input (since you are qmail experts - at least compared to me:) 
> 
> 1. add a disk/filesystem for each queue subdirectory to reduce 
>    io load
> <snip>
> 2. create a RAID 1+0 of at least 5 drives per stripe, and place 
>    the entire queue directory structure on this RAID filesystem.
>    If possible I will use VxFS instead of UFS for the filesystem
>    and an A1000 to hold the drives (hardware RAID). 

1) Make sure you're using cyclog instead of splogger.  splogger
   uses syslog, which can definitely slow down your system.

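   If you want a feel for how much the syslog path costs on your own
   box, a quick microbenchmark along these lines shows the gap.  This
   is just a hypothetical Python sketch (filenames and counts are made
   up), not anything shipped with qmail.  It times a batch of log
   lines sent through syslog(3) against the same lines appended to a
   flat file, which is roughly what cyclog does instead:

      # sketch.py: compare syslog against plain file appends (illustrative only)
      import syslog
      import time

      N = 10000
      MSG = "delivery 42: success: test log line"

      # N messages through syslog(3); each one goes through syslogd
      syslog.openlog("bench")
      t0 = time.time()
      for i in range(N):
          syslog.syslog(syslog.LOG_INFO, MSG)
      syslog.closelog()
      t_syslog = time.time() - t0

      # N messages appended straight to a log file, in-process
      t0 = time.time()
      f = open("bench.log", "a")
      for i in range(N):
          f.write(MSG + "\n")
      f.close()
      t_file = time.time() - t0

      print("syslog: %.2fs   flat file: %.2fs" % (t_syslog, t_file))
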
2) Putting the queue on a RAID 1+0 is a good thing.  What you
   seem to be describing, two RAID 0's mirrored, isn't RAID 1+0,
   it's 0+1, and isn't as fast or as safe.  What you want is to
   mirror pairs of disk drives, then create a stripe across
   the mirrored pairs.  The best way to do this is with hardware
   controllers, not a software solution like Veritas (I mention
   this because you mention VxFS).  A great thing about hardware
   controllers is that they generally come with battery-backed
   write-back cache, which will absorb all of qmail's fsyncs.

   With just a few mirrored pairs, you'll be writing information
   to disk in chunks much smaller than the individual drives'
   write caches.

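   To make the 1+0 vs. 0+1 distinction concrete, here's a small
   hypothetical Python sketch (six imaginary drives, no particular
   controller or vendor) that just counts how many of the possible
   two-disk failures take each layout down:

      # raid_sketch.py: RAID 1+0 (stripe of mirrors) vs. RAID 0+1 (mirror of stripes)
      from itertools import combinations

      DISKS = range(6)                      # six drives, numbered 0..5

      # RAID 1+0: mirror pairs (0,1) (2,3) (4,5), striped across the pairs.
      # The array dies only if both halves of the same pair are lost.
      def dead_10(failed):
          pairs = [(0, 1), (2, 3), (4, 5)]
          return any(a in failed and b in failed for a, b in pairs)

      # RAID 0+1: stripe A = disks 0-2, stripe B = disks 3-5, mirrored.
      # One dead disk kills its whole stripe, so the array dies as soon
      # as both stripes have lost a disk.
      def dead_01(failed):
          return any(d < 3 for d in failed) and any(d >= 3 for d in failed)

      fatal_10 = sum(1 for c in combinations(DISKS, 2) if dead_10(set(c)))
      fatal_01 = sum(1 for c in combinations(DISKS, 2) if dead_01(set(c)))
      print("fatal two-disk failures: 1+0 = %d/15, 0+1 = %d/15"
            % (fatal_10, fatal_01))

   The 0+1 layout dies in three times as many of those cases, which
   is the "isn't as safe" part of the comparison.
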
3) Someone mentioned using a solid-state disk solution, like the
   one from Seek Systems.  I own a Seek array, and have this comment:
   I benchmarked the array's small-block write performance with
   128MB of write-back cache against a RAID 1+0 array with 128MB
   of write-back cache.  The latter blew the former away.  The reason,
   as far as I can tell, is that the Seek array ultimately uses
   a large RAID 5 for its back-end storage, and RAID 1+0 is much
   faster as a back-end, which is exactly what my benchmark showed.
   The Seek array is supposed to benefit from a read cache that uses
   some advanced algorithms to keep what you actually use in the
   cache, but I didn't see that benefit.
 
   Buying an SSD from someone like Quantum is generally
   prohibitively expensive compared to a good RAID 1+0, and
   nowhere near as safe.
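
   For what it's worth, the benchmark I mentioned above boils down to
   timing a lot of small writes with an fsync() after each one, since
   that is the I/O pattern qmail's queue generates.  Here's a
   hypothetical Python sketch of the same idea (filename, block size,
   and write count are all made up; it is not my original test
   harness):

      # fsync_bench.py: many small synchronous writes (illustrative only)
      import os
      import time

      PATH = "bench.dat"            # throwaway test file
      BLOCK = b"x" * 4096           # one 4KB "small block"
      N = 2000

      fd = os.open(PATH, os.O_WRONLY | os.O_CREAT, 0o600)
      t0 = time.time()
      for i in range(N):
          os.write(fd, BLOCK)
          os.fsync(fd)              # force it to stable storage, as qmail does
      os.close(fd)
      elapsed = time.time() - t0

      print("%d fsync'd 4KB writes in %.2fs (%.0f/sec)"
            % (N, elapsed, N / elapsed))
      os.unlink(PATH)

   Run it on whatever storage you're evaluating; the difference
   between back-ends shows up very quickly under that kind of load.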

John
